AI Sommelier — tailored wine pairings before LLMs were cool
Winyfy was a mobile “AI sommelier” designed before ChatGPT or Gemini existed. Behind the friendly chat interface was a custom conversational engine and an ML layer trained on wine, food, and contextual data.
Winyfy · 2017 · AI Agent



Challenge
Most people like the idea of “good wine”, but very few enjoy digging through filters, appellations, and sommelier jargon. Choosing a bottle for a dinner, date, or party is often stressful, time-constrained, and full of second-guessing.
The challenge was to design an assistant that:
- understands what the user is trying to do (occasion, food, budget, mood),
- translates that into wine parameters behind the scenes,
- surfaces only a small number of relevant, available options,
- remains playful and light rather than intimidating.
This required combining a domain-heavy knowledge base, proprietary ML models, and mobile commerce flows into a single, coherent product rather than three separate systems.
Discovery
We started with a discovery phase focused on behaviour, not on hype. Through market research, customer journeys, and user testing, we mapped:
- how people currently choose wine in supermarkets, restaurants, and online,
- where they feel most insecure (naming, labels, food pairing, price),
- how time pressure and social context influence decisions.
The key insight was that people rarely want a wine lecture. They want a confident shortcut that still feels like their choice. That shaped how we designed the assistant’s role in the interaction.
Designing the AI sommelier
From a UX perspective, Winyfy was not just a chatbot with some buttons; it was a conversational agent with a defined model of the world. We designed it to:
- ask a small number of well-chosen questions about food, occasion, guests, and budget,
- use a proprietary ML model to infer mood and tone from wording and emoji,
- map that to wine attributes such as style, dryness, region, or how adventurous the choice should be,
- narrow everything down to a small, relevant shortlist instead of a catalogue.
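The questions-to-parameters translation described above can be sketched roughly as follows. Everything here, the `WineQuery` type, the rule thresholds, the attribute names, is a hypothetical illustration of the idea, not Winyfy's actual model:

```python
# Minimal sketch: translate a few user answers into wine parameters
# behind the scenes. All rules here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class WineQuery:
    style: str        # e.g. "red", "white", "sparkling"
    dryness: str      # e.g. "dry", "off-dry"
    max_price: float
    adventurous: bool # whether to include bolder, lesser-known picks

def infer_query(food: str, occasion: str, budget: float, mood: str) -> WineQuery:
    """Map occasion, food, budget, and inferred mood to wine attributes."""
    style = "red" if food in {"steak", "lamb", "pasta"} else "white"
    dryness = "dry" if occasion in {"dinner", "date"} else "off-dry"
    adventurous = mood in {"playful", "curious"}
    return WineQuery(style=style, dryness=dryness,
                     max_price=budget, adventurous=adventurous)

query = infer_query(food="steak", occasion="date", budget=25.0, mood="playful")
print(query)  # WineQuery(style='red', dryness='dry', max_price=25.0, adventurous=True)
```

In the real product this mapping came from a trained model rather than hand-written rules; the point of the sketch is only that the user never sees wine parameters, just a few friendly questions.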
My work focused on defining the conversation flows and fallback paths for moments when the model was uncertain, balancing personality with clarity so replies felt witty but never confusing, and shaping the transition from recommendation to real action, whether that meant finding a bottle in store, ordering online, or saving it for later. We kept the UI intentionally minimal, with chat as the primary surface and only a few compact components for cards, ratings, and quick replies, so that the complexity stayed in the underlying ML and knowledge graph rather than on the screen.
Mood, personality and trust
A core part of the experience was the assistant’s personality. Winyfy could be slightly snarky at times, but never at the expense of trust.
The ML layer tried to detect user mood from language, pacing, and context, and we reflected that in microcopy:
- more direct and efficient if the user seemed rushed,
- more playful and relaxed if the conversation allowed it.
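The tone switching described above amounts to selecting a microcopy variant from pacing signals. A minimal sketch, with hypothetical signal names and thresholds (the real system inferred mood from a trained model, not two numbers):

```python
# Illustrative sketch: pick a microcopy tone from rough pacing signals.
# Thresholds and signal names are assumptions, not the production logic.
def pick_tone(words_per_message: float, seconds_between_replies: float) -> str:
    """Short, rapid-fire messages suggest a rushed user; answer efficiently."""
    rushed = words_per_message < 4 or seconds_between_replies < 2
    return "direct" if rushed else "playful"

GREETINGS = {
    "direct": "Got it. Red or white?",
    "playful": "Ooh, a dinner party! Let's find something memorable.",
}

print(GREETINGS[pick_tone(words_per_message=3, seconds_between_replies=1.5)])
```

Keeping tone as a single selectable dimension made it cheap to A/B-test copy variants in user sessions without touching the recommendation logic.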
We tested different tones in user sessions and iterated until the assistant felt like “someone you would actually ask for advice” rather than a scripted FAQ bot.
Connecting recommendations with real world availability
A recommendation is only helpful if you can act on it.
That meant tightly integrating the assistant with inventory and commerce:
- if a recommended wine was available nearby, Winyfy surfaced local stores with stock,
- if not, it checked partner online stores and offered a one-tap purchase within the app,
- premium brands could attach rewards or perks, which the assistant surfaced contextually rather than spamming.
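The local-first fallback logic above is simple to express. A minimal sketch, where the data shapes (`stock`, `catalog`, the returned action dicts) are hypothetical stand-ins for the app's actual inventory and commerce integrations:

```python
# Illustrative sketch of the availability fallback: prefer local stores
# with stock, fall back to a partner online store, else save for later.
def resolve_purchase(wine_id: str, nearby_stores: list, online_partners: list) -> dict:
    """Turn a recommendation into the next concrete action for the user."""
    local = [s for s in nearby_stores if wine_id in s["stock"]]
    if local:
        return {"action": "show_stores", "stores": [s["name"] for s in local]}
    online = [p for p in online_partners if wine_id in p["catalog"]]
    if online:
        return {"action": "one_tap_purchase", "partner": online[0]["name"]}
    return {"action": "save_for_later"}

stores = [{"name": "Corner Wines", "stock": {"w1"}}]
partners = [{"name": "ShopX", "catalog": {"w2"}}]
print(resolve_purchase("w1", stores, partners))
# {'action': 'show_stores', 'stores': ['Corner Wines']}
```

Collapsing every outcome into one predictable "next action" is what kept the purchase step from feeling like a mode switch inside the chat.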
My work here consisted of designing flows that did not interrupt the user's conversation: the transition from recommendation to purchase was short, predictable, and never required “switching modes” from chatbot to classic store.
Making it feel fun, not transactional
To keep the experience light, we gave Winyfy a visual and emotional toolkit.
The point was not “random fun”, but lowering the barrier to starting a conversation and making the assistant feel less stiff without losing credibility.
Final thoughts and impact
Even without today’s LLM infrastructure, we turned a dense, domain-heavy problem into a conversational experience that shortened the path from “I need a wine” to a confident choice, reduced the need to scan endless lists and filters, and helped partner stores and brands surface the right bottles at the right moment instead of fighting for shelf space alone.
For me, Winyfy was an early, efficient lesson in designing agentic, AI-enhanced workflows before generative AI went mainstream. It taught me to create the assistant’s mental model and states before the UI, keep conversations focused on decisions rather than information dumps, and always connect AI output to real-world actions and constraints, such as inventory, payments, and location. I still see it as one of my first explorations of what we now call AI agents, built at a time when most of the intelligence had to be assembled by hand.