AI, UX Design, Designing for AI
How do we design compelling AI experiences? Spoiler: the answer is not “slap a chat box on it and call it a day”.
We’re past the novelty stage, but nowhere near maturity. People are used to text boxes and natural language prompts, but AI is now multimodal, messy, and evolving faster than most organisations can keep up.
A few home truths:
• Shadow AI is everywhere - people quietly use tools their company hasn’t approved.
• Many models are overly flattering (nice for the ego, terrible for accuracy).
• Users trust AI too much, often without understanding its limitations.
• Most AI products today aren’t “done” - they’re experiments in public.
“Why AI?”
No. Stop. This should not be your first question. Some better ones to start with:
1. What’s your business model?
2. How well do you understand your users’ problems?
3. Where are the unmet needs, challenges or real opportunities?
Only then should you ask:
“Does AI genuinely help here?”
We know that AI isn’t cheap. Data, talent and infrastructure are costly resources. So, if a feature isn’t improving value, reducing cost, or opening new doors, it’s decoration, not strategy.
A simple pattern for building an AI strategy:
• Identify high-value user/business problems
• Define scope (internal, external, transformational)
• Assign cross-functional ownership (design, engineering, leadership, etc.)
And have a plan for when things underperform. Do not treat every AI wobble like a crisis.
Throughout my career I’ve been an advocate for designers having technical literacy. It’s helped me have more informed conversations with developers, understand technical limitations, and design something that’s just better. In my opinion, AI is no different. I’m not saying you need a PhD, but understand the basics:
• LLMs are probabilistic - they guess the next token, so responses vary.
• That means new UX responsibilities: handling errors, variability, partial correctness.
• Tokens matter — they affect cost, capability and usability. A token limit is a UX moment, not a technical footnote.
Tokens? When an AI gets a prompt, it breaks the text down into smaller ‘tokens’ through tokenization. Simple words can be single tokens; longer words are often split into smaller pieces ([un-] [breakable], for example), and even individual characters or punctuation marks can be tokens.
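If you want a feel for this, here’s a minimal sketch using OpenAI’s tiktoken library (my choice for illustration - your model’s tokenizer may split things differently):

```python
# A minimal sketch of tokenization using the tiktoken library
# (pip install tiktoken). "cl100k_base" is the encoding used by several
# recent OpenAI models; other models tokenize differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["cat", "unbreakable", "Hello, world!"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```

Counting tokens this way is also a quick sanity check against a model’s context limit - useful when, as above, that limit is a UX moment you need to design for.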
A nice observation from the course was the idea of the AI Literacy Paradox:
Low AI literacy → AI feels magical, almost human. People are very receptive.
High AI literacy → You see the limitations clearly and can get quite cynical.
Our challenge as designers is trying to sit somewhere in the middle.
Yes, LLMs are great for code. Yes, diffusion models are powering images. Yes, vision is better than ever. Fully autonomous agents, however? Not quite. LLMs just aren’t ready for full autonomy yet. Who would’ve thought! (I’m being sarcastic, if you couldn’t tell.)
That means designing for supervision, confirmation, and user control. Especially for high-stakes tasks.
When teams jump into AI, simple questions get skipped. We shouldn’t get caught up in the AI hype train without asking:
• What data is this trained on?
• How will user data be used?
• Are we mitigating bias?
• Should we use proprietary or open models?
If the data isn’t ready, the product won’t be either.
Wizard-of-Oz testing (where a human secretly simulates the AI behind the scenes) still works, but you need more participants. Variability matters - the same prompt can produce a different answer every time, and you’re testing reactions to that variability as much as the interface.
It’s less “can they click it?” and more “how do they cope with unpredictability?”.
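One cheap way to prepare for that is to collect the variability before the session. A hedged sketch, assuming the openai Python package, an API key in your environment, and an illustrative model name:

```python
# A hedged sketch: sample one prompt several times to see how much the
# answers drift before putting them in front of participants. Assumes the
# openai package (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "Summarise our refund policy in two sentences."  # illustrative prompt

for run in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative, not a recommendation
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling on - variability is the point
    )
    print(f"--- run {run + 1} ---")
    print(response.choices[0].message.content)
```

Reviewing five or ten runs like this tells you the range of outputs your participants might actually see.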
Users bring mental models from ChatGPT, Midjourney and everywhere else. If your AI behaves differently but looks similar, you’ll break trust fast.
Users don’t (usually) care how clever the model is; they care what it does when they interact with it. So, what are some ways we can improve explainability without overwhelming users?
• Scope the feature clearly - “ask me anything” is rarely a good idea, obviously
• Add small contextual explanations
• Show why users are seeing a recommendation
• Consider confidence levels if they help
• Avoid fake “reasoning traces” - they mislead more than they inform
A few simple design choices go a long way, and most of these are UX patterns you should already be familiar with. Think:
• Letting people opt in/out of data use
• Providing multiple outputs when appropriate
• Adding a clear stop button for long operations
• Keeping feedback lightweight and optional
• Making positive feedback one click
• Adding nuance to negative feedback (but don’t force it!) - see the sketch below
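To make that last feedback pattern concrete, here’s a minimal sketch of one way it could look as a data model and endpoint - assuming FastAPI, with every name illustrative rather than prescriptive:

```python
# A minimal sketch of a lightweight feedback endpoint, assuming FastAPI
# (pip install fastapi uvicorn). Every name here is illustrative.
from typing import Literal, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Feedback(BaseModel):
    response_id: str               # which AI output the feedback refers to
    rating: Literal["up", "down"]  # positive feedback stays one click
    reason: Optional[str] = None   # optional nuance after a "down" - never required

@app.post("/feedback")
def record_feedback(feedback: Feedback) -> dict:
    # A real product would persist this; the sketch just acknowledges receipt.
    return {"stored": True, "rating": feedback.rating}
```

The design choice worth copying is in the model: the reason field exists, but nothing forces the user to fill it in.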
Chat remains the dominant pattern, but it needs care. Stop mixing search and chat, and offer helpful suggested prompts. If you’re using a chat interface, make it clear whether the user is talking to a human or an AI.
You should also try to avoid showing “thinking” animations that imply logic that doesn’t exist. With AI, indeterminate progress is often all you can show. The system can’t reliably tell you “3 seconds left” or “70% done”. The temptation is to fill that time with something flashy like “reasoning traces” – step-by-step “thoughts” the model appears to be having.
Indeterminate progress indicators are fine and they’re honest. Fake transparency isn’t.
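If your model supports streaming, showing real tokens as they arrive is often the most honest progress indicator of all. A rough sketch, again assuming the openai package and an illustrative model name:

```python
# A rough sketch: stream real tokens as they arrive instead of faking a
# "thinking" animation. Assumes the openai package (pip install openai)
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative, not a recommendation
    messages=[{"role": "user", "content": "Explain tokenization in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a small delta of the response; print it immediately
    # so the user sees genuine progress rather than staged "reasoning".
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Pair this with the clear stop button from earlier and the user stays in control of a long generation.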
AI UX isn’t just “normal UX plus a model”. It’s a different beast – more probabilistic, more opaque, and more prone to hype.
The good news is that the basics still matter: understand the problem, know your users, be honest about limitations, and design for trust.
The AI bit is just another tool in the belt.