A human-centered approach to building AI solutions that actually work
Here’s a statistic that might surprise you:
70 to 85% of AI projects fail.
It sounds inflated—until you’ve lived it. Until you’ve watched something technically sound fall flat because users don’t need it, teams can’t align, or the product solves the wrong problem entirely.
Most people assume these failures are due to the tech not being “ready.” But more often than not, the problem isn’t the technology at all.
These projects usually fail for much more mundane reasons:
- Unclear objectives
- Poor integration with real workflows
- Incomplete or inaccessible data
- A lack of shared understanding across teams
In other words: the failure points aren’t technical—they’re human. And that also means the fix is human.
That’s why the very first thing I do on any AI project isn’t code-related—it’s to run a workshop. Not a theoretical brainstorm. Not a generic discovery session. A hands-on, four-part workshop designed to uncover real pain points, map the current experience, identify where AI could actually help, and surface ethical risks before they become expensive problems.
This workshop is the backbone of how I help teams move from fuzzy ambition to focused, usable, and trustworthy AI. In this post, I’ll walk through how human-centered design addresses the real reasons AI projects fail—and how this workshop gives you a foundation that actually works.
Fixing What Actually Goes Wrong
What I mean is: start with the humans.
- What do they need?
- Where do they struggle?
- What’s actually worth solving?
Human-centered design shifts the entire trajectory of an AI project. It helps you avoid common traps and build something that not only works—but matters. Let’s break down how it addresses the biggest failure points.
Unclear Objectives → Clear Outcomes
The most common trap I see?
Teams kicking off an initiative with a vague, open-ended question:
“How can we use AI?”
It’s well-intentioned—but it’s the wrong question.
Human-centered design reframes it as:
“What are the real problems people are facing in their work?”
“Which of those problems are painful enough to solve?”
“What outcomes would actually make a difference?”
I run structured pain point discovery workshops where we get specific, fast. Participants aren’t theorizing about AI—they’re talking about what frustrates them day to day. From there, we prioritize the problems based on urgency, effort, and impact. The result? Clear, grounded objectives. No hand-waving.
Poor Integration → Real-World Fit
AI rarely fails because it’s inaccurate. It fails because it doesn’t fit into the way people actually work. Experience mapping fixes that. We walk through how a process happens in the real world:
- Step by step
- Tool by tool
- Friction point by friction point
The goal here isn’t to document every detail. It’s to get a shared view of what’s happening, where the headaches are, and how things actually get done.
This map becomes your compass. It shows where AI could reduce friction—and where it’d just add complexity. And importantly, it shows how to slot AI into the existing ecosystem rather than expecting people to reinvent their workflow around it.
Tech-Only Thinking → Ethical, Usable Systems
You can have the most sophisticated model in the world, but if it’s not usable—or worse, not trustworthy—it’s going nowhere. That’s why human-centered design brings in ethics and usability early.
I don’t mean a big ethics review at the end. I mean building ethical reflection into the actual design process—layered into every conversation about automation, data use, and decision-making.
When you embed ethics from the start, you get better conversations, better design, and far fewer surprises later on.
The Four-Part Workshop That Lays the Groundwork
Here’s how it works:
- Pain Point Discovery
- Experience Mapping
- AI Solution Mapping
- Ethical Evaluation
And it’s not four separate steps—it’s one connected flow that builds momentum and clarity with each layer.
1. Pain Point Discovery: Naming What’s Actually Broken
You’d be surprised how often the real problems aren’t the ones leaders think they are. That’s why we start by creating space for teams to name their own frustrations. I pair people up and have them ask each other questions—5 to 10 minutes each. Nothing fancy. Just:
“What’s frustrating about your workflow?”
“Where do you waste the most time?”
“What’s the part of your job you dread every week?”
We collect those pain points, synthesize themes, and prioritize as a group. What’s burning? What’s background noise? What’s business-critical? By the end, we’re no longer guessing where to focus—we know.
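If it helps to make that prioritization concrete, here’s a minimal sketch of how a group’s urgency, effort, and impact ratings could be turned into a rough ranking. Every field name, weight, and example theme below is a hypothetical illustration, not part of the workshop itself; in the room, this usually happens with sticky notes and dot votes, not code.

```python
# Illustrative only: a tiny weighted-scoring helper for ranking workshop themes.
# Every field name, weight, and example theme is hypothetical.
from dataclasses import dataclass


@dataclass
class PainPoint:
    name: str
    urgency: int  # 1 (background noise) to 5 (burning)
    impact: int   # 1 (minor annoyance) to 5 (business-critical)
    effort: int   # 1 (quick fix) to 5 (major undertaking)

    def score(self) -> float:
        # Higher urgency and impact raise the score; higher effort lowers it.
        return (self.urgency + self.impact) / self.effort


themes = [
    PainPoint("Manually summarizing weekly reports", urgency=4, impact=5, effort=2),
    PainPoint("Re-entering the same data in three tools", urgency=3, impact=4, effort=3),
    PainPoint("Chasing approvals over email", urgency=5, impact=3, effort=4),
]

for theme in sorted(themes, key=lambda t: t.score(), reverse=True):
    print(f"{theme.name}: {theme.score():.1f}")
```

The point isn’t the arithmetic; it’s that the group agrees on what “burning” and “business-critical” mean before anyone talks about models.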
2. Experience Mapping: Seeing the Whole Picture
Once we know the “what,” we need to understand the “how.” Experience mapping breaks down each prioritized theme into a full user journey:
- What happens, step by step?
- What tools are in play?
- When does the pain show up?
- Who’s involved, and what decisions are being made?
- What systems are used?
This isn’t about process for process’s sake. It’s about making invisible complexity visible. Once you see the system laid out, opportunities jump out at you. It becomes obvious where AI could help—and where it might actually make things worse.
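For teams that want to keep the map alive after the session, one hypothetical way to capture it digitally is sketched below. None of these field names come from the workshop itself; they’re just an illustration of how each step can carry its tools, people, and pain points, plus room for the AI opportunities and risks that the next two parts layer on.

```python
# Illustrative only: one possible structure for capturing an experience map digitally.
# Field names and example values are hypothetical, not a prescribed schema.
from dataclasses import dataclass, field


@dataclass
class JourneyStep:
    description: str
    tools: list[str]
    people: list[str]
    pain_points: list[str] = field(default_factory=list)
    ai_opportunities: list[str] = field(default_factory=list)  # added during AI solution mapping
    risks: list[str] = field(default_factory=list)             # added during ethical evaluation


journey = [
    JourneyStep(
        description="Compile the weekly status report",
        tools=["Email", "Spreadsheet"],
        people=["Project manager"],
        pain_points=["Copy-pasting updates from five different sources"],
        ai_opportunities=["Draft a first-pass summary for human review"],
        risks=["A summary could bury a critical blocker; keep human sign-off"],
    ),
]

for step in journey:
    print(step.description, "->", step.ai_opportunities or "no AI needed here")
```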
3. AI Solution Mapping: Ideas in Context
Here’s the shift: instead of brainstorming AI ideas in a vacuum, we map them directly onto the user journey.
As we walk through the workflow, we ask:
“Where could AI realistically help here?”
“Would it eliminate a repetitive task, speed up decision-making, flag errors?”
This keeps the conversation grounded. You’re not imagining what AI could do—you’re seeing what it should do, in context.
I’ve seen this step completely change how organizations and employees feel about AI. Fear that AI will take their jobs turns into curiosity. “Oh—you mean I don’t have to manually summarize all those reports anymore?”
Exactly.
4. Ethical Evaluation: Embedded, Not Extra
For every AI opportunity we map, we also map its potential risks.
We ask:
- Could bias creep in here?
- Who might be unintentionally excluded?
- Is human oversight needed at this step?
These aren’t philosophical debates—they’re design decisions. And when they’re baked into the same map as everything else, they stay top of mind. You don’t need a 20-page ethics memo. You need the right questions in the room, early and often.
A Practical Start That Works
1. Start with Real Human Needs
If the problem doesn’t matter to people, the solution won’t matter either.
2. Treat Ethics as Design
Don’t outsource ethics to a separate group. Make it a part of your product thinking.
3. Fit Into the Flow
AI works best when you understand how it fits into or changes existing processes—not when it forces people to change everything or just use another tool.
At the End of the Day
You don’t need more AI hype.
You need a process that helps you figure out:
- What’s worth solving
- What’s feasible
- And what’s responsible
Human-centered design gives you that. It keeps your team aligned, your priorities grounded, and your solutions actually usable. The tech will always evolve. But the teams that start with people? They’ll build the things that last.
Reference: the 85% failure figure comes from a Gartner study cited in a Forbes article and has since been repeated widely across the web.
Want to bring this workshop into your org? I facilitate both virtual and in-person sessions. They typically take 4–6 hours and will save you months of rework. Drop me a note if you want to learn more.

