The MVP vs. the Learning We Needed

A few years ago, I gave a talk in front of my colleagues at our internal Agile Day and confidently outlined the tensions between UX practice and agile delivery. I had a slide about MVP (Minimum Viable Product) expectations. Another about metrics. A third about research bottlenecks. I thought I was offering solutions.

The conversations afterward told a different story. People didn't push back on my observations—they already knew these problems existed. What they wanted to talk about was why. Why did every team seem to repeat the same patterns? Why did we keep building MVPs that satisfied no one—not the users, not the stakeholders, not the teams building them?

I've been sitting with those questions for the past year, watching our experience design (XD) practitioners navigate these tensions across different product teams. And I think the answer has less to do with methodology and more to do with what our organisations reward.

When We Talk About "MVP"…

I've seen this play out repeatedly: a team gets a fixed timeline and budget. The stakeholder needs to show value before the next funding decision. The product manager wants to ship features quickly—we can always iterate later, right? And the XD practitioner is trying to carve out time for discovery that no one believes the team can afford.

Everyone uses the same term—"MVP"—but they're describing completely different things.

The stakeholder is thinking in outputs: how many features can we deliver to justify the investment? The PM is thinking in delivery velocity: how quickly can we ship something, anything, to maintain momentum? The XD practitioner is thinking in learning: what's the smallest thing we could build to test our riskiest assumptions?

These different definitions rarely get reconciled. So teams compromise, which in practice means building a feature set that's simultaneously too much and too little. Too much surface area to do any of it well. Too little depth to actually solve user problems. The classic "MVP" that's minimal in quality rather than scope.

And then comes that phrase: "We can always iterate later."

Later never comes.

The Symptom vs. The System

Here's what I've observed watching our XD practitioners work across multiple product teams: when stakeholders push for broad feature sets before doing research, they're rarely being unreasonable. They're responding rationally to irrational constraints.

Short planning cycles that demand visible progress every quarter. Approval processes that require detailed scope before exploration. Success metrics tied to delivery dates rather than user outcomes. These aren't problems that better UX practice can solve. They're symptoms of how we structure digital product work.

The desire for a "quick MVP" usually signals something deeper: we don't have the organisational conditions that allow teams to learn their way to better outcomes. We don't have:

  • Permission to spend time understanding before building
  • Funding models that account for discovery and iteration
  • Leadership that values validated learning over visible progress
  • Metrics that connect user outcomes to business results

When I gave that presentation, I suggested teams "be clear what your MVP is for" and "prioritise for learning about user outcomes." That's not wrong, exactly. But it assumes teams have the agency to make those choices.

From what I've seen, most don't.

What Changed My Thinking

Over the past few years, I've watched something interesting emerge as we experimented with our XD pod program. Unlike practitioners who get pulled reactively into different projects that need features built, these pods are designed to have explicit working agreements with their domain leaders. And that changes everything.

The domain leader who sponsors a pod provides cover for the team to do continuous discovery. Not as a luxury, but as part of how the work gets done. Instead of constantly justifying why they need time for research, the pod can focus on understanding the problem space deeply and planning the roadmap holistically with their stakeholders over time.

I've seen one pod spend weeks just understanding workflow patterns and employee sentiment across different user groups before proposing any solutions. Another worked closely with their product managers to bring stakeholders into user conversations, building shared understanding rather than handing over research reports. These aren't special projects with unusually enlightened stakeholders. The difference is structural: the working agreement creates space for learning to happen.

What strikes me is how this shifts the dynamic. When an XD practitioner is embedded in a product team with quarterly delivery targets, every day spent on discovery feels like a delay. When that same practitioner is in a pod with a mandate to understand and improve a domain over time, discovery becomes the foundation everything else builds on.

The leadership commitment isn't just philosophical; it's organisational support and protection. It's the domain leader saying to other stakeholders: "We're investing in understanding this properly because the cost of getting it wrong is higher than the cost of taking time to learn."

The Questions I'm Sitting With

I don't have a tidy framework for this. But here's what I find myself asking when I see XD practitioners struggling to make space for research:

What does your funding model reward?

If teams only get budget by promising features, they'll optimise for feature delivery. If they get budget by demonstrating learning, they'll optimise for learning. The incentives shape the behaviour.

Who defines the success criteria?

When "success" means shipping on a specific date, every research activity becomes a trade-off. When "success" means understanding well enough to make confident decisions, research becomes essential infrastructure. But individual practitioners can't change those definitions alone—it requires leadership alignment.

What organisational structure enables sustained focus?

I've noticed our most effective XD work happens when practitioners aren't being pulled across multiple urgent requests. The pod model creates boundaries that protect the team's ability to build deep domain knowledge. But that only works when leaders actively maintain those boundaries.

How do we make learning visible and valuable?

The pods that gain traction are the ones that help their stakeholders see what they're learning in ways that inform decisions. Rather than just producing research reports, they focus on building shared understanding that changes what the team chooses to build. But that requires both strong facilitation from XD practitioners and genuine curiosity from stakeholders.

I don't think these are questions individual practitioners can answer alone. They require leadership to examine how we fund, staff, and measure our work. But naming them feels important. Because if we keep treating these as UX practice problems rather than organisational design problems, we'll keep getting the same results.

What This Means for Practice

Watching this unfold, I'm increasingly convinced that the "UX in agile" problem is actually an organisational design problem. The practitioners I see thriving aren't necessarily the most skilled designers—they're the ones working in contexts that allow them to practise well.

That's both sobering and hopeful. Sobering because individual craft and advocacy only go so far when the structure works against you. Hopeful because it suggests we can actually engineer better conditions for this work.

The pod model isn't perfect, and it's not the only answer. But it demonstrates something important: when leadership explicitly creates space for continuous learning and provides organisational cover for it, XD practitioners can focus on building understanding rather than constantly fighting for permission to understand.

The question isn't whether teams should do discovery. The question is whether we're willing to structure work in ways that make discovery possible.


This post grew out of a presentation I gave at our 2021 Agile Day, but the thinking has evolved considerably since then. I'm still working through what it means to create organisational conditions that support learning-oriented practice. If you're wrestling with similar challenges, I'd be curious to hear what you're observing.