Episode Summary
Prototyping does not stop once something exists. In this second part of their prototyping series, Justin Davis and Greg Ross-Munro explore how service firms should gather feedback, interpret it correctly, and evolve prototypes toward real products without overbuilding too early. They share practical guidance on finding the right people to test with, running lightweight feedback sessions, and focusing on mental models and problem understanding rather than cosmetic changes.
The episode also digs into how fidelity should increase as certainty increases, why creative destruction is a healthy part of the process, and how AI tools are reshaping the boundary between prototype and production. Justin and Greg close with a simple, repeatable prototyping playbook that helps teams decide what to build themselves, when to involve experts, and when it is smarter to kill an idea early rather than scale the wrong thing.
Episode Notes
- Why feedback is the most important part of prototyping
- How casual user testing often beats formal research labs
- Matching feedback participants to your real target users
- Why most prototypes fail due to mismatched mental models
- How to avoid “faster horse” feedback
- Asking better questions: what is wrong and what is missing
- Increasing fidelity only as certainty increases
- Creative destruction as a normal part of product development
- How AI tools blur the line between prototype and product
- The new risks around security, infrastructure, and scalability
- When teams should prototype on their own vs. hire experts
- A simple end-to-end prototyping playbook for service firms
- Knowing when to kill a prototype and move on
Episode Transcript
Prototyping With Feedback: When to Test, What to Ask, and When to Build for Real
Justin Davis:
Welcome in to Leap to Scale. This is the podcast that helps service companies turn their business into more than a service company. More than just trading dollars for hours: using technology to turn your services into products that can scale without adding headcount, and using AI and automation to make your internal processes more efficient so you can lower your costs and increase your margins.
My name is Justin Davis. I’m the Vice President of User Experience over at Sourcetoad, and I’m here joined by my friend Greg. Greg, good to see you.
Greg Ross-Munro:
Nice to see you too, Justin. Last time we spoke, we talked about prototyping and how to be quick about building prototypes, how to pick the right level of fidelity, and even using spreadsheets as phase one of a prototype. We also talked about how AI is changing the prototyping conversation.
One thing we touched on that I think we should dive deeper into is feedback. How do you get feedback on a prototype, and why is it important? As a UX guy, you’ve done a lot of user testing and user interviews, right?
Justin Davis:
Yeah. Definitely in the high hundreds of sessions, if not more. Each of those was usually a 60-minute conversation, so that's hundreds of hours spent sitting and watching people use products.
Greg Ross-Munro:
Way more than I have. I’ve mostly done hallway testing where you just grab somebody and say, “Hey, can you look at this?”
I did one interesting session in the travel tech world. I was in an airport in France and had designed a TV system and remote control for a travel company. I happened to see a woman with a bag tag for that company, so I pulled up a chair next to her and said:
“Hey, this is going to sound weird, but I build software for this travel company. Would you mind testing this for me?”
She was probably in her 70s. We talked for a while, and then I handed her the remote and just watched her use it without saying a word.
I learned more in those 15 minutes at Charles de Gaulle airport than I could have learned by iterating on it hundreds of times in my office.
So how do you approach feedback testing, and who should you do it with?
Justin Davis:
I tell people all the time not to overcomplicate this. People think they need an eye-tracking lab with a one-way mirror and observers behind glass.
But honestly, do exactly what you did. Go sit in a coffee shop and say, “I’ll buy you a coffee if you’ll talk with me for 20 minutes about what I’m building.”
That’s enough.
The reason feedback matters is because prototyping is fundamentally about testing assumptions. You have an idea about a problem and how it might be solved. What you’re trying to find out is whether you’re right.
And most of the time, you’re wrong. Hopefully you’re wrong in the right direction.
So the goal is to get your idea in front of people as quickly as possible so you can learn.
That’s why I’m a huge fan of telling everybody what you’re building. Nobody is going to steal your idea. Talk about it.
When you’re getting feedback, the first thing is making sure you’re talking to the right people. You want someone who matches your target audience. Not physically, but someone who thinks and behaves like your future user.
Your airport story is a perfect example. You identified somebody who fit the profile of your target customer and got direct feedback from them.
Sometimes that’s an internal stakeholder if you’re improving internal processes. Other times it’s an external customer if you’re productizing a service.
One of the best things you can do is take a new productized version of your service back to existing customers and ask them what they think.
Greg Ross-Munro:
I have a question then.
There’s that famous Henry Ford quote: “If I had asked people what they wanted, they would have said a faster horse.”
Whether he actually said it or not, the point is valid.
When you’re showing someone a prototype, they can’t see the full vision in your head. Especially when you’re working iteratively.
So how do you avoid getting “faster horse” feedback? How do you get the kind of feedback you actually need?
Justin Davis:
There are two guidelines I’d give.
First: stay focused on the hole, not the drill. People don't want a drill; they want a hole in the wall. Focus on the outcome, not the tool being used to get there.
This is hard because most of us are problem solvers, and problem solvers latch onto solutions. But what matters is understanding the actual pain point, not becoming emotionally attached to a particular implementation.
When you’re talking to users, you’re not necessarily trying to get them to say, “Yes, this solves my problem.”
What you’re really trying to do is understand how they think about the problem so you can verify whether your mental model matches theirs.
Greg Ross-Munro:
So user feedback shouldn’t just be, “Move this button over here.”
You’re trying to understand how they think, how they work, and how they see their own job or process so you can reflect that back into the product.
Justin Davis:
Exactly.
Most products don’t fail because the button was ugly or in the wrong place.
They fail because the creator’s mental model of the problem doesn’t match the user’s mental model.
At the foundation, they see the world differently.
I read a blog post today called Your Data Model Is Your Destiny. It talked about how the way you organize and think about concepts defines the product itself.
That same principle applies here.
You’re not validating fonts and colors early on. You’re validating whether your understanding of the user’s world is actually correct.
And theoretically, if everyone had perfect context, they’d probably converge on similar solutions anyway.
So the whole point of feedback is making sure you and your users share the same understanding of the problem.
Justin Davis:
The second guideline is that feedback should happen in rounds of increasing fidelity.
I say all the time that the fidelity of your prototype should match the certainty of your thinking.
If you’re still exploring an idea, sketch it on a napkin. Don’t worry about fonts and colors yet.
As you gain confidence, you layer in more detail.
An artist doesn’t start with tiny details. They build up layers over time.
And every layer deserves its own round of feedback.
At first you’re validating concepts. Much later you might debate whether a button should be green or blue.
But that’s round ten, not round one.
Greg Ross-Munro:
When you talk about increasing fidelity, does that mean you’re building multiple versions of the prototype?
What exactly do you mean by fidelity?
Justin Davis:
Fidelity is basically the level of “finished-ness.”
Honestly, the very first fidelity is just a conversation.
“Hey, I have this idea. What do you think?”
That’s a mental sketch.
If someone immediately says, “That would never work because of X,” then great. You just saved yourself hours of work.
So it starts with conversation. Then maybe a sketch. Then a wireframe in Figma. Then maybe an automation in Make.com. Then eventually a real web app.
At every stage you’re stepping up in fidelity.
And often, each stage involves creative destruction. You burn the old version down and rebuild based on what you’ve learned.
You don’t literally turn a napkin sketch into production code. You rebuild the next version informed by the previous one.
The real value of each iteration isn’t the artifact itself. It’s the certainty you’ve gained.
Greg Ross-Munro:
That leads to another question.
At what point does a prototype become a product?
With software, that line gets blurry.
Justin Davis:
I actually think the rules are changing.
Traditionally, we thought of prototypes as disposable. But with AI tools like Replit, Lovable, and vibe coding platforms, prototypes are increasingly evolving directly into products.
So maybe prototype versus product isn’t a hard boundary anymore. Maybe it’s just part of a lifecycle.
Some things completely rebuild themselves. Other things evolve gradually.
The real question becomes: what has to happen for this thing to survive in the real world?
Historically, a prototype might be used by one or two people, while a product is used by thousands.
Now those lines are blending.
That said, there still tends to be a leap between “cool prototype” and “real production system.”
And today, that leap is less about writing code and more about infrastructure:
- Is it secure?
- Is it maintainable?
- Can it scale?
- Where does the data live?
- How is it deployed?
Those are the things that still separate prototypes from production systems.
Greg Ross-Munro:
I met with a CPA firm recently that wants to build a reporting portal for clients.
Honestly, they could probably prototype a lot of it themselves. They could sketch out screens, build fake data views, and test concepts with customers.
But eventually, because they’re handling financial information, they need something bulletproof.
So where do you draw the line between doing it yourself versus bringing in professionals?
Justin Davis:
If you’re building small internal tools that aren’t sensitive or customer-facing, experimentation is great.
Build things. Test ideas. Learn.
But once you’re touching customer data, security, or mission-critical systems, you probably want experienced people involved earlier.
Not because prototyping is bad, but because you don’t want to accidentally build yourself into a corner.
You don’t want to wake up six months later realizing:
“Uh oh, this prototype became critical infrastructure and now migration is going to take months.”
Having someone alongside you who understands the path from prototype to production can help avoid that.
Greg Ross-Munro:
I think we’ve covered most of it.
The big takeaway for me is:
Don’t overthink feedback. You’re trying to figure out whether you’re moving in the right direction, not whether the button should move three pixels to the left.
And prototypes aren’t necessarily throwaway artifacts anymore. They’re increasingly part of the actual product lifecycle.
If you’re building something complex or sensitive, bring in experienced people early. But otherwise, you can probably do more yourself than you think.
So to close us out, if you had to give a quick playbook for prototyping, what would it be?
Justin Davis:
Start with one thing you want to prove.
One assumption. One hypothesis.
“I bet we can increase this.”
“I bet we can reduce that.”
“I bet we can make money this way.”
Then choose the lightest possible way to test it.
Start with a conversation. Then a sketch. Then a spreadsheet. Then an app.
Don’t jump to full fidelity too quickly.
Test early and often. Share your ideas with people. Talk about them. Sketch them out.
At the beginning, even two or three people giving feedback is enough.
Measure real outcomes. Make sure the thing is actually improving what you intended to improve.
And then decide:
- Keep it
- Kill it
- Reinvent it
- Or turn it into something bigger
Greg Ross-Munro:
It hurts to kill projects, but sometimes the best money you’ll ever spend is learning that something shouldn’t exist.
That’s much cheaper than building something nobody uses.
Justin Davis:
Exactly.
Sometimes you just have to drill a bunch of small holes looking for oil before you drill the big well.
We’ve been doing this forever.
Greg Ross-Munro:
Well, thank you, Justin. It’s Friday again.
Justin Davis:
It absolutely is. And I know you’re about to go get beat up, so enjoy that.
Greg Ross-Munro:
Judo. Good times.
Thanks again for talking prototyping, and thanks everybody for listening. We’ll see you next time.
Justin Davis:
All right. We’ll see you guys.
Greg Ross-Munro:
Bye.
