Every non-technical founder has been there: Your developer is explaining why something is going to take three weeks instead of three days, using terms that sound like a foreign language. You nod along, wondering if you’re being taken for a ride or if this is really necessary.
In this guide, I’m going to try to cut through the developer-speak and give you what you really need: practical tools to help you make informed decisions about your product’s quality, without needing to become a developer yourself.
Quality Basics in Plain Language
I’m a big fan of philosophy. One of my favourite books of all time is Zen and the Art of Motorcycle Maintenance, in which the author Robert M. Pirsig attempts to define what quality actually is:
Something you know when you see it, even if you can’t fully explain it.
Those of us dealing with the daily realities of trying to ship a product (like founders, for example) need to get a little more concrete about what our definitions of quality are. You need to actually be able to explain it.
A dear friend (Jerry Weinberg) once told me that “quality is value to some person”, so when thinking about how to define quality for a given scenario, that’s usually where I’ll start: value.
Quality is value to some person.
In the context of a product or service, I’ll be thinking about what value is being delivered, how it’s delivered, and what risks may threaten the delivery of that value to the person who cares about it.
Threats to value (risks) are going to differ depending on the context in which you’re working. A free consumer web app is going to have a different risk profile from one that takes payments, and different risks again from financial or medical software. Risk profiles may also differ depending on the technologies used to build the product, versus how the product is delivered (via the web, through an app store, or by some other mechanism), versus where it’s ultimately hosted (in the cloud, on a PC, or on a mobile device).
What I’m getting at here is that it can be quite tricky to pin down what quality actually is for your specific context, since it’s liable to change depending on a number of variables:
- Who your audience is
- The type of product you’re offering to them, and the business model you’re using to do so
- How the product is intended to be used, and under what conditions
- What the product is made from, and how it’s built and delivered
- Regulatory or legal considerations
- How mature the product is and under what constraints it’s being created
If you find yourself considering what quality means for your specific product, you’ll likely find it helpful to think in terms of qualities: traits you want your product to have, to make it more appealing to your customers and ensure they get the value you’re seeking to deliver. Qualities such as the ones in the list below:
- Usability — the provision of a pleasurable user experience
- Reliability — the ability to use the product without faults or errors
- Scalability — the product keeps working at the speed customers may reasonably expect, even as usage grows
- Security — the product can be used in a way which protects customers (and the business owner) from bad actors who may wish to steal time, data or money
- Maintainability, deployability, installability [sic], testability, visibility — technical qualities relating to how easy (or not) the product is to develop, maintain, host, monitor and repair
Believe it or not, there’s an entire industry devoted to the craft of identifying threats to value. Having worked hands-on in that industry (software testing services and products) for the best part of two decades, I’ve supported, consulted on and delivered countless quality definitions for all kinds of products, and they’ve always differed depending on the context and the constraints, much as I’ve described above.
Over time, though, I developed some starter packs for these discussions: checklists which could jump-start the process for any product or industry, so we didn’t have to start from scratch each time.
Simple templates for any product
Generally speaking, I’d prefer to have a conversation around what quality looks like (what’s valuable to the people who care), but that’s not always possible. When it is, I’ll frame the discussion using sticky notes or, my personal preference, a mind map. When it’s not, carrying out an assessment using checklists like the ones below can be super helpful.
Core Quality Assessment Templates
1. Quick Quality Scan (10-Minute Assessment)
User Impact Score – Score each 1-5 (1=Poor, 5=Excellent):
[ ] Core function reliability: ___
[ ] Error frequency: ___
[ ] Performance speed: ___
[ ] Data accuracy: ___
[ ] Interface clarity: ___
Total: ___ /25 (Action needed if below 20; see the scoring sketch after this template)
Critical Risk Check – Flag any that apply:
[ ] Security vulnerabilities
[ ] Data loss potential
[ ] Payment issues
[ ] Legal/compliance gaps
[ ] Major user complaints
Any flags require immediate attention.
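If you’d like to make the scan repeatable, the scoring and flagging above translate directly into a few lines of code. Here’s a minimal sketch in Python; the scores and flags are invented for illustration, so substitute your own:

```python
# Hypothetical Quick Quality Scan results; replace with your own 1-5 ratings.
scores = {
    "core_function_reliability": 4,
    "error_frequency": 3,
    "performance_speed": 5,
    "data_accuracy": 4,
    "interface_clarity": 3,
}

total = sum(scores.values())
print(f"Quick Quality Scan total: {total}/25")
if total < 20:
    print("Action needed: start with the lowest-scoring areas.")

# Critical risk check: any True value demands immediate attention.
risk_flags = {
    "security_vulnerabilities": False,
    "data_loss_potential": False,
    "payment_issues": True,
    "legal_compliance_gaps": False,
    "major_user_complaints": False,
}

flagged = [name for name, raised in risk_flags.items() if raised]
if flagged:
    print("Immediate attention required:", ", ".join(flagged))
```

Run it daily or weekly and keep the output somewhere visible; the trend matters more than any single score.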
2. Feature Readiness Checklist
Minimum Quality Gates – Must pass all:
[ ] Core function works consistently
[ ] Basic error handling exists
[ ] Performance meets minimum threshold
[ ] No data corruption risks
[ ] Basic security in place
[ ] Accessible to target users
[ ] Can be supported by team
Nice-to-Have Quality Aspects – Track progress:
[ ] Comprehensive error handling
[ ] Performance optimization
[ ] Enhanced security features
[ ] Polished user interface
[ ] Extended platform support
[ ] Advanced monitoring
[ ] Complete documentation
3. Quality Debt Tracker
Current Issues – For each issue:
Severity (High/Medium/Low): ___
User Impact (1-5): ___
Fix Complexity (1-5): ___
Business Cost (1-5): ___
Priority Score = (User Impact × Business Cost) / Fix Complexity (see the worked example after this template)
Technical Debt Categories – Track by area:
[ ] Code quality issues
[ ] Testing gaps
[ ] Security concerns
[ ] Performance problems
[ ] UX inconsistencies
[ ] Documentation gaps
[ ] Infrastructure needs
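To make the Priority Score formula above concrete, here’s a small worked example in Python. The two issues are hypothetical, chosen purely to show how the numbers play out:

```python
def priority_score(user_impact: int, business_cost: int, fix_complexity: int) -> float:
    """Priority Score = (User Impact x Business Cost) / Fix Complexity.

    All inputs are on a 1-5 scale; a higher score means "fix this sooner".
    """
    return (user_impact * business_cost) / fix_complexity

# A checkout bug: high user impact (5), high business cost (4), moderately hard to fix (3).
print(round(priority_score(5, 4, 3), 2))  # 6.67: fix first

# A cosmetic glitch: low impact (2), low cost (1), trivial to fix (1).
print(round(priority_score(2, 1, 1), 2))  # 2.0: can wait
```

Note how the formula behaves: an impactful issue that’s very complex to fix gets pushed down the list, which is usually the right instinct for a team with limited time and resources.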
4. Release Quality Verification
Pre-Release Checklist – Must complete:
[ ] Core features tested
[ ] Regression testing done
[ ] Performance verified
[ ] Security checked
[ ] Data integrity confirmed
[ ] Support team briefed
[ ] Rollback plan ready
Post-Release Monitoring – First 24 hours:
[ ] Error rate normal
[ ] Performance stable
[ ] User feedback positive
[ ] Support load manageable
[ ] Systems stable
[ ] Data flowing correctly
5. Quality Metrics Dashboard
Key Metrics to Track – Weekly tracking:
Error rate: ___
Performance score: ___
User satisfaction: ___
Support tickets: ___
Technical debt score: ___
Security status: ___
Warning Thresholds – React if:
Error rate > 1%
Performance drop > 20%
Satisfaction < 4/5
Tickets up > 50%
Debt score > 7/10
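If your team already records these numbers somewhere machine-readable, the warning thresholds translate into a simple weekly check. Here’s a minimal sketch, with hypothetical metric names and values standing in for whatever your monitoring actually reports:

```python
# Hypothetical weekly metrics; wire these up to your real monitoring sources.
metrics = {
    "error_rate": 0.012,       # fraction of requests failing
    "performance_drop": 0.25,  # fractional slowdown versus baseline
    "satisfaction": 4.2,       # average rating out of 5
    "ticket_increase": 0.10,   # week-on-week change in support tickets
    "debt_score": 6,           # technical debt score out of 10
}

# The warning thresholds from the dashboard above.
warnings = []
if metrics["error_rate"] > 0.01:
    warnings.append("Error rate above 1%")
if metrics["performance_drop"] > 0.20:
    warnings.append("Performance dropped more than 20%")
if metrics["satisfaction"] < 4.0:
    warnings.append("Satisfaction below 4/5")
if metrics["ticket_increase"] > 0.50:
    warnings.append("Support tickets up more than 50%")
if metrics["debt_score"] > 7:
    warnings.append("Technical debt score above 7/10")

for message in warnings or ["All metrics within thresholds"]:
    print(message)
```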
You can use these checklists to assess, and subsequently track, the quality of your product at different stages of your development and release cycles, and at different levels of detail. Here’s how:
- Start with the Quick Quality Scan. Use it daily or weekly for a rapid diagnosis of your current level of quality. It’s great for non-technical status checks, and it helps spot issues early.
- Use the Feature Readiness Checklist when you’re preparing new releases, evaluating feature completeness and/or planning technical work.
- Apply the Quality Debt Tracker to prioritise improvements, make technical debt visible & justify quality investments.
- The Release Quality Verification helps ensure systematic pre-release checks and structured post-release monitoring, and provides clear go/no-go criteria for decision making.
- Finally, the Quality Metrics Dashboard helps you track trends over time, identify metrics in decline, and communicate status to stakeholders.
A key stakeholder group you’ll need to communicate with is your development team. Talking with developers about technical issues can be scary for anyone. Let’s take a look at some ways you can speak with them about the things that are important to you, without sounding like you don’t actually know what you’re talking about.
How to sound smart when talking to developers
Honestly, trying to sound smart is entirely the wrong play here. I’ve spent much of my career talking with developers, and even when I do know what I’m talking about and could genuinely add value to a discussion, I prefer to lead the conversation via probing questions instead. Any time I can motivate a developer to go back and revisit their assumptions, or what they otherwise think they know about a problem, by asking an insightful question about it, I’ll take that as a win.
So, how do you go about identifying those questions in the first place, for a given release or feature?
Well, I think there are a few heuristics (or rules of thumb) you can call on when having these kinds of conversations. Let’s explore some of them, keeping two caveats in mind: a) some of these questions will be more or less appropriate depending on the specifics of the discussion, and b) they’re by no means exhaustive. The suggestions below are intended as a jumping-off point, there to inspire you and to help you generate your own questions and ideas.
- The User Journey Heuristic
- “Can you walk me through how you think a typical user would experience this…”
- “What happens if the user does X instead of Y?”
- “Where might our users get confused or stuck?”
- “How might this feature work for someone who’s never used our product before?”
- The Edge Case Explorer
- “What’s the worst thing that could happen here?”
- “How does this feature behave when the network is slow/down?”
- “What happens if we get 100x more users than expected?”
- “How will we handle failures or degradations?”
- The Simplicity Seeker
- “Could we solve this problem in a simpler way?”
- “What assumptions are we making that might not be true?”
- “If we had to ship this tomorrow, what’s the minimal version that would work?”
- “Which parts of this could we remove without losing core value?”
- The Future Proofer
- “How hard would it be to change this in the future?”
- “What might make us regret this decision in six months, or a couple of years down the road?”
- “How will this decision affect our ability to scale the product or business in the future?”
- “Are we working ourselves into a corner by making this decision?”
- The Quality Investigator
- “How will we know if this feature is working correctly?”
- “What kinds of errors should we expect?”
- “How can we make problems visible when they occur?”
- “What kinds of monitoring or alerts do we need to support this feature?”
- The Integration Detective
- “How does this interact with our existing systems?”
- “What other parts of the system might be affected?”
- “Are there any dependencies we need to consider?”
- “What happens if service X goes down?”
- The Risk Revealer
- “What will keep you up at night about this design or solution?”
- “Where do you think we may run into problems?”
- “What trade-offs are we making and have we considered alternatives?”
- “What technical debt might we be creating with this approach?”
- The Context Seeker
- “Help me understand why you chose this approach…”
- “What other solutions did you consider?”
- “What constraints are we working with?”
- “What similar problems have you solved before?”
If you’re looking at the list above and thinking “there seems to be a lack of data in the questions and answers…” — I agree! You should absolutely be encouraging your developers to seek out or otherwise utilise data to inform the decision making process when devising technical solutions. Here are some more [bonus!] heuristics, which can be used to steer the discussion along exactly those lines:
- The Data Demander
- “What metrics will tell us if this feature has been successful?”
- “How are we measuring X right now?”
- “What’s our baseline for comparison?”
- “What metrics can we use to test this approach?”
- The Usage Pattern Explorer
- “What does our usage data tell us about this problem?”
- “Which users are most affected by this issue?”
- “Do we have data on how often this scenario occurs?”
- “What patterns do we see in our logs?”
- The Performance Profiler
- “Do we have metrics for the current response times?”
- “What does the resource (memory/CPU/IO) utilisation profile look like?”
- “How many database queries does this generate?”
- The Impact Quantifier
- “How many users/transactions does this affect?”
- “What’s the business cost of this issue?”
- “Can we estimate the <business function(s)> hours saved/spent?”
- “What is the performance improvement in real numbers?”
- The Trend Tracker
- “How has this metric changed over time?”
- “What growth rate do we need to support?”
- “Are there seasonal or other historic patterns we should consider?”
- “What does our historic data tell us about future needs?”
- The Resource Reality-Checker
- “What’s the current load on our systems?”
- “How much headroom do we have?”
- “What does our capacity planning tell us?”
- “Do we have data on resource utilisation?”
What you may also have observed as you were scanning the heuristics above is that they frame the (hypothetical) conversation through different lenses, presenting different ways of looking at it which lend themselves to different personas who may be involved in the product development process:
- The Product Manager persona, taking the “how do we get maximum value from this feature?” perspective.
- The UX Designer persona, taking the “how will users interact with this feature?” perspective.
- The Tester persona, taking the “how will we test that this feature works correctly?” perspective.
- The Business Owner or Sponsor persona, taking the “how can we minimise costs and maximise profits?” perspective.
- The DevOps Engineer persona, taking the “what’s the impact on our infrastructure?” perspective.
Finishing up then, and to restate my earlier point: the questions above aren’t intended to be used as a checklist for conversations with your engineers. I doubt they’d appreciate that! Rather, the heuristics and example questions should be used as prompts for conversations you may wish to have if you’re not fully confident in your ability to hold a technical discussion with your engineer(s), but still want to make sure you’ve covered the necessary bases before they go full steam ahead with actually building something, taking up valuable time and resources in the process.
The questions will help you ground your discussions in data rather than assumptions; identify metrics or measurements you may be missing and create measurable success criteria; make trade-offs more explicit; and generally move you in the direction of a more evidence-based development culture.
Let’s cut to the chase though.
What quality issues actually matter?
Or, to put it another way: What quality issues do you really need to concern yourself with, as a non-technical founder?
Taking into account your limitations on time and resources (which all founders have, to a greater or lesser degree), you’re going to want to home in on the most important things. In my experience, those are issues which directly impact your ability to:
- Deliver value to your users or customers,
- Keep your business running smoothly, and
- Scale without breaking.
Putting in place measurements and leading indicators for revenue-threatening issues which may prevent users from paying you, for features which have a direct bearing on user retention or which drive away potential customers, and for performance issues affecting core functionality will go a long way towards keeping your business on track.
Additionally, you need to maintain a relationship of trust with your customers. That means staying on top of data security and integrity, uptime, and SLAs relating to other core (and perhaps somewhat mundane) customer-facing or customer-impacting activities like payment processing.
Finally, and keeping the longer term view in mind, you need to keep tabs on growth-limiting and cost-multiplying factors: scalability bottlenecks, technical debt, infrastructure limitations, bugs or performance issues that generate excessive demands on your support operations, or take disproportionate development time and effort to fix.
Everything else? It’s probably negotiable. In the words of Voltaire, perfect is the enemy of good, and as a founder, your job isn’t to build a perfect product; it’s to build something valuable enough that customers will pay for it, and reliable enough that they’ll keep using it.
Quality should enable growth, not hinder it.
Use the templates and conversation heuristics I’ve presented above to stay focused on these critical quality issues. Your goal as a founder is to identify the sweet spot where quality supports growth rather than hinders it, and to stay laser-focused on your business objectives.
If I can help with that, feel free to reach out.