Category: Product Management

  • Product Quality for Non-Technical Founders


    Every non-technical founder has been there: Your developer is explaining why something is going to take three weeks instead of three days, using terms that sound like a foreign language. You nod along, wondering if you’re being taken for a ride or if this is really necessary.

In this guide, I’m going to try to cut through the developer-speak and give you what you really need: practical tools to help you make informed decisions about your product’s quality, without having to become a developer yourself.

    Quality Basics in Plain Language

I’m a big fan of philosophy. One of my favourite books of all time is Zen and the Art of Motorcycle Maintenance, in which the author, Robert M. Pirsig, attempts to define what quality actually is:

    Something you know when you see it, even if you can’t fully explain it. 

    Those of us dealing with the daily realities of trying to ship a product (like founders, for example) need to get a little more concrete about what our definitions of quality are. You need to actually be able to explain it. 

A dear friend (Jerry Weinberg) once told me that “quality is value to some person”, so when thinking about how to define quality for a given scenario, that’s usually where I’ll start: value.

    Quality is value to some person.

    In the context of a product, or a service, I’ll be thinking about what value is being delivered, and how, and what risks may threaten the delivery of value to the person who cares about it. 

Threats to value (risks) differ depending on the context in which you’re working. A free-to-consumer web app has a different risk profile from something that takes payments, and different risks again from financial or medical software. Risk profiles may also vary depending on the technologies used to build the product (software, in these examples), how the product is delivered (via the web, an app store, or some other mechanism), and where it’s ultimately hosted (in the cloud, on a PC, or on a mobile device).

    What I’m getting at here is that it can be quite tricky to pin down what quality actually is for your specific context, since it’s liable to change depending on a number of variables:

    • Who your audience is
    • The type of product you’re offering to them, and the business model you’re using to do so
    • How the product is intended to be used, and under what conditions
    • How and what the product is made from, and delivered
    • Regulatory or legal considerations
    • How mature the product is and under what constraints it’s being created

If you find yourself considering what quality means for your specific product, you’ll likely find it helpful to think in terms of qualities: traits you want your product to have to make it more appealing to your customers and ensure they get the value you’re seeking to deliver. Qualities such as the ones in the list below:

    • Usability — the provision of a pleasurable user experience
    • Reliability — the ability to use the product without faults or errors
    • Scalability — the product can be used at the speed customers may reasonably expect
    • Security — the product can be used in a way which protects customers (and the business owner) from bad actors who may wish to steal time, data or money
    • Maintainability, deployability, installability [sic], testability, visibility — technical qualities relating to how easy (or not) the product is to develop, maintain, host, monitor and repair

Believe it or not, there’s an entire industry devoted to the craft of identifying threats to value. Having worked hands-on in that industry (software testing services and products) for the best part of two decades, I’ve supported, consulted on and delivered countless quality definitions for all kinds of products, and they’ve always differed depending on the context and the constraints, much as I’ve described above.

Over time though, I developed some starter packs for these discussions: checklists which could be applied to jump-start the process for any product or industry, so we didn’t have to start from scratch each time.

    Simple templates for any product

    Generally speaking, I’d prefer to have a conversation around what quality looks like (what’s valuable to the people who care), but that’s not always possible. When it is, I’ll frame the discussion using sticky notes, or my personal preference, a mind map. When it’s not, carrying out an assessment using a checklist like the one below can be super helpful.

    Core Quality Assessment Templates

    1. Quick Quality Scan (10-Minute Assessment)

    User Impact Score – Score each 1-5 (1=Poor, 5=Excellent):

    [ ] Core function reliability: ___

    [ ] Error frequency: ___

    [ ] Performance speed: ___

    [ ] Data accuracy: ___

    [ ] Interface clarity: ___

    Total: ___ /25 (Action needed if below 20)

    Critical Risk Check – Flag any that apply:

    [ ] Security vulnerabilities

    [ ] Data loss potential

    [ ] Payment issues

    [ ] Legal/compliance gaps

    [ ] Major user complaints

    Any flags require immediate attention.
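For the spreadsheet-inclined, the Quick Quality Scan above can be sketched as a small function. The category names and the 20/25 action threshold come straight from the template; the function name and data shapes are my own illustration, not a prescribed implementation.

```python
# Categories and thresholds taken from the Quick Quality Scan template above.
SCAN_CATEGORIES = [
    "Core function reliability",
    "Error frequency",
    "Performance speed",
    "Data accuracy",
    "Interface clarity",
]

def quick_quality_scan(scores: dict, flagged_risks: list) -> dict:
    """Score each category 1-5. Action is needed if the total falls
    below 20/25, or if any critical risk has been flagged."""
    total = sum(scores.get(category, 0) for category in SCAN_CATEGORIES)
    return {
        "total": total,
        "action_needed": total < 20 or bool(flagged_risks),
        "flags": flagged_risks,
    }
```

A perfect product scores 25 with no flags; a single flag (say, “Payment issues”) forces `action_needed` to true regardless of the score, mirroring the “any flags require immediate attention” rule.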

    2. Feature Readiness Checklist

    Minimum Quality Gates – Must pass all:

    [ ] Core function works consistently

    [ ] Basic error handling exists

    [ ] Performance meets minimum threshold

    [ ] No data corruption risks

    [ ] Basic security in place

    [ ] Accessible to target users

    [ ] Can be supported by team

    Nice-to-Have Quality Aspects – Track progress:

    [ ] Comprehensive error handling

    [ ] Performance optimization

    [ ] Enhanced security features

    [ ] Polished user interface

    [ ] Extended platform support

    [ ] Advanced monitoring

    [ ] Complete documentation

    3. Quality Debt Tracker

    Current Issues – For each issue:

    Severity (High/Medium/Low): ___

    User Impact (1-5): ___

    Fix Complexity (1-5): ___

    Business Cost (1-5): ___

    Priority Score = (User Impact × Business Cost) / Fix Complexity

    Technical Debt Categories – Track by area:

    [ ] Code quality issues

    [ ] Testing gaps

    [ ] Security concerns

    [ ] Performance problems

    [ ] UX inconsistencies

    [ ] Documentation gaps

    [ ] Infrastructure needs
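The priority formula in the Quality Debt Tracker lends itself to a quick sketch for ranking a backlog. The formula and the 1-5 scales are from the template above; the example issues and field names are hypothetical.

```python
def priority_score(user_impact: int, business_cost: int, fix_complexity: int) -> float:
    """Priority = (User Impact x Business Cost) / Fix Complexity.
    Higher scores mean fix sooner: big impact, big cost, cheap to fix."""
    return (user_impact * business_cost) / fix_complexity

# Hypothetical backlog; each score is 1-5 as in the template.
issues = [
    {"name": "Checkout timeout", "impact": 5, "cost": 5, "complexity": 2},
    {"name": "Typo on pricing page", "impact": 2, "cost": 1, "complexity": 1},
    {"name": "Slow dashboard", "impact": 4, "cost": 3, "complexity": 4},
]

# Rank the backlog by priority, highest first.
ranked = sorted(
    issues,
    key=lambda i: priority_score(i["impact"], i["cost"], i["complexity"]),
    reverse=True,
)
```

In this example the checkout issue (25 / 2 = 12.5) outranks both the slow dashboard (3.0) and the pricing typo (2.0), which matches the intuition the formula is meant to encode.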

    4. Release Quality Verification

    Pre-Release Checklist – Must complete:

    [ ] Core features tested

    [ ] Regression testing done

    [ ] Performance verified

    [ ] Security checked

    [ ] Data integrity confirmed

    [ ] Support team briefed

    [ ] Rollback plan ready

    Post-Release Monitoring – First 24 hours:

    [ ] Error rate normal

    [ ] Performance stable

    [ ] User feedback positive

    [ ] Support load manageable

    [ ] Systems stable

    [ ] Data flowing correctly

    5. Quality Metrics Dashboard

    Key Metrics to Track – Weekly tracking:

    Error rate: ___

    Performance score: ___

    User satisfaction: ___

    Support tickets: ___

    Technical debt score: ___

    Security status: ___

    Warning Thresholds – React if:

    Error rate > 1%

    Performance drop > 20%

    Satisfaction < 4/5

    Tickets up > 50%

    Debt score > 7/10
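The warning thresholds above can likewise be encoded as a simple weekly check. The threshold values come from the list; the metric names, and the idea of comparing against a stored baseline for the relative thresholds, are my assumptions.

```python
def warnings(current: dict, baseline: dict) -> list:
    """Return the warning thresholds breached this week.
    'current' holds this week's metrics; 'baseline' holds the
    reference values for the relative (drop/increase) checks."""
    alerts = []
    if current["error_rate"] > 0.01:
        alerts.append("Error rate > 1%")
    if current["performance"] < baseline["performance"] * 0.8:
        alerts.append("Performance drop > 20%")
    if current["satisfaction"] < 4.0:
        alerts.append("Satisfaction < 4/5")
    if current["tickets"] > baseline["tickets"] * 1.5:
        alerts.append("Tickets up > 50%")
    if current["debt_score"] > 7:
        alerts.append("Debt score > 7/10")
    return alerts
```

Run it weekly against the dashboard numbers; an empty list means no thresholds breached.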

You can use the checklists to assess, and subsequently track, the quality of your product at different stages of your development and release cycles, and at different levels of detail. Here’s how:

1. Start with the Quick Quality Scan. Use it daily or weekly for a rapid diagnosis of your current level of quality. It’s great for non-technical status checks, and it helps spot issues early.
    2. Use the Feature Readiness Checklist when you’re preparing new releases, evaluating feature completeness and/or planning technical work.
    3. Apply the Quality Debt Tracker to prioritise improvements, make technical debt visible & justify quality investments.
4. The Release Quality Verification ensures systematic pre-release checks and structured post-release monitoring, and provides clear go/no-go criteria for decision making.
5. Finally, the Quality Metrics Dashboard helps you track trends over time, identify declining metrics, and communicate status to stakeholders.

    A key stakeholder group you’ll need to communicate with is your development team. Talking with developers about technical issues can be scary for anyone. Let’s take a look at some ways you can speak with them about the things that are important to you, without sounding like you don’t actually know what you’re talking about.

    How to sound smart when talking to developers

    Honestly, trying to sound smart is entirely the wrong play here. I’ve spent much of my career talking with developers, and even when I do know what I’m talking about and could genuinely add value to a discussion, I prefer to lead the conversation via probing questions instead. Any time I can motivate a developer to go back and revisit their assumptions, or what they otherwise think they know about a problem, by asking an insightful question about it, I’ll take that as a win. 

    So, how do you go about identifying those questions in the first place, for a given release or feature? 

    Well, I think there’s a few heuristics (or rules of thumb) you can call on when having these kinds of conversations. Let’s explore some of them, keeping in mind the caveats of a) some of these questions will be more or less appropriate depending on the specifics of the discussion & b) they’re by no means exhaustive; the suggestions below are intended as a kind of jumping-off point for you to be hopefully inspired by, and to generate your own questions and ideas from. 

    1. The User Journey Heuristic
    • “Can you walk me through how you think a typical user would experience this…”
    • “What happens if the user does X instead of Y?”
    • “Where might our users get confused or stuck?”
    • “How might this feature work for someone who’s never used our product before?”
2. The Edge Case Explorer
    • “What’s the worst thing that could happen here?”
    • “How does this feature behave when the network is slow/down?”
    • “What happens if we get 100x more users than expected?”
    • “How will we handle failures or degradations?”
3. The Simplicity Seeker
    • “Could we solve this problem in a simpler way?”
    • “What assumptions are we making that might not be true?”
    • “If we had to ship this tomorrow, what’s the minimal version that would work?”
    • “Which parts of this could we remove without losing core value?”
4. The Future Proofer
    • “How hard would it be to change this in the future?”
    • “What might make us regret this decision in six months or a couple of years down the road?”
    • “How will this decision affect our ability to scale the product or business in the future?”
    • “Are we working ourselves into a corner by making this decision?”
5. The Quality Investigator
    • “How will we know if this feature is working correctly?”
    • “What kinds of errors should we expect?”
    • “How can we make problems visible when they occur?”
    • “What kinds of monitoring or alerts do we need to support this feature?”
6. The Integration Detective
    • “How does this interact with our existing systems?”
    • “What other parts of the system might be affected?”
    • “Are there any dependencies we need to consider?”
    • “What happens if service X goes down?”
7. The Risk Revealer
    • “What will keep you up at night about this design or solution?”
    • “Where do you think we may run into problems?”
    • “What trade-offs are we making and have we considered alternatives?”
    • “What technical debt might we be creating with this approach?”
8. The Context Seeker
    • “Help me understand why you chose this approach…”
    • “What other solutions did you consider?”
    • “What constraints are we working with?”
    • “What similar problems have you solved before?”

If you’re looking at the list above and thinking “there seems to be a lack of data in the questions and answers…” — I agree! You should absolutely be encouraging your developers to seek out or otherwise utilise data to inform the decision-making process when devising technical solutions. Here are some more [bonus!] heuristics, which can be used to steer the discussion along exactly those lines:

    1. The Data Demander
    • “What metrics will tell us if this feature has been successful?”
    • “How are we measuring X right now?”
    • “What’s our baseline for comparison?”
    • “What metrics can we use to test this approach?”
2. The Usage Pattern Explorer
    • “What does our usage data tell us about this problem?”
    • “Which users are most affected by this issue?”
    • “Do we have data on how often this scenario occurs?”
    • “What patterns do we see in our logs?”
3. The Performance Profiler
    • “Do we have metrics for the current response times?”
    • “What does the resource (memory/CPU/IO) utilisation profile look like?”
    • “How many database queries does this generate?”
4. The Impact Quantifier
    • “How many users/transactions does this affect?”
    • “What’s the business cost of this issue?”
    • “Can we estimate the <business function(s)> hours saved/spent?”
    • “What is the performance improvement in real numbers?”
5. The Trend Tracker
    • “How has this metric changed over time?”
    • “What growth rate do we need to support?”
    • “Are there seasonal or other historic patterns we should consider?”
    • “What does our historic data tell us about future needs?”
6. The Resource Reality-Checker
    • “What’s the current load on our systems?”
    • “How much headroom do we have?”
• “What does our capacity planning tell us?”
    • “Do we have data on resource utilisation?”

What you may also have noticed as you scanned the heuristics above is that they frame the (hypothetical) conversation through different lenses, each of which lends itself to a different persona involved in the product development process:

    • The Product Manager persona, taking the “how do we get maximum value from this feature?” perspective.
    • The UX Designer persona, taking the “how will users interact with this feature?” perspective.
• The Tester persona, taking the “how will we verify this feature works correctly?” perspective.
• The Business Owner or Sponsor persona, taking the “how can we minimise costs and maximise profits?” perspective.
    • The DevOps Engineer persona, taking the “what’s the impact on our infrastructure?” perspective.

Finishing up then, and to re-state my earlier point: the questions above aren’t intended to be used as a checklist for conversations with your engineers. I doubt they’d appreciate that! Rather, the heuristics and example questions should be used as prompts for conversations you may wish to have if you’re not fully confident in your ability to hold a technical discussion with your engineer(s), but want to make sure you’ve covered the necessary bases before they go full steam ahead with actually building something, taking up valuable time and resources in the process.

The questions will help you ground your discussions in data rather than assumptions; identify metrics or measurements you may be missing and create measurable success criteria; make trade-offs more explicit; and generally move you towards a more evidence-based development culture.

    Let’s cut to the chase though.

    What quality issues actually matter?

    Or, to put it another way: What quality issues do you really need to concern yourself with, as a non-technical founder? 

Given the limits on your time and resources, you’re going to want to home in on the most important things. In my experience, those are issues which directly impact your ability to:

    1. Deliver value to your users or customers,
    2. Keep your business running smoothly, and
    3. Scale without breaking.

Putting in place measurements and leading indicators for revenue-threatening issues (anything which may prevent users from paying you), features that have a direct bearing on user retention or that drive away potential customers, and performance issues affecting core functionality will go a long way towards keeping your business on track.

Additionally, you need to maintain a relationship of trust with your customers. That means staying on top of data security and integrity, uptime, and SLAs relating to other core (and perhaps somewhat mundane) customer-facing activities like payment processing.

    Finally, and keeping the longer term view in mind, you need to keep tabs on growth-limiting and cost-multiplying factors: scalability bottlenecks, technical debt, infrastructure limitations, bugs or performance issues that generate excessive demands on your support operations, or take disproportionate development time and effort to fix. 

Everything else? It’s probably negotiable. In the words of Voltaire, perfect is the enemy of good, and as a founder, your job isn’t to build a perfect product – it’s to build something valuable enough that customers will pay for it, and reliable enough that they’ll keep using it.

    Quality should enable growth, not hinder it.

    Use the templates and conversation heuristics I’ve presented above to stay focused on these critical quality issues, and don’t let perfect be the enemy of good enough. Your goal as a founder is to identify the sweet spot where quality supports growth rather than hinders it, and stay laser focused on your business objectives. 

    If I can help with that, feel free to reach out.

  • How I Built My Own Facebook CRM


    As I talked about in my previous article, building out a lead generation and CRM process for your business doesn’t need to be a heavyweight activity. But that doesn’t mean it’s without friction. I quickly learned this when I began doing Facebook-based lead generation for my own business.

    Initially, I was getting some traction — people were responding to my outreach and I was making connections. But the process was clunky, time-consuming, and didn’t scale well. It felt like I was spending more time manually managing the process than actually engaging in valuable conversations.

    The Old Way: Manual and Messy

    Here’s how things looked when I first started:

    I’d find a Facebook group that looked relevant — maybe it catered to freelancers, founders, or people in a particular niche that matched the kinds of clients I typically work with. Then I’d scroll through the members list and send out friend requests to people who seemed interesting.

    Once they accepted, I’d reach out via Messenger to strike up a conversation and see whether there was any synergy between their needs and my services. It wasn’t a bad approach — but it wasn’t very efficient either. I was making connections, but I wasn’t being particularly targeted, and I had no system for tracking who I’d messaged, who responded, and where any given conversation was at.

    A Smarter Approach: Semi-Automation

    That’s when I decided to build a Chrome extension to help me out.

    Now, Facebook understandably limits what you can automate. So I didn’t try to build a full-on bot. Instead, I created a tool that lets me manually trigger the key steps in my process — which still gives me structure and efficiency without breaking any rules.

    Once I’ve selected a group, I simply:

    1. Click the extension icon
    2. Hit “Scan Members”
    3. The extension extracts public profile data and syncs it to a connected Google Sheet

    No scraping behind the scenes. No background scripts. It only runs when I tell it to.

    End-to-End: From Group to CRM

    The beauty of this system is that it now gives me an actual leadgen workflow I can build on. Here’s what that looks like:

Step 1: Find a relevant group
I still start by finding groups where my ideal clients are likely to hang out. This part hasn’t changed. But now I’m more intentional about which groups I focus on.

Step 2: Extract the contacts
With one click, I scan the group’s member list. The extension pulls names, profile URLs, descriptions (usually job titles), and friendship status. All publicly visible, all added to a Google Sheet.

Step 3: Organise and filter in Google Sheets
Now that I have structured data, I can apply filters. Want to focus on founders? Consultants? UX designers? I can easily filter the list and flag promising leads.

Step 4: Reach out to a refined list of contacts
Instead of messaging everyone, I now have a curated list. That means my outreach is more personal, more relevant, and more likely to land.

Step 5: Track conversations
I use the same Google Sheet to log my outreach — when I messaged someone, whether they replied, what the follow-up should be. It’s simple but powerful.

Step 6: Iterate and improve
Because I have data, I can now iterate. Which groups produce the best leads? What kinds of messages get the most responses? Everything gets better with a little structure.
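To make the filtering step concrete, here’s a minimal sketch in Python, assuming the sheet has been exported to CSV. The column names (“Name”, “Description”) mirror the fields described above but are illustrative, not the extension’s actual schema.

```python
import csv

def load_leads(path: str) -> list:
    """Read an exported member list (CSV with a header row) into dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def filter_leads(rows: list, keywords: list) -> list:
    """Keep contacts whose description mentions any target keyword,
    case-insensitively. E.g. keywords=["founder", "consultant"]."""
    lowered = [k.lower() for k in keywords]
    return [
        row for row in rows
        if any(k in row.get("Description", "").lower() for k in lowered)
    ]

# Example (hypothetical file name):
# founders = filter_leads(load_leads("group_members.csv"), ["founder", "ceo"])
```

In practice the same filtering is a one-click operation inside Google Sheets itself; the sketch just shows how little structure the workflow actually needs.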

    Why This Changed Everything

    Previously, I was working from memory and intuition. Now, I have a repeatable process that saves me time, keeps me organised, and gives me far more clarity on what’s actually working.

    It’s not a complex system. But it works.

    And because it’s built around tools I already use — Facebook and Google Sheets — there’s no learning curve and no need to adopt yet another CRM platform.

    Want to Try It?

    You can install the extension from the Chrome Web Store: Chrome Web Store Listing

    Want setup instructions and more? Head to: https://sjpknight.com/apps/fb-leads

  • The Art of Letting Go: A Stoic Approach to Mastering Delegation in Management


    Recently I’ve been thinking a little about delegation. I wanted to address some stumbling blocks I’ve faced myself, and give a little thought to how they can be overcome. As usual, mainly with a view towards helping me in my own efforts to be a better manager. Hopefully they’ll be of some service to you too.

    Reasons why you may be bad at delegation

First up, what are some common reasons why someone like me might be bad at delegation in the first place? Well… there are a bunch of them, and a quick trawl of the interweb will yield a list much like the following:

    1. Lack of Trust in Team Members
    2. Desire for Control
    3. Fear of Losing Relevance or Job Security
    4. Perfectionism
    5. Guilt About Adding to Others’ Workloads
    6. Lack of Time to Train or Explain
    7. Lack of Clear Processes or Guidelines
    8. Concern About Being Perceived as Lazy or Uninvolved
    9. Failures to delegate effectively, or at all

I’m pretty sure I’ve fallen foul of some, if not all, of the failures above.

A failure to delegate effectively will likely be accompanied by a constantly overflowing email inbox, an endless queue of tickets, and a constant sense of anxiety and/or dread about all the things that haven’t been done yet and may never actually be gotten around to: a large number of open threads, adding to cognitive load and, in the worst case, creating anxiety and affecting sleep.

    No doubt another quick search of the interweb would generate several how-to guides, teaching me some pro-tips for delegating more effectively. The real question for me though, is why I’m so reluctant to let go of my work in the first place. After all, it’s not as though I don’t have plenty of other things I can focus on, and my mental health would likely benefit as a result since I’d be less anxious and sleep better at night.

I came up with a shortlist of my own fears and failings related to delegation below. If you’re a fellow sufferer, yours will no doubt differ in the specific details, but the outcome (a failure to delegate effectively) will likely be the same.

    Since there’s a fair bit of overlap between my personal reasons for failing to delegate effectively, and the generic list I mentioned earlier, I put the list into a table to show the overlap more clearly:

    my reasons for being bad at delegation

    Strangely, this gives me some degree of comfort, since it indicates my specific fears and challenges are somewhat common, and there’s likely a bunch of actions I can take or some systems/processes I can implement that will improve my abilities in these areas.

    How to get better at delegation

    Before I got into strategies to tackle specific failings, I wanted to understand whether there were some general principles or heuristics I could apply that would help provide some direction in this area. Since delegation is something of a life skill (I delegate responsibilities around the home to my children, for example), I cast my net towards philosophy in general, and stoicism in particular since it majors on the areas of wisdom and judgement.

    • Wisdom and Judgment: Stoicism teaches that wisdom and good judgment are crucial virtues. A Stoic leader would use wisdom to delegate tasks appropriately, taking into account their own strengths and weaknesses, as well as those of their team members.
    • Control and Acceptance: Stoic philosophy emphasises an understanding of what is within our control and accepting what is not. In terms of delegation, this could translate to a recognition of when it is necessary to delegate tasks to others and an acceptance that I can’t do everything myself.
    • Duty and Responsibility: Stoic philosophers believe in fulfilment of duty and taking responsibility for one’s role in life. A leader following Stoic principles would feel a sense of duty to delegate tasks as necessary to ensure the success of their team or organisation, while also taking responsibility for the overall outcome.
• Emotional Resilience: Stoicism teaches the importance of being resilient and unswayed by external circumstances. This could be taken to mean not allowing pride or the desire for control to prevent me from delegating tasks when doing so is in the best interests of the team or project.

Armed with a little stoic background, I can devise a few solutions to my own specific challenges, which seem to be largely grounded in a lack of confidence in my own abilities and a resulting fear of failure. Here are a few ideas I came up with, where the ideas on the left (on lighter yellow stickies) feed into an overall summary or strategy for dealing with the issue on the right (darker yellow).

    fears preventing effective delegation

    All of which lead to a couple of overriding perspectives on how to mitigate my worst failings in the area of delegation:

Focus on what can be controlled: Focusing on my actions and attitudes towards delegation, rather than the outcomes, will help me control how I delegate, how I communicate, and how I respond to the results. I can control my own actions, but not those of others.

    Embrace my challenges as opportunities for growth: Viewing the challenges of delegation as opportunities to develop my leadership skills will likely help my team members grow and develop their own skills.

    Ta-da! Delegation problems solved, right? If only it were that simple! It’s a start though. I’ll let you know how I’m getting on in a future post, since it’s looking like mastering this art is going to be critical to my future product management success.

    As long as you live, continue to learn how to live.
    – Seneca

  • How to Learn a New Product as a PM: Advanced Tips



While I’m still getting to grips with my new writing cadence, I’m in the process of figuring out what best to write about and how. Thankfully, the process was made somewhat easier this week when I saw a question asking how best to get onboarded with a new product as a Product Manager. I figured the question would best be answered by an “ultimate guide” style post on how to get to grips with a new product as a PM – particularly since, straight off the bat, I thought about my answer in terms of principles or heuristics rather than specific techniques or tooling (though I do address those below).

In this article, I’m going to cover the general approaches up-front, largely drawing on my experience of working in QA prior to becoming a PM, where I applied the tenets of curiosity, exploration and creativity on a more or less daily basis (though the same might be said of my career as a PM). Then I’ll get into the specifics of some of the methods and tools I might use to support those activities, again drawing on my QA experience of seeking to automate as much of my work as possible (something I’m sadly not able to do so much of as a PM!)

    Curiosity: Asking the Right Questions

In the testing community there was a meme for a while that turned the Quality Assurance acronym “QA” on its head, on the basis that testers don’t, and can’t, assure quality as such. Instead, they should focus on being Question Askers, because that’s one of the best ways to uncover risks and identify ways in which they can be addressed.

    I’d be inclined to take a similar approach to learning about a new product, focusing on the following areas:

    • Directly Interacting with the Product: Using the product as a customer or end-user would (taking into account those aren’t always the same people). Probe its features, strengths, and weaknesses, always asking, “Why was this designed this way?” and “What problem does this solve?”
    • Engaging with Stakeholders: During meetings with team members and key stakeholders, ask open-ended questions. Learn the ‘why’ behind the product’s current state, the decisions made, and the metrics for success (past & present). Seek to understand their aspirations for the product and the pain points they’re facing.
    • Engaging with Customers & End Users: Delve into customer feedback channels, support tickets, and user research reports with a focus on understanding the user’s voice and experience. What delights them? What frustrates them? Why?
    • Digging into Business Metrics: Question and seek to comprehend the key performance indicators, financial metrics, and historical data. What story do these numbers tell about the product’s journey and its market acceptance?

    Exploration: Seeking Information Broadly and Deeply

In my experience of working as a tester, a trait I admired in the best testers I worked with or otherwise learned from was that they really understood how to explore something, broadly and deeply. Look around the testing field and you’ll find plenty of books, courses, articles, talks and workshops entirely devoted to the theory and practice of exploratory testing.

I don’t see that so much in the product community. Without doing a disservice to testers, having now spent a number of years working as a PM, I genuinely feel that being a PM is a much tougher gig, primarily due to the need to wear so many different hats and often be responsible for a huge number of moving parts, mostly without any corresponding authority. Nevertheless, the art and craft of exploration is a skillset that can be brought to bear as a PM, both while onboarding with a new product and, arguably especially, once you’re a few years into the job. Here are a few ideas you can explore further (excuse the pun!)

    • Explore the Market and Competitors: Take a look around the wider market landscape and figure out who your direct and indirect competitors are. Try to understand where your product fits and what differentiates it from them. Look for trends in the marketplace, potential disruptions, and regulatory impacts if they’re applicable in your product space.
    • Explore the Technical Landscape: Familiarise yourself with the technology stack, architectural decisions, and technical debt. Try to understand not just how the product works, but how it’s built and maintained, and what technical challenges or opportunities lie ahead.
    • Take a Deep Dive into the Product Analytics: Explore user data, behaviour analytics, and segment performance. Try to go beyond the surface: don’t just note what the data shows, but use it to generate hypotheses about user behaviour and areas that might warrant further investigation.
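
    As a loose illustration of that hypothesis-generating habit (the segments, features, and 50% threshold here are all invented for the sake of the example), a first-pass segment comparison might look something like this in Python:

```python
# Hypothetical sketch: compare feature adoption across user segments to
# surface candidates for deeper investigation. The segment names, feature
# names, and threshold below are all made up for illustration.

usage = [
    {"segment": "trial", "feature": "reports", "adoption": 0.12},
    {"segment": "trial", "feature": "exports", "adoption": 0.55},
    {"segment": "paid",  "feature": "reports", "adoption": 0.71},
    {"segment": "paid",  "feature": "exports", "adoption": 0.58},
]

def flag_gaps(rows, threshold=0.5):
    """Return (segment, feature) pairs with adoption below the threshold.

    Each result is a prompt for a hypothesis, not a conclusion - e.g.
    "why do trial users barely touch reports?"
    """
    return [(r["segment"], r["feature"]) for r in rows
            if r["adoption"] < threshold]

print(flag_gaps(usage))  # [('trial', 'reports')]
```

    The point isn’t the code itself, but the habit: let the numbers nominate the questions, then go and investigate them qualitatively.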

    Creativity: Envisioning What’s Possible

    Of course, the whole idea of having a PM in the first place is so they can create a roadmap into the future and execute on it, right? Well, perhaps not entirely, but it’s a big part of the role nonetheless, and it requires some exercising of the product manager’s creative faculties. Below are some specific applications of creative thinking, particularly as they relate to learning about a product and identifying quick wins:

    • Adopt a Growth Mindset: This means approaching problems and opportunities with a mindset that there’s always a way to improve. You can facilitate this with your team by encouraging brainstorming and being open to ideas that challenge the status quo, fostering a culture of innovation.
    • Identify Quick Wins: Try to identify some low effort, high impact quick wins, demonstrating immediate value and potentially freeing up resources for longer-term strategic initiatives.
    • Reimagine the Roadmap: Think about the current product roadmap creatively. Don’t just take it at face value; instead, try to imagine different scenarios, prioritise initiatives based on potential impact, and consider some bold moves that could significantly advance the product’s objectives.
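
    One way to make the “quick wins” idea above a little more concrete is a naive impact-versus-effort ranking. This is just a sketch – the initiatives and the 1–5 scores below are entirely hypothetical:

```python
# Illustrative sketch: rank candidate initiatives by a naive
# impact-per-effort ratio to surface "quick wins" (high impact,
# low effort). Items and scores are invented for the example.

initiatives = [
    {"name": "Fix onboarding email typo", "impact": 2, "effort": 1},
    {"name": "Rebuild billing system",    "impact": 5, "effort": 5},
    {"name": "Add CSV export",            "impact": 4, "effort": 2},
]

def rank_quick_wins(items):
    """Sort by impact-per-effort, highest ratio first."""
    return sorted(items, key=lambda i: i["impact"] / i["effort"], reverse=True)

for item in rank_quick_wins(initiatives):
    print(f'{item["name"]}: {item["impact"] / item["effort"]:.1f}')
```

    A real prioritisation would weigh confidence and reach too (RICE-style scoring), but even this crude ratio is enough to stop a low-effort, high-impact item languishing at the bottom of the backlog.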

    Tools, Apps, Mental Models, Frameworks and More!

    During my time as a QA, I learned quickly that I needed to underpin my activities with various tools, models, approaches (or frameworks) and other resources to ensure I was being as effective as I could be – particularly during my latter testing days, when I worked as a consultant and freelancer and would often jump from one client to another over the course of a day or week. I needed to know what tools were available to support my work, and I needed to know how best to use them to maximise both my effectiveness and my efficiency.

    Obviously the same thinking applies to product management; perhaps even more so. While it’s substantially more difficult to automate work as a PM (though… I smell another article coming in that regard!), knowing which tools to use for which activity, and how, is clearly going to make life a lot easier. Here are a few of my go-to tools for the activities I listed above:

    • Product Interaction:
        • User Testing Platforms (e.g., UserTesting, Lookback.io): You can use these to watch real users interacting with your product and observe points of confusion or satisfaction.
        • Feedback Platforms (e.g., Canny, UserVoice): These tools allow you to gather, organise, and analyse user feedback systematically.
    • Stakeholder Engagement:
        • Communication Tools (e.g., Slack, Microsoft Teams): Use these for daily interactions with team members and stakeholders, and to browse past communications for context.
        • Meeting Platforms (e.g., Zoom, Google Meet): Facilitate one-on-one or group meetings to discuss expectations, concerns, and aspirations for the product.
    • User Understanding:
        • CRM Software (e.g., Salesforce, HubSpot): Review customer profiles, interaction histories, and feedback.
        • Survey Tools (e.g., SurveyMonkey, Typeform): Directly gather user feedback on specific aspects of the product or its features.
        • Pro tip: Nothing beats actually speaking to your customers & users directly.
    • Business Metrics Review:
        • Data Visualisation Tools (e.g., Tableau, YellowFin): Use for an interactive exploration of KPIs and other relevant metrics.
        • Spreadsheet Software (e.g., Microsoft Excel, Google Sheets): Essential for analysing raw data, running your calculations, or building custom reports.
    • Market and Competitor Analysis:
        • Market Research Platforms (e.g., CB Insights, Gartner): Access market reports and industry analyses for a broad understanding of the market landscape.
        • SEO and Competitive Analysis Tools (e.g., SEMrush, Ahrefs): Understand competitors’ digital presence, keyword strategies, and user sentiments.
    • Technical Landscape Understanding:
        • Documentation Tools (e.g., Confluence, Notion): Review architectural decisions, technology stacks, and known issues or technical debt documented by the engineering team.
        • Code Repositories (e.g., GitHub, Bitbucket): Even without deep technical expertise, browsing repositories can give you a sense of the project’s scale, complexity, and organisation.
    • Deep Dive into Analytics:
        • Web Analytics Tools (e.g., Google Analytics, Pendo): Analyse user behaviours, traffic sources, and conversion rates.
        • Heatmap Tools (e.g., Hotjar, Crazy Egg): Visualise where users are focusing, clicking, or dropping off on your site or app.
    • Growth Mindset Adoption:
        • Idea Management Software (e.g., Aha!, ProdPad): Use for brainstorming, capturing, and prioritising ideas from all stakeholders.
        • Mind Mapping Tools (e.g., MindMeister, XMind): Helpful for brainstorming sessions, laying out complex ideas, and planning.
    • Quick Wins Identification:
        • Project Management Tools (e.g., Jira, Monday): Break down quick wins into manageable tasks, assign them to team members, and track progress.
        • Feature Flagging Tools (e.g., LaunchDarkly, Split): Test new features with a subset of users to validate potential quick wins before broad rollouts.
    • Roadmap Reimagining:
        • Roadmapping Software (e.g., Roadmunk, Jira Product Discovery): Visualise the product roadmap, explore different scenarios, and communicate strategic plans effectively.
        • Storyboarding Tools (e.g., Miro, InVision): Visualise user journeys, potential new features, or future state scenarios.
    • Lean Principles Application:
        • Rapid Prototyping Tools (e.g., Figma, Balsamiq): Create MVP versions of new features or products for user testing and feedback.
        • Hypothesis-Driven Development Frameworks (e.g., Lean Startup): Use a structured approach to formulating hypotheses, conducting experiments, and applying learnings.
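
    On the feature-flagging point above: vendor tools add targeting, auditing, and kill switches, but the core idea of a percentage rollout is simple enough to sketch. This is a generic illustration, not any particular vendor’s API – hashing the user id gives each user a stable yes/no answer:

```python
import hashlib

# Generic sketch of percentage-based feature rollout (not a real vendor
# API): hash (feature, user) so each user gets a stable answer, and dial
# rollout_pct up as confidence in the feature grows.

def flag_enabled(feature: str, user_id: str, rollout_pct: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0-99 per (feature, user)
    return bucket < rollout_pct

# The same user always lands in the same bucket for a given feature,
# so their experience doesn't flicker between requests.
print(flag_enabled("csv_export", "user-42", 100))  # True - full rollout
print(flag_enabled("csv_export", "user-42", 0))    # False - feature off
```

    The hashing is what turns a blunt “10% of requests” into a coherent “10% of users”, which is what you actually want when validating a quick win.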

    One of the questions you might reasonably ask throughout the process of questioning, exploring and generating creative insights relating to your product is, what do I do with all the information I’m generating? I’m personally a fan of using mind maps for a lot of my work (listed in the growth mindset section above), and I’ve found that they can be extremely useful for the joint purposes of exploring and analysing, whilst also documenting your thinking and the decisions you’ve made along the way (in the form of nodes, branches, notes, clippings etc).

    As a matter of interest, that’s exactly how I approached the creation of this article: by identifying the major areas I wanted to focus on – curiosity, exploration and creativity – and then branching out further into how those areas decomposed into sets of activities and the tooling to support them. You can see what that looks like below.

    Thanks for reading – see you in the next one!

  • Thinking About Product Strategy: Processing Signals from the Changing World

    In my last entry I had narrowed down my view of the Changing World (insofar as I have modelled it) such that it looked like this:

    getting granular with testing

    And what I had established was that in order to meaningfully stay up to speed with changes in the world, you have to place some constraints upon the scope of what you’re going to look at – because otherwise there’s simply too much stuff going on and you’ll be overwhelmed by it all.

    So for my purposes, I want to focus primarily on the software testing and test management industry, which is a sub-class of the software industry:

    the software testing industry is a subclass of the software industry

    That industry (or marketplace in which I’m interested) is comprised of the set of customers and vendors that operate within it. The stuff that I am interested in are the additional factors which may influence activity within the marketplace or industry:

    influences within the marketplace of interest

    Clearly there could be a great number of other factors to take into account, but when modelling you have to stop somewhere, right?

    From my model, it seems apparent that there are three main areas on which I can focus in order to build a picture of what’s happening in the marketplace in which I am interested:

    1. Customers
    2. Competitors
    3. Other influences

    So, how do I find out relevant information about those areas?

    Customers

    For customers, there’s a simple but not easy answer… Talk to them!

    You’d imagine that this would be easy enough. But the challenge I personally experience (and some other PMs may identify with this) is that it’s actually quite tough to pin customers down to a phone call or a Zoom discussion. And a one-on-one meeting with the customer is by far the most useful kind of interaction in my experience.

    Other forms of customer interaction usually come by way of surveys, or as feedback from other business departments (typically Customer Support, Customer Success, or Sales), or from situations such as conferences, meetups, webinars and the like.

    Depending on the technology stack, there’s also the possibility of feedback from within the product itself, by way of user-tracking or other forms of monitoring.

    So, signals from the customer (for me) look like this:

    • Direct interactions (meetings on the phone or Zoom)
    • Survey feedback (NPS or other survey types)
    • Feedback from business channels
    • Event feedback
    • In product monitoring

    Competitors

    The next big area to try and understand is what competitors are doing in the marketplace.

    For me, this is even trickier and more time-consuming than trying to elicit information from customers. Generally speaking, customers are pretty happy to tell you what they’re thinking about, and will often do so in no uncertain terms! Customers have a vested interest in improving the product, particularly if they have already parted with their cash.

    Competitors on the other hand – not so much! They will actively try to hide information so as not to broadcast their product strategy and intent.

    Fortunately, there are some relatively well established mechanisms for analysing competitors in order to glean the needed information. Some sources of useful data include the following:

    • Industry publications
    • Case studies
    • Corporate info aggregation sites such as Owler, or Hoovers
    • Press releases
    • Company blogs
    • The competitor product itself (through analysis or reverse engineering)

    Once all that information has been gathered, you can start to turn it into a SWOT (Strengths, Weaknesses, Opportunities, Threats) model. There are any number of resources on the interweb about what a SWOT is and how to do one, so I won’t dwell too much on it here, except to mention that once you’ve gathered the necessary SWOT information about your competitors, it can be a good idea to consolidate it into a single view of all that data, so that you can use it to formulate attack and defend vectors, as well as to identify potential opportunities (per Steven Haines’ PM Desktop Reference):

    consolidation of SWOT data to identify attack and defend vectors

    Which makes a lot of sense to me, hence reproducing the model.
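
    To make the consolidation step concrete: once per-competitor SWOT notes exist, merging them into a single view can be as simple as the sketch below (the competitor names and entries are invented). Here an “attack vector” is any competitor weakness you might exploit, and a “defend vector” is any competitor strength you need to counter:

```python
# Hypothetical sketch: consolidate per-competitor SWOT notes into one
# view, from which attack vectors (their weaknesses) and defend vectors
# (their strengths) can be read off. All entries are invented.

swot = {
    "CompetitorA": {"strengths": ["brand recognition"],
                    "weaknesses": ["slow release cadence"]},
    "CompetitorB": {"strengths": ["aggressive pricing"],
                    "weaknesses": ["poor support reputation"]},
}

def consolidate(data):
    attack = [(name, w) for name, entry in data.items()
              for w in entry["weaknesses"]]
    defend = [(name, s) for name, entry in data.items()
              for s in entry["strengths"]]
    return {"attack_vectors": attack, "defend_vectors": defend}

view = consolidate(swot)
```

    In practice this lives in a spreadsheet or a mind map rather than code, but the shape of the data – per-competitor notes flattened into cross-competitor lists – is the useful part.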

    Furthermore, Haines goes on to recommend an additional series of questions for delving deeper into competitor operations:

    • How is the competitor company operated?
    • How does the competitor actually produce their product?
    • Via what channels does the competitor distribute their product?
    • By what means does the competitor promote and sell their product?
    • How does the competitor service and support their customers?
    • What technologies are primarily used in the competitor product?
    • What does the employee situation or culture look like for the competitor?
    • What (if anything) are they communicating to any pertinent regulatory or government bodies?

    Other influences

    There’s a final area, other external influences, which warrants at least a little bit of attention. There’s not really too much I can say about this though, other than to pay attention to the world around your area of focus (remember the earlier narrowing down of that area) in as many ways as makes sense to you.

    Speaking personally, I’m a bit of an information hoover, and will suck up information from anywhere I can find it. But as mentioned previously, that comes at the risk of overwhelm. The challenge is knowing when to stop. Which is what, I hope, the development and application of my model will help me with – once I’ve refined it some more.

    Unfortunately, one thing it won’t help me with, is time.

    Specifically, finding time to do all of the research implied by the various activities above, while still delivering on all the other PM activities expected from me…
