Tag: product management

  • The Non-Technical Founder’s Tech Discussion Survival Guide


    If you don’t come from a development background, building a product can feel like learning a new language while running a marathon. Developers talk in terms of APIs, databases, and deployment pipelines, while you’re focused on customers, strategy, and revenue. It’s all too easy for your business goals to get lost in translation.

    Here’s the truth though: You don’t need to know how to write code to be a great product leader. You do need to communicate effectively with your developers, make informed decisions about what quality looks like, and create a shared understanding with them of what success looks like.

    This guide is about bridging the developer-founder gap. Not by trying to “outsmart” your developers, but by learning how to work alongside them more effectively. I’m going to cover:

    • How to understand and discuss technical topics, without learning to code.
    • Simple scripts and templates to make communication smoother (with references to more comprehensive examples).
    • Some key questions that will help you steer your product without micromanaging.
    • Lessons I’ve learned the hard way about working with development teams.

    By the end, you’ll have some tools to help you collaborate more effectively, avoid some common pitfalls, and build a stronger relationship with your technical team. All without needing to be technical yourself.

    Sound good? Let’s go!

    Speaking Tech Without Writing Code

    Having worked in the tech industry myself for a couple of decades now, it’s easy at this point to take what I know for granted. I’ve effectively been apprenticed by the teams I’ve worked on for all that time, after all, roaming from project to project, org to org, tech stack to tech stack. Reflecting back, I definitely feel some embarrassment about my early efforts, not really knowing what I was doing and worse, not yet knowing what I didn’t know. 

    My ignorance mainly revolved around not knowing what was being talked about (not being familiar with much of the verbiage, terminology and acronyms, some of which were specific to the companies and domains I was working in), and consequently not knowing what I myself was talking about, because I didn’t yet have a solid grasp on what the words being used meant in context.

    Nor did I have a mental model of how to string them together so I could successfully arrive at an understanding or agreement when speaking with developers. 

    Oof! Said like that, it sounds like I had a major set of problems on my hands.

    Maybe you’re sympathetic to the plight of my earlier self, or experiencing something similar? After all, as a non-technical founder, you’ll almost certainly find yourself talking with developers about:

    • Product requirements – You describe what you want, but the developer asks for technical details you don’t have. E.g., “Do you want this as a synchronous or asynchronous process?” (Uh… what?)
    • Scalability – You know the app needs to handle more users, but the developer asks about load balancing, caching strategies, and database sharding. (Wait, aren’t those things the developers’ job to figure out?)
    • Integrating with third-party tools – You assume adding Stripe or Twilio is plug-and-play, but the developer starts talking about API authentication, webhooks, and handling edge cases. (I thought APIs were supposed to make things easier?)

    And likely many other topics besides… 

    How can you discuss those things clearly and confidently, without understanding the underlying tech stack they’re using?

    Strategies for Speaking Tech with Confidence

    I’m not going to lie… Going from not knowing much to knowing enough to hold your own in a technical discussion is no easy feat. But if you apply some of the ideas and strategies below, you’ll get there a lot faster than you otherwise would.

    1. Build a Mental Model, Not Just a Vocabulary

    The first mistake I made was thinking that learning tech terminology was enough. I’d memorise acronyms and concepts but still feel lost because I didn’t understand how they connected to each other. Instead of focusing on individual terms, try to develop a big-picture understanding of how software products are built:

    • Frontend vs. Backend – The user interface (what people see) vs. the engine running behind the scenes.
    • Databases – Where information is stored and retrieved.
    • APIs – How different systems communicate with each other.
    • Infrastructure – Where everything runs (cloud hosting, servers, deployment).
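    To make that map concrete, here’s a deliberately tiny sketch in Python of how the pieces fit together. Every name in it is invented for illustration; real systems are far more involved, but the shape is the same: the frontend calls an API, which runs backend logic against a database.

```python
# A toy model of the frontend/backend/API/database split.
# All names here are invented for illustration.

# "Database": where information is stored and retrieved.
database = {"user-1": {"name": "Ada", "plan": "pro"}}

# "Backend": the engine running behind the scenes.
def get_user(user_id):
    record = database.get(user_id)
    if record is None:
        return {"error": "not found"}   # error handling lives back here too
    return record

# "API": the agreed-upon way other systems talk to the backend.
def api_request(path):
    if path.startswith("/users/"):
        return get_user(path.removeprefix("/users/"))
    return {"error": "unknown endpoint"}

# "Frontend": what the user sees - it just renders the API's answer.
print(api_request("/users/user-1"))  # {'name': 'Ada', 'plan': 'pro'}
```

    Once you can hold a picture like this in your head, a phrase like “the API is returning an error” stops being noise and becomes a pointer to a specific place in the map.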

    Once you have a high-level framework (a mental map) in place, it becomes much easier to figure out how to place new terminology when you encounter it. You can start to learn about these things by:

    • Reading articles aimed at product managers.
    • Watching and working through YouTube videos on how apps work.
    • Sitting in on developer discussions and piecing together connections over time.
    • Developing your own knowledge base, using e.g. a mind-map or some other tool that works for you and that enables you to demonstrate meaningful connections between technology concepts. 

    2. Use Analogies to Make Complex Topics Click

    Tech can seem intimidating, but most technical concepts have real-world equivalents that make them easier to grasp:

    • APIs → Like a waiter in a restaurant, taking your request (order) to the kitchen and bringing back what you asked for.
    • Load balancing → Think of a highway—adding more lanes reduces congestion.
    • Caching → Similar to bookmarking a frequently visited page instead of looking it up every time.
    • I have lots more tech analogies… Read a full list of them in my resources area: Analogies for Common Technical Concepts

    When you hear something unfamiliar, try reframing it into an analogy. It may help not only you, but also others who are struggling with the same thing. 

    3. Ask Questions That Get to the Core of the Issue

    Another mistake I made early on was trying to “sound smart” instead of seeking clarity. Developers don’t expect you to be an expert, but they appreciate thoughtful, high-level questions.

    Some great questions to ask in technical discussions:

    • “Can you walk me through how this actually works, in simple terms?”
    • “What’s the simplest version of this feature we could launch first?”
    • “What are the biggest risks we should be thinking about?”
    • “If we doubled our users overnight, what would break first?”

    Instead of pretending to understand, focus on getting to the core of the decision being made. The more conversations you have like this, the faster you’ll pick things up.

    4. Learn to Read (or Write) Technical Documentation

    You don’t need to write code, but learning how to skim and extract key insights from technical documentation is a game-changer. For example, when working with APIs:

    • Look at what inputs and outputs are expected.
    • Check for error handling—what happens when something goes wrong?
    • See if there are real-world use cases or examples that make sense to you.
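    Those three things are exactly what a typical API doc entry describes. Here’s a hypothetical payments call sketched in Python to show the pattern – the function name, fields, and error code are all invented, not any real provider’s API:

```python
# Hypothetical payments API, invented purely for illustration.
def create_charge(amount_cents, currency, card_token):
    """Inputs the docs say are required: an amount, a currency, a card token."""
    if amount_cents <= 0:
        # Error handling: good docs tell you exactly what failure looks like.
        return {"status": "error", "code": "invalid_amount"}
    # Output shape the docs promise on success:
    return {"status": "succeeded", "amount": amount_cents, "currency": currency}

print(create_charge(2500, "usd", "tok_abc123"))
# {'status': 'succeeded', 'amount': 2500, 'currency': 'usd'}
```

    If you can identify the inputs, the success shape, and the failure shape for any endpoint, you’ve extracted most of what a non-developer needs from the documentation.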

    Keep in mind that it’s not about mastering the details – it’s about learning to spot the important bits. Where to start?

    • Try reading the API documentation for tools you use (e.g., Stripe, Jira, or Slack).
    • Try testing out some of the APIs your developers are working on, and sharing your findings.
    • Try writing some documentation for your APIs (and testing out use cases for them at the same time).

    5. Play With No-Code Tools to Build Intuition

    One of the fastest ways to understand how tech works is to tinker with no-code platforms like:

    • Zapier / Make.com (to see how APIs connect different services).
    • Webflow (to understand frontend vs. backend).
    • Retool (to experiment with databases and internal tools).

    Even basic exposure to these tools will drastically improve your ability to talk with developers, because you’ll start to grasp what’s easy vs. what’s actually hard.

    6. Focus on Business Impact, Not the Tech Itself

    At the end of the day, you don’t need to be fluent in developer-speak – you just need to align on business priorities. The best way to do that? Frame discussions in terms of outcomes, not features.

    • Instead of: “Can we add a caching layer to this API?”
    • Ask: “Our response times are slow—what’s the simplest way to speed them up?”
    • Instead of: “Can we use Kubernetes for deployment?”
    • Ask: “What’s the best way to ensure our infrastructure can scale when needed?”

    By staying focused on business goals, you create space for developers to make smart technical choices – without needing to micromanage how they do it.

    Fluent in Tech, Without Writing Code

    At this point in my career, I don’t have to think about how to “speak tech” – it’s second nature. But it wasn’t always that way. Like any skill, it takes time, exposure, and deliberate practice.

    Start with the big picture, focus on asking good questions, and build up your understanding gradually. Over time, the language of technology will become familiar, the technical discussions will make sense, and you’ll find yourself speaking developer fluently – without ever touching a line of code.

  • Product Quality for Non-Technical Founders


    Every non-technical founder has been there: Your developer is explaining why something is going to take three weeks instead of three days, using terms that sound like a foreign language. You nod along, wondering if you’re being taken for a ride or if this is really necessary.

    In this guide, I’m going to try and cut through the developer-speak to give you what you really need: practical tools to help you make informed decisions about your product’s quality, without needing to become a developer yourself.

    Quality Basics in Plain Language

    I’m a big fan of philosophy. One of my favourite books of all time is Zen & the Art of Motorcycle Maintenance, in which the author Robert M. Pirsig attempts to define what quality actually is: 

    Something you know when you see it, even if you can’t fully explain it. 

    Those of us dealing with the daily realities of trying to ship a product (like founders, for example) need to get a little more concrete about what our definitions of quality are. You need to actually be able to explain it. 

    A dear friend (Jerry Weinberg) once told me that “quality is value to some person”, so when thinking about how to define what quality is for a given scenario, that’s usually where I’ll start: value.

    Quality is value to some person.

    In the context of a product, or a service, I’ll be thinking about what value is being delivered, and how, and what risks may threaten the delivery of value to the person who cares about it. 

    Threats to value (risks) are going to differ depending on the context in which you’re working. A free-to-consumer web app is going to have a different risk profile to something that requires payments to be made, and different risks again from financial or medical software. Risk profiles may also differ depending on the technologies used to build the product (software in the examples used) versus how the product is delivered (via the web, or in an app store, or using some other mechanism) versus where it’s ultimately hosted (in the cloud, on the web, on a PC or mobile device).

    What I’m getting at here is that it can be quite tricky to pin down what quality actually is for your specific context, since it’s liable to change depending on a number of variables:

    • Who your audience is
    • The type of product you’re offering to them, and the business model you’re using to do so
    • How the product is intended to be used, and under what conditions
    • How and what the product is made from, and delivered
    • Regulatory or legal considerations
    • How mature the product is and under what constraints it’s being created

    If you find yourself considering what quality means for your specific product, you’ll likely find it helpful to think in terms of qualities: traits you want your product to have, to make it more appealing to your customers, and ensure they get the value you’re seeking to deliver to them. Qualities such as the ones in the list below:

    • Usability — the provision of a pleasurable user experience
    • Reliability — the ability to use the product without faults or errors
    • Scalability — the product can be used at the speed customers may reasonably expect
    • Security — the product can be used in a way which protects customers (and the business owner) from bad actors who may wish to steal time, data or money
    • Maintainability, deployability, installability [sic], testability, visibility — technical qualities relating to how easy (or not) the product is to develop, maintain, host, monitor and repair

    Believe it or not, there’s an entire industry devoted to the craft of identifying threats to value, and having worked hands-on in that industry (software testing services & products) for the best part of two decades, I’ve supported, consulted and delivered countless quality definitions for all kinds of products, and they’ve always been different depending on the context and the constraints, much as I’ve described above.

    Over the course of time though, I developed some starter-packs for these discussions; checklists which could be applied to jump-start the process for any product or industry, so we didn’t have to start from scratch each time. 

    Simple templates for any product

    Generally speaking, I’d prefer to have a conversation around what quality looks like (what’s valuable to the people who care), but that’s not always possible. When it is, I’ll frame the discussion using sticky notes, or my personal preference, a mind map. When it’s not, carrying out an assessment using a checklist like the one below can be super helpful.

    Core Quality Assessment Templates

    1. Quick Quality Scan (10-Minute Assessment)

    User Impact Score – Score each 1-5 (1=Poor, 5=Excellent):

    [ ] Core function reliability: ___

    [ ] Error frequency: ___

    [ ] Performance speed: ___

    [ ] Data accuracy: ___

    [ ] Interface clarity: ___

    Total: ___ /25 (Action needed if below 20)

    Critical Risk Check – Flag any that apply:

    [ ] Security vulnerabilities

    [ ] Data loss potential

    [ ] Payment issues

    [ ] Legal/compliance gaps

    [ ] Major user complaints

    Any flags require immediate attention.

    2. Feature Readiness Checklist

    Minimum Quality Gates – Must pass all:

    [ ] Core function works consistently

    [ ] Basic error handling exists

    [ ] Performance meets minimum threshold

    [ ] No data corruption risks

    [ ] Basic security in place

    [ ] Accessible to target users

    [ ] Can be supported by team

    Nice-to-Have Quality Aspects – Track progress:

    [ ] Comprehensive error handling

    [ ] Performance optimization

    [ ] Enhanced security features

    [ ] Polished user interface

    [ ] Extended platform support

    [ ] Advanced monitoring

    [ ] Complete documentation

    3. Quality Debt Tracker

    Current Issues – For each issue:

    Severity (High/Medium/Low): ___

    User Impact (1-5): ___

    Fix Complexity (1-5): ___

    Business Cost (1-5): ___

    Priority Score = (User Impact × Business Cost) / Fix Complexity
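    The Priority Score formula above is easy to put to work. Here’s a small sketch applying it to a few example issues (the issues themselves are made up) and sorting them so the highest-priority fix comes first:

```python
# Priority Score = (User Impact x Business Cost) / Fix Complexity,
# as defined in the Quality Debt Tracker. Example issues are invented.
def priority_score(user_impact, business_cost, fix_complexity):
    return (user_impact * business_cost) / fix_complexity

issues = [
    # (description, user impact 1-5, business cost 1-5, fix complexity 1-5)
    ("Checkout button intermittently fails", 5, 5, 2),
    ("Minor styling glitch on settings page", 1, 1, 1),
    ("Slow report export", 3, 4, 4),
]

# Highest score = fix first.
for name, impact, cost, complexity in sorted(
        issues, key=lambda i: -priority_score(i[1], i[2], i[3])):
    print(f"{priority_score(impact, cost, complexity):5.1f}  {name}")
```

    Notice how a high-impact, high-cost issue that’s cheap to fix (the checkout bug, scoring 12.5) jumps to the top of the queue, which is exactly the behaviour you want from a prioritisation formula.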

    Technical Debt Categories – Track by area:

    [ ] Code quality issues

    [ ] Testing gaps

    [ ] Security concerns

    [ ] Performance problems

    [ ] UX inconsistencies

    [ ] Documentation gaps

    [ ] Infrastructure needs

    4. Release Quality Verification

    Pre-Release Checklist – Must complete:

    [ ] Core features tested

    [ ] Regression testing done

    [ ] Performance verified

    [ ] Security checked

    [ ] Data integrity confirmed

    [ ] Support team briefed

    [ ] Rollback plan ready

    Post-Release Monitoring – First 24 hours:

    [ ] Error rate normal

    [ ] Performance stable

    [ ] User feedback positive

    [ ] Support load manageable

    [ ] Systems stable

    [ ] Data flowing correctly

    5. Quality Metrics Dashboard

    Key Metrics to Track – Weekly tracking:

    Error rate: ___

    Performance score: ___

    User satisfaction: ___

    Support tickets: ___

    Technical debt score: ___

    Security status: ___

    Warning Thresholds – React if:

    Error rate > 1%

    Performance drop > 20%

    Satisfaction < 4/5

    Tickets up > 50%

    Debt score > 7/10
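    The warning thresholds above lend themselves to a simple weekly check. Here’s a sketch of what that could look like; the metric names and sample values are illustrative, and in practice these numbers would come from your analytics or monitoring tools:

```python
# The dashboard's warning thresholds as a simple weekly check.
# Metric names and the sample values below are illustrative only.
def warnings(metrics):
    flags = []
    if metrics["error_rate"] > 0.01:          # error rate > 1%
        flags.append("error rate")
    if metrics["performance_drop"] > 0.20:    # performance drop > 20%
        flags.append("performance")
    if metrics["satisfaction"] < 4.0:         # satisfaction < 4/5
        flags.append("satisfaction")
    if metrics["ticket_increase"] > 0.50:     # tickets up > 50%
        flags.append("support tickets")
    if metrics["debt_score"] > 7:             # debt score > 7/10
        flags.append("technical debt")
    return flags

this_week = {"error_rate": 0.02, "performance_drop": 0.05,
             "satisfaction": 4.5, "ticket_increase": 0.6, "debt_score": 3}
print(warnings(this_week))  # ['error rate', 'support tickets']
```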

    You can use the checklist to assess, and subsequently track the quality of your product at different stages of your development and release cycles, and do those things at different levels of detail. Here’s how:

    1. Start with the Quick Quality Scan. Use it daily/weekly for a rapid diagnosis of your current level of quality. It’s great for non-technical status checks, and it helps spot issues early.
    2. Use the Feature Readiness Checklist when you’re preparing new releases, evaluating feature completeness and/or planning technical work.
    3. Apply the Quality Debt Tracker to prioritise improvements, make technical debt visible & justify quality investments.
    4. The Release Quality Verification helps ensure systematic pre-release checks and structured post-release monitoring, and provides clear go/no-go criteria for decision making.
    5. Finally, the Quality Metrics Dashboard helps you track trends over time, identify metrics in decline, and communicate status to stakeholders.

    A key stakeholder group you’ll need to communicate with is your development team. Talking with developers about technical issues can be scary for anyone. Let’s take a look at some ways you can speak with them about the things that are important to you, without sounding like you don’t actually know what you’re talking about.

    How to sound smart when talking to developers

    Honestly, trying to sound smart is entirely the wrong play here. I’ve spent much of my career talking with developers, and even when I do know what I’m talking about and could genuinely add value to a discussion, I prefer to lead the conversation via probing questions instead. Any time I can motivate a developer to go back and revisit their assumptions, or what they otherwise think they know about a problem, by asking an insightful question about it, I’ll take that as a win. 

    So, how do you go about identifying those questions in the first place, for a given release or feature? 

    Well, I think there’s a few heuristics (or rules of thumb) you can call on when having these kinds of conversations. Let’s explore some of them, keeping in mind the caveats of a) some of these questions will be more or less appropriate depending on the specifics of the discussion & b) they’re by no means exhaustive; the suggestions below are intended as a kind of jumping-off point for you to be hopefully inspired by, and to generate your own questions and ideas from. 

    1. The User Journey Heuristic
    • “Can you walk me through how you think a typical user would experience this…”
    • “What happens if the user does X instead of Y?”
    • “Where might our users get confused or stuck?”
    • “How might this feature work for someone who’s never used our product before?”
    2. The Edge Case Explorer
    • “What’s the worst thing that could happen here?”
    • “How does this feature behave when the network is slow/down?”
    • “What happens if we get 100x more users than expected?”
    • “How will we handle failures or degradations?”
    3. The Simplicity Seeker
    • “Could we solve this problem in a simpler way?”
    • “What assumptions are we making that might not be true?”
    • “If we had to ship this tomorrow, what’s the minimal version that would work?”
    • “Which parts of this could we remove without losing core value?”
    4. The Future Proofer
    • “How hard would it be to change this in the future?”
    • “What might make us regret this decision in six months or a couple of years down the road?”
    • “How will this decision affect our ability to scale the product or business in the future?”
    • “Are we working ourselves into a corner by making this decision?”
    5. The Quality Investigator
    • “How will we know if this feature is working correctly?”
    • “What kinds of errors should we expect?”
    • “How can we make problems visible when they occur?”
    • “What kinds of monitoring or alerts do we need to support this feature?”
    6. The Integration Detective
    • “How does this interact with our existing systems?”
    • “What other parts of the system might be affected?”
    • “Are there any dependencies we need to consider?”
    • “What happens if service X goes down?”
    7. The Risk Revealer
    • “What will keep you up at night about this design or solution?”
    • “Where do you think we may run into problems?”
    • “What trade-offs are we making and have we considered alternatives?”
    • “What technical debt might we be creating with this approach?”
    8. The Context Seeker
    • “Help me understand why you chose this approach…”
    • “What other solutions did you consider?”
    • “What constraints are we working with?”
    • “What similar problems have you solved before?”

    If you’re looking at the list above and thinking “there seems to be a lack of data in the questions and answers…” — I agree! You should absolutely be encouraging your developers to seek out or otherwise utilise data to inform the decision making process when devising technical solutions. Here are some more [bonus!] heuristics, which can be used to steer the discussion along exactly those lines:

    1. The Data Demander
    • “What metrics will tell us if this feature has been successful?”
    • “How are we measuring X right now?”
    • “What’s our baseline for comparison?”
    • “What metrics can we use to test this approach?”
    2. The Usage Pattern Explorer
    • “What does our usage data tell us about this problem?”
    • “Which users are most affected by this issue?”
    • “Do we have data on how often this scenario occurs?”
    • “What patterns do we see in our logs?”
    3. The Performance Profiler
    • “Do we have metrics for the current response times?”
    • “What does the resource (memory/CPU/IO) utilisation profile look like?”
    • “How many database queries does this generate?”
    4. The Impact Quantifier
    • “How many users/transactions does this affect?”
    • “What’s the business cost of this issue?”
    • “Can we estimate the <business function(s)> hours saved/spent?”
    • “What is the performance improvement in real numbers?”
    5. The Trend Tracker
    • “How has this metric changed over time?”
    • “What growth rate do we need to support?”
    • “Are there seasonal or other historic patterns we should consider?”
    • “What does our historic data tell us about future needs?”
    6. The Resource Reality-Checker
    • “What’s the current load on our systems?”
    • “How much headroom do we have?”
    • “What does our capacity planning tell us?”
    • “Do we have data on resource utilisation?”

    What you may also have observed as you were scanning the heuristics above is that they frame the (hypothetical) conversation through different lenses; presenting different ways of looking at it, which lend themselves to different personas who may be involved in the product development process:

    • The Product Manager persona, taking the “how do we get maximum value from this feature?” perspective.
    • The UX Designer persona, taking the “how will users interact with this feature?” perspective.
    • The Tester persona, taking a “how will we test this feature works correctly?” perspective.
    • The Business Owner or Sponsor, taking a “how can we minimise costs and maximise profits?” perspective.
    • The DevOps Engineer persona, taking the “what’s the impact on our infrastructure?” perspective.

    Finishing up then, and to re-state my earlier point: the questions above aren’t intended to be used as a checklist for conversations with your engineers. I doubt they’d appreciate that! The heuristics and example questions can and should be used as prompts for conversations you may wish to have if you’re not fully confident in your ability to have a technical discussion with your engineer(s), but want to make sure you’ve covered the necessary bases before they go full steam ahead with actually building something, taking up valuable time and resources in the process.

    The questions will help you to ground your discussions in data, rather than assumptions; identify metrics or measurements you may be missing & create measurable success criteria; help make tradeoffs more explicit; and generally move you in the direction of a more evidence based development culture. 

    Let’s cut to the chase though.

    What quality issues actually matter?

    Or, to put it another way: What quality issues do you really need to concern yourself with, as a non-technical founder? 

    Taking into account limitations on time and resources, to a greater or lesser degree you’re going to want to home in on the most important things. In my experience, those are issues which directly impact your ability to:

    1. Deliver value to your users or customers,
    2. Keep your business running smoothly, and
    3. Scale without breaking.

    Putting in place measurements and leading indicators will go a long way towards keeping your business on track: watch for revenue-threatening issues that may prevent users from paying you, for features that have a direct bearing on user retention or that drive away potential customers, and for performance issues affecting core functionality.

    Additionally, you need to maintain a trust relationship with your customers. That means staying on top of data security & integrity, uptime, and SLAs relating to other core (and perhaps somewhat mundane) customer-facing activities like payment processing.

    Finally, and keeping the longer term view in mind, you need to keep tabs on growth-limiting and cost-multiplying factors: scalability bottlenecks, technical debt, infrastructure limitations, bugs or performance issues that generate excessive demands on your support operations, or take disproportionate development time and effort to fix. 

    Everything else? It’s probably negotiable. In the words of Voltaire, perfect is the enemy of good, and as a founder, your job isn’t to build a perfect product – it’s to build something valuable enough that customers will pay for it, and reliable enough that they’ll keep using it.

    Quality should enable growth, not hinder it.

    Use the templates and conversation heuristics I’ve presented above to stay focused on these critical quality issues, and don’t let perfect be the enemy of good enough. Your goal as a founder is to identify the sweet spot where quality supports growth rather than hinders it, and stay laser focused on your business objectives. 

    If I can help with that, feel free to reach out.

  • How I Built My Own Facebook CRM


    As I talked about in my previous article, building out a lead generation and CRM process for your business doesn’t need to be a heavyweight activity. But that doesn’t mean it’s without friction. I quickly learned this when I began doing Facebook-based lead generation for my own business.

    Initially, I was getting some traction — people were responding to my outreach and I was making connections. But the process was clunky, time-consuming, and didn’t scale well. It felt like I was spending more time manually managing the process than actually engaging in valuable conversations.

    The Old Way: Manual and Messy

    Here’s how things looked when I first started:

    I’d find a Facebook group that looked relevant — maybe it catered to freelancers, founders, or people in a particular niche that matched the kinds of clients I typically work with. Then I’d scroll through the members list and send out friend requests to people who seemed interesting.

    Once they accepted, I’d reach out via Messenger to strike up a conversation and see whether there was any synergy between their needs and my services. It wasn’t a bad approach — but it wasn’t very efficient either. I was making connections, but I wasn’t being particularly targeted, and I had no system for tracking who I’d messaged, who responded, and where any given conversation was at.

    A Smarter Approach: Semi-Automation

    That’s when I decided to build a Chrome extension to help me out.

    Now, Facebook understandably limits what you can automate. So I didn’t try to build a full-on bot. Instead, I created a tool that lets me manually trigger the key steps in my process — which still gives me structure and efficiency without breaking any rules.

    Once I’ve selected a group, I simply:

    1. Click the extension icon
    2. Hit “Scan Members”
    3. The extension extracts public profile data and syncs it to a connected Google Sheet

    No scraping behind the scenes. No background scripts. It only runs when I tell it to.

    End-to-End: From Group to CRM

    The beauty of this system is that it now gives me an actual leadgen workflow I can build on. Here’s what that looks like:

    Step 1: Find a relevant group I still start by finding groups where my ideal clients are likely to hang out. This part hasn’t changed. But now I’m more intentional about which groups I focus on.

    Step 2: Extract the contacts With one click, I scan the group’s member list. The extension pulls names, profile URLs, descriptions (usually job titles), and friendship status. All publicly visible, all added to a Google Sheet.

    Step 3: Organise and filter in Google Sheets Now that I have structured data, I can apply filters. Want to focus on founders? Consultants? UX designers? I can easily filter the list and flag promising leads.

    Step 4: Reach out to a refined list of contacts Instead of messaging everyone, I now have a curated list. That means my outreach is more personal, more relevant, and more likely to land.

    Step 5: Track conversations I use the same Google Sheet to log my outreach — when I messaged someone, whether they replied, what the follow-up should be. It’s simple but powerful.

    Step 6: Iterate and improve Because I have data, I can now iterate. Which groups produce the best leads? What kinds of messages get the most responses? Everything gets better with a little structure.
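    Steps 3 through 5 are just structured data plus filtering, which is why a plain Google Sheet is enough. To show the shape of it, here’s a minimal Python sketch of the same idea; the contacts and fields are invented, and in my actual workflow this lives in spreadsheet filters rather than code:

```python
# Steps 3-5 of the workflow, sketched with plain Python data structures.
# All contacts below are invented for illustration.
contacts = [
    {"name": "Alex", "title": "Founder",     "messaged": True,  "replied": True},
    {"name": "Sam",  "title": "UX Designer", "messaged": False, "replied": False},
    {"name": "Kim",  "title": "Founder",     "messaged": True,  "replied": False},
]

# Step 3: organise and filter - focus on the audience you care about.
founders = [c for c in contacts if c["title"] == "Founder"]

# Step 5: track conversations - who's been messaged but hasn't replied?
needs_follow_up = [c["name"] for c in founders
                   if c["messaged"] and not c["replied"]]
print(needs_follow_up)  # ['Kim']
```

    The point isn’t the code – it’s that once the data is structured, questions like “who do I follow up with next?” become one-line filters instead of memory exercises.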

    Why This Changed Everything

    Previously, I was working from memory and intuition. Now, I have a repeatable process that saves me time, keeps me organised, and gives me far more clarity on what’s actually working.

    It’s not a complex system. But it works.

    And because it’s built around tools I already use — Facebook and Google Sheets — there’s no learning curve and no need to adopt yet another CRM platform.

    Want to Try It?

    You can install the extension from the Chrome Web Store: Chrome Web Store Listing

    Want setup instructions and more? Head to: https://sjpknight.com/apps/fb-leads

  • The Art of Letting Go: A Stoic Approach to Mastering Delegation in Management


    Recently I’ve been thinking a little about delegation. I wanted to address some stumbling blocks I’ve faced myself, and give a little thought to how they can be overcome. As usual, mainly with a view towards helping me in my own efforts to be a better manager. Hopefully they’ll be of some service to you too.

    Reasons why you may be bad at delegation

    First up, what are some common reasons why someone like me might be bad at delegation in the first place? Well… There are a bunch of them, and a quick trawl of the interweb will yield a list much like the following one:

    1. Lack of Trust in Team Members
    2. Desire for Control
    3. Fear of Losing Relevance or Job Security
    4. Perfectionism
    5. Guilt About Adding to Others’ Workloads
    6. Lack of Time to Train or Explain
    7. Lack of Clear Processes or Guidelines
    8. Concern About Being Perceived as Lazy or Uninvolved
    9. Failures to delegate effectively, or at all

    I’m pretty sure I’ve fallen foul of some, if not all, of the basic failures above.

    A failure to delegate effectively will likely show up as a constantly overflowing inbox, an endless queue of tickets, and a constant sense of anxiety and/or dread about all the things that haven’t been done yet and may never actually be gotten around to. It means a large number of open threads, adding to cognitive load and, in the worst case, creating anxiety and affecting sleep.

    No doubt another quick search of the interweb would generate several how-to guides, teaching me some pro-tips for delegating more effectively. The real question for me though, is why I’m so reluctant to let go of my work in the first place. After all, it’s not as though I don’t have plenty of other things I can focus on, and my mental health would likely benefit as a result since I’d be less anxious and sleep better at night.

    I came up with a shortlist of my own fears and failings related to delegation below. If you’re a fellow sufferer, yours will no doubt differ in the specific details, but the outcome (a failure to delegate effectively) will likely be the same.

    Since there’s a fair bit of overlap between my personal reasons for failing to delegate effectively, and the generic list I mentioned earlier, I put the list into a table to show the overlap more clearly:

    my reasons for being bad at delegation

    Strangely, this gives me some degree of comfort, since it indicates my specific fears and challenges are somewhat common, and there’s likely a bunch of actions I can take or some systems/processes I can implement that will improve my abilities in these areas.

    How to get better at delegation

    Before I got into strategies to tackle specific failings, I wanted to understand whether there were some general principles or heuristics I could apply that would help provide some direction in this area. Since delegation is something of a life skill (I delegate responsibilities around the home to my children, for example), I cast my net towards philosophy in general, and stoicism in particular since it majors on the areas of wisdom and judgement.

    • Wisdom and Judgment: Stoicism teaches that wisdom and good judgment are crucial virtues. A Stoic leader would use wisdom to delegate tasks appropriately, taking into account their own strengths and weaknesses, as well as those of their team members.
    • Control and Acceptance: Stoic philosophy emphasises an understanding of what is within our control and accepting what is not. In terms of delegation, this could translate to a recognition of when it is necessary to delegate tasks to others and an acceptance that I can’t do everything myself.
    • Duty and Responsibility: Stoic philosophers believe in fulfilment of duty and taking responsibility for one’s role in life. A leader following Stoic principles would feel a sense of duty to delegate tasks as necessary to ensure the success of their team or organisation, while also taking responsibility for the overall outcome.
    • Emotional Resilience: Stoicism teaches the importance of being resilient and unswayed by external circumstances. This could be taken to mean not allowing pride or the desire for control to prevent me from delegating tasks when doing so is in the best interests of the team or project.

    Armed with a little Stoic background then, I can devise a few solutions to my own specific challenges, which seem to be largely grounded in a lack of confidence in my own abilities and a resulting fear of failure. Here are a few ideas I came up with, where the ideas on the left (lighter yellow stickies) feed into an overall summary or strategy for dealing with the issue on the right (darker yellow).

    fears preventing effective delegation

    All of which lead to a couple of overriding perspectives on how to mitigate my worst failings in the area of delegation:

    Focus on what can be controlled: Focusing on my actions and attitudes towards delegation, rather than the outcomes, will help me to control how I delegate, how I communicate, and how I respond to the results. I can control my own actions, but not those of others.

    Embrace my challenges as opportunities for growth: Viewing the challenges of delegation as opportunities to develop my leadership skills will likely help my team members grow and develop their own skills.

    Ta-da! Delegation problems solved, right? If only it were that simple! It’s a start though. I’ll let you know how I’m getting on in a future post, since it’s looking like mastering this art is going to be critical to my future product management success.

    As long as you live, continue to learn how to live.
    – Seneca

  • How to Learn a New Product as a PM: Advanced Tips

    How to Learn a New Product as a PM: Advanced Tips

    Black and white comic book-style illustration, inspired by classic graphic novels. A focused product manager sits at a desk with floating interface

    While I’m still getting to grips with my new writing cadence, I’m in the process of figuring out what best to write about and how. Thankfully, the process was made somewhat easier this week when I saw a question asking how best to get onboarded with a new product as a product manager. I figured the question would best be answered by an “ultimate guide” style post, particularly since, straight off the bat, I thought about my answer in terms of principles and heuristics rather than specific techniques or tooling (though I do address those things below).

    In the article, I’m going to talk about the general approaches up-front, largely drawing upon my experiences of working as a QA prior to becoming a PM, where I applied the tenets of curiosity, exploration and creativity on a more or less daily basis (though the same might be said of my career as a PM). Then I’ll get into the specifics of some of the methods and tools I might use to support those activities; again, drawing on my QA experience of seeking to automate as much of my work as possible (something I’m sadly not able to do so much of as a PM!)

    Curiosity: Asking the Right Questions

    In the testing community there was a meme for a while that basically turned the Quality Assurance acronym of “QA” on its head, on the basis that testers don’t and can’t assure quality as such. Instead they should focus on being Question Askers, because that’s one of the best ways to uncover risks and identify ways in which they can be addressed.

    I’d be inclined to take a similar approach to learning about a new product, focusing on the following areas:

    • Directly Interacting with the Product: Using the product as a customer or end-user would (taking into account those aren’t always the same people). Probe its features, strengths, and weaknesses, always asking, “Why was this designed this way?” and “What problem does this solve?”
    • Engaging with Stakeholders: During meetings with team members and key stakeholders, ask open-ended questions. Learn the ‘why’ behind the product’s current state, the decisions made, and the metrics for success (past & present). Seek to understand their aspirations for the product and the pain points they’re facing.
    • Engaging with Customers & End Users: Delve into customer feedback channels, support tickets, and user research reports with a focus on understanding the user’s voice and experience. What delights them? What frustrates them? Why?
    • Digging into Business Metrics: Question and seek to comprehend the key performance indicators, financial metrics, and historical data. What story do these numbers tell about the product’s journey and its market acceptance?

    Exploration: Seeking Information Broadly and Deeply

    In my experience of working as a tester, a trait I admired in the best testers I got to work with or otherwise learn from was that they really understood how to explore something, broadly and deeply. If you look around the testing field, you’ll find plenty of books, courses, articles, talks and workshops entirely devoted to the theory and practice of exploratory testing.

    I don’t see that so much in the product community. Without doing testers a disservice, having now spent a number of years working as a PM, I genuinely feel that being a PM is a much tougher gig, primarily due to the need to wear so many different hats and often to be responsible for a huge number of moving parts, mostly without any corresponding authority. Nevertheless, I think the art and craft of exploration is a skillset that can be brought to bear as a PM, both while onboarding with a new product and, arguably especially, once you’re a few years into the job. Here are a few ideas you can explore further (excuse the pun!):

    • Explore the Market and Competitors: Take a look around the wider market landscape and figure out who your direct and indirect competitors are. Try to understand where your product fits and what differentiates it from them. Look for trends in the marketplace, potential disruptions, and regulatory impacts if they’re applicable in your product space.
    • Explore the Technical Landscape: Familiarise yourself with the technology stack, architectural decisions, and technical debt. Try to understand not just how the product works, but how it’s built and maintained, and what technical challenges or opportunities lie ahead.
    • Take a deep dive into the product analytics: Explore user data, behaviour analytics, and segment performance. Try to go beyond the surface: understand not just what the data is showing, but use it to generate potential hypotheses about behaviours and areas that might warrant further investigation.

    Creativity: Envisioning What’s Possible

    Of course, the whole idea of having a PM in the first place is so they can create a roadmap into the future and execute on it, right? Well, perhaps not entirely, but it’s a big part of the role nonetheless, and it requires some exercising of the product manager’s creative faculties. Below are some specific applications of creative thinking, particularly as they relate to learning about a product and identifying quick wins:

    • Adopt a Growth Mindset: This means approaching problems and opportunities with a mindset that there’s always a way to improve. You can facilitate this with your team by encouraging brainstorming and being open to ideas that challenge the status quo, fostering a culture of innovation.
    • Identify Quick Wins: Try to identify some low effort, high impact quick wins, demonstrating immediate value and potentially freeing up resources for longer-term strategic initiatives.
    • Reimagine the Roadmap: Think about the current product roadmap creatively. Don’t just take it at face value; instead, try to imagine different scenarios, prioritise initiatives based on potential impact, and consider some bold moves that could significantly advance the product’s objectives.

    Tools, Apps, Mental Models, Frameworks and More!

    During my time as a QA, I learned quickly that I needed to underpin my activities with various tools, models, approaches (or frameworks) and other resources to ensure I was being as effective as I could. Particularly during my latter testing days when I worked as a consultant & freelancer and would often be jumping from one client to another during the course of a day or week. I needed to know what tools were available to support my work, and I needed to know how best to use them to maximise both my effectiveness and my efficiency.

    Obviously the same thinking applies to product management; perhaps even more so. While it’s substantially more difficult to automate work as a PM (though… I smell another article coming in that regard!), knowing which tools to use for which activity, and how, will clearly make life a lot easier. Here are a few of my go-to tools for the activities I listed above:

    • Product Interaction:
      • User Testing Platforms (e.g., UserTesting, Lookback.io): You can use these to watch real users interacting with your product and observe points of confusion or satisfaction.
      • Feedback Platforms (e.g., Canny, UserVoice): These tools allow you to gather, organise, and analyse user feedback systematically.
    • Stakeholder Engagement:
      • Communication Tools (e.g., Slack, Microsoft Teams): Use these for daily interactions with team members and stakeholders, and to browse past communications for context.
      • Meeting Platforms (e.g., Zoom, Google Meet): Facilitate one-on-one or group meetings to discuss expectations, concerns, and aspirations for the product.
    • User Understanding:
      • CRM Software (e.g., Salesforce, HubSpot): Review customer profiles, interaction histories, and feedback.
      • Survey Tools (e.g., SurveyMonkey, Typeform): Directly gather user feedback on specific aspects of the product or its features.
      • Pro tip: Nothing beats actually speaking to your customers & users directly.
    • Business Metrics Review:
      • Data Visualisation Tools (e.g., Tableau, YellowFin): Use for an interactive exploration of KPIs and other relevant metrics.
      • Spreadsheet Software (e.g., Microsoft Excel, Google Sheets): Essential for analysing raw data, running your own calculations, or building custom reports.
    • Market and Competitor Analysis:
      • Market Research Platforms (e.g., CB Insights, Gartner): Access market reports and industry analyses for a broad understanding of the market landscape.
      • SEO and Competitive Analysis Tools (e.g., SEMrush, Ahrefs): Understand competitors’ digital presence, keyword strategies, and user sentiments.
    • Technical Landscape Understanding:
      • Documentation Tools (e.g., Confluence, Notion): Review architectural decisions, technology stacks, and known issues or technical debt documented by the engineering team.
      • Code Repositories (e.g., GitHub, Bitbucket): Even without deep technical expertise, browsing repositories can give you a sense of the project’s scale, complexity, and organisation.
    • Deep Dive into Analytics:
      • Web Analytics Tools (e.g., Google Analytics, Pendo): Analyse user behaviours, traffic sources, and conversion rates.
      • Heatmap Tools (e.g., Hotjar, Crazy Egg): Visualise where users are focusing, clicking, or dropping off on your site or app.
    • Growth Mindset Adoption:
      • Idea Management Software (e.g., Aha!, ProdPad): Use for brainstorming, capturing, and prioritising ideas from all stakeholders.
      • Mind Mapping Tools (e.g., MindMeister, XMind): Helpful for brainstorming sessions, laying out complex ideas, and planning.
    • Quick Wins Identification:
      • Project Management Tools (e.g., Jira, Monday): Break down quick wins into manageable tasks, assign them to team members, and track progress.
      • Feature Flagging Tools (e.g., LaunchDarkly, Split): Test new features with a subset of users to validate potential quick wins before broad rollouts.
    • Roadmap Reimagining:
      • Roadmapping Software (e.g., Roadmunk, Jira Product Discovery): Visualise the product roadmap, explore different scenarios, and communicate strategic plans effectively.
      • Storyboarding Tools (e.g., Miro, InVision): Visualise user journeys, potential new features, or future state scenarios.
    • Lean Principles Application:
      • Rapid Prototyping Tools (e.g., Figma, Balsamiq): Create MVP versions of new features or products for user testing and feedback.
      • Hypothesis-Driven Development Frameworks (e.g., Lean Startup): Use a structured approach to formulating hypotheses, conducting experiments, and applying learnings.

    One of the questions you might reasonably ask throughout the process of questioning, exploring and generating creative insights relating to your product is, what do I do with all the information I’m generating? I’m personally a fan of using mind maps for a lot of my work (listed in the growth mindset section above), and I’ve found that they can be extremely useful for the joint purposes of exploring and analysing, whilst also documenting your thinking and the decisions you’ve made along the way (in the form of nodes, branches, notes, clippings etc).

    As a matter of interest, that’s exactly how I approached the creation of this article: by identifying the major areas I wanted to focus on (curiosity, exploration and creativity), and then branching out further into how those areas decomposed into sets of activities and the tooling to support them. You can see what that looks like below.

    Thanks for reading – see you in the next one!

  • On Signals & Noise

    On Signals & Noise

    Black and white comic book-style illustration, inspired by classic graphic novels. A symbolic representation of 'signal vs. noise'

    Most days, while I’m fresh, and with a jug of freshly brewed black coffee to hand, I try to fit in three hours or so of deep work.

    Before I can do that though, I need to check my emails and messages. Not necessarily with a view to responding to all of them, but trying to figure out whether there’s something urgent that requires my immediate attention and/or some kind of action.

    I scan email titles to see whether anything jumps out. I look to see who sent the email, or who they sent it to. If the email is from, or to, someone important (including me of course!), I might open it up and take a peek, to figure out whether it’s going to require some amount of effort to respond to, or whether I can write a quick reply and draw a line under it there and then.

    I take a similar kind of approach to messages in Slack or Teams, applying the following heuristics:

    • What’s the message about?
    • Who is it from?
    • Does it require immediate attention, or can it wait until later?

    The “does it require immediate attention” point is important. It speaks to prioritisation, which is a critical facet of any knowledge worker role, and has particular significance in the product space, since the product manager role revolves around being a master of prioritisation. I have to think about how the request fits into my plan for how I’m going to go about my work that day. As a PM, I may also need to think about how it fits into a bigger picture of business priorities, stakeholder requirements, company politics, customer needs, and other dimensions of priority.

    Having duly pondered some or all of the above – I ultimately have to make some kind of decision as to whether the email or message actually requires my attention, how quickly, and what to actually do about it.
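To make those heuristics concrete, here's a toy triage rule in Python. The senders, keywords, and scoring thresholds are entirely invented, just to show the shape of the decision:

```python
# Toy message-triage heuristic: score a message on who it's from and what
# it's about, then decide whether it needs attention now, today, or later.
# The sender list, keywords, and thresholds are illustrative only.

IMPORTANT_SENDERS = {"ceo@example.com", "biggest-customer@example.com"}
URGENT_KEYWORDS = ("outage", "blocked", "deadline", "urgent")

def triage(sender: str, subject: str) -> str:
    score = 0
    if sender in IMPORTANT_SENDERS:
        score += 2  # "Who is it from?"
    if any(word in subject.lower() for word in URGENT_KEYWORDS):
        score += 2  # "What's the message about?"
    if score >= 4:
        return "now"    # important sender AND urgent topic
    if score >= 2:
        return "today"  # one strong signal; fit it into today's plan
    return "later"      # probably noise; batch it up

print(triage("ceo@example.com", "Prod outage!"))  # -> now
```

In practice the scoring happens in your head rather than in code, of course, but writing it down like this makes the implicit thresholds visible and easier to question.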

    What this amounts to, in case you hadn’t already realised, is signal detection. I’m seeking to optimise my attention for signals, and trying to filter out noise. In the general case, that boils down to 3 steps:

    1. Filtering: In an age of information overload, you, dear reader, are constantly having to decide which pieces of information to attend to and which to ignore. Determining, for example, which email is critical to read now versus which can wait (or be discarded) can be framed in terms of Signal Detection Theory: Hits, misses, false alarms, and correct rejections.
    2. Prioritisation: When selecting which actions or tasks to pursue, your decision will often revolve around incomplete or uncertain information. Your ability to discriminate between what’s genuinely impactful versus what only seems so, can determine the success of an activity or project. At the macro level, it could affect the success of your team, organisation or business.
    3. Decision Making: When you’re faced with ambiguous data, you have to decide on a course of action (including taking no action). For instance, you may have to decide if a change in a data trend is a genuine signal of an underlying issue, or if it’s just random noise.

    A quick primer on Signal Detection Theory (SDT)

    Signal Detection Theory (SDT) is a framework for understanding how decisions are made in the presence of uncertainty, particularly when detecting faint or ambiguous stimuli. The concept emerged from the field of electrical engineering and was then applied to radar signal processing during World War II. Later on, the principles were adopted and extended in psychology to explain how humans and other animals make decisions under conditions of uncertainty. The theory differentiates between the actual state of the world (whether a signal is present or absent) and your decision about that state.

    The key concepts you need to know for the purposes of this article are that a signal detection activity has four possible outcomes: Hits, Misses, False Alarms, and Correct Rejections.

    • Hit: Signal is present, and you correctly detect it.
    • Miss: Signal is present, but you fail to detect it.
    • False alarm: Signal is absent, but you believe it’s present.
    • Correct rejection: Signal is absent, and you correctly identify it as absent.
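Given counts of those four outcomes, you can actually quantify how good a detector you are. Here's a small sketch using the standard SDT measures (hit rate, false-alarm rate, and the sensitivity index d′); the counts themselves are made up for illustration:

```python
# Compute hit rate, false-alarm rate, and sensitivity (d') from the four
# Signal Detection Theory outcomes. The counts below are invented.
from statistics import NormalDist

hits, misses = 40, 10               # trials where a signal was present
false_alarms, correct_rejects = 5, 45  # trials where it was absent

hit_rate = hits / (hits + misses)                        # 0.8
fa_rate = false_alarms / (false_alarms + correct_rejects)  # 0.1

# d' = z(hit rate) - z(false-alarm rate); higher = better discrimination
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)
print(round(hit_rate, 2), round(fa_rate, 2), round(d_prime, 2))  # -> 0.8 0.1 2.12
```

A d′ around 1 is modest discrimination and 2+ is quite good; the useful insight for a PM is that hits and false alarms trade off, so you can only judge your "instincts" by tracking both.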

    The concepts of signal detection are applied in lots of areas, including psychology, medicine and economics, to name a few. They’re generally applicable in day-to-day life (trying not to get run over when crossing the road for example, or literally paying attention to signals if you’re driving a car), in knowledge work and for the purposes of this article, product management.

    How signal detection fits into product management

    Signal Detection Theory (SDT) isn’t considered a product management framework in itself; nor is it explicitly integrated into any widely recognised product management frameworks. However, I wouldn’t be writing this article if I didn’t think the principles of signal detection could be incorporated into some of your existing product management practices, and help you to make better decisions in the midst of uncertainty, ambiguity, and change. Areas where I think it could be used include:

    • Prioritisation: Tools like RICE (Reach, Impact, Confidence, and Effort) or WSJF (Weighted Shortest Job First) require decisions based on uncertain or incomplete information. Applying signal detection principles can enhance these tools by helping you to differentiate between genuine insights and random fluctuations or biases.
    • Roadmapping: When you’re planning a product roadmap, distinguishing genuine needs (signals) from transient or less impactful items (noise) is essential. Consciously framing roadmap candidates in these terms makes that distinction easier to draw and to defend.
    • Research and User Feedback: When generating or analysing user feedback, product managers are aiming to separate genuine patterns and needs (signals) from the variability and noise of less useful feedback.
    • Data Analysis: Metrics sometimes show fluctuations due to a myriad of reasons. For example, our NPS score went up 10 points last week. Why? Is there some important insight (signal) to be drawn from the sudden spike, or is it just noise?
    • Stakeholder Communication: Feedback from stakeholders can be conflicting or reflect organisational politics and biases. Applying heuristics to evaluate which pieces of feedback are strategically important signals and which might be noise, will help ensure that product decisions align with strategic goals.
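To make the prioritisation point concrete, here's what a basic RICE calculation looks like in code. The items and estimates are invented; note that the confidence term is exactly where the signal-versus-noise judgement enters:

```python
# RICE score = (Reach * Impact * Confidence) / Effort.
# The items and estimates below are invented for illustration.

items = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("Bulk export",        500, 1.0, 0.8, 2.0),
    ("Onboarding revamp", 2000, 2.0, 0.5, 4.0),
    ("Dark mode",         3000, 0.5, 0.9, 1.0),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Rank highest-scoring first; here "Dark mode" tops the list at 1350
ranked = sorted(items, key=lambda item: rice(*item[1:]), reverse=True)
for name, *estimates in ranked:
    print(f"{name}: {rice(*estimates):.0f}")
```

A spike of loud but unvalidated feedback should translate into a low confidence value, dragging the score down until you've confirmed it's a genuine signal rather than noise.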

    Failure modes

    There are lots of ways to fail at product management. Many of them come down to misclassifying signals as noise and vice-versa. Most of them I’ve made myself at some point:

    During feature development:

    • I’ve been swayed by loud or influential customers without verifying the broader applicability of their request.
    • I’ve implemented features just because competitors had them (or didn’t have them), without properly assessing their relevance.
    • I’ve massively underestimated technical complexity, leading to maintenance issues and technical debt.

    While gathering user feedback:

    • I’ve prioritised feedback based on volume, rather than quality or relevance.
    • I’ve disregarded feedback from less vocal user groups.
    • I’ve acted on feedback without validating its broader applicability.

    During analysis and research:

    • I’ve wasted time chasing new trends without assessing their longevity or relevance.
    • I’ve neglected to reevaluate past decisions in light of new information.
    • I’ve been too rigid and slow in adapting to significant market shifts.

    When interpreting data:

    • I’ve misinterpreted data without considering external influences.
    • I’ve focused too much on short-term metrics without considering long-term impacts.
    • I’ve implemented changes based on inconclusive results.

    When managing projects:

    • I’ve spread resources too thinly across too many projects.
    • I’ve neglected to revisit and adjust resource allocation as situations have changed.
    • I’ve been influenced by politics or internal pressures rather than objective product needs.

    We’ve had a new PM join us recently, and perhaps because she’s new to the team, or because of some personality traits, or perhaps because she’s simply more skilled in some areas than I am, she treats as signals a lot of things that I would otherwise disregard as noise. Sometimes she hits, sometimes she misses; but it’s interesting to observe the process, because it forces me to reassess my own responses and skills in this area.

    I’m a boots on the ground kinda guy. I love speaking to customers, and I work hard at being responsive to what I hear from them. Equally I listen to what my team and colleagues are saying, and am very responsive to signals that they may be blocked or need support in some way. I have good intuition about problems that are likely to explode and therefore require immediate or escalated attention before they do so.

    I’m not always very good at identifying political signals from the wider business and stakeholders, or at spotting patterns in data that may indicate a trend which requires attention. I sometimes miss important signals or confuse them with noise.

    Given that I’ve identified some weaknesses in my signal to noise detection capabilities, what are some ways I can make some improvements?

    Getting better at signal detection

    Here are my ideas for helping to develop those skills, if you feel you may be falling short of the mark (as I often do!):

    1. Implement feedback loops: Seek to regularly collect feedback from stakeholders, users, and teams. Reflect on your capabilities and home in on areas for improvement based on this feedback.
    2. Diversify your Information Sources: Don’t rely on a single source of data. Triangulate your insights from different channels to paint a more accurate picture.
    3. Seek external perspectives: Try to gather viewpoints from outside of your immediate circle. Utilise friends, mentors and peers to gain fresh insights.
    4. Stay user or customer centred: Making sure you’re interacting with users and customers regularly will ground decision-making in the realities of their needs and desires.
    5. Cultivate critical thinking: Question your assumptions regularly. Develop a habit of playing devil’s advocate to challenge prevailing ideas (something I find alarmingly easy!)
    6. Utilise appropriate frameworks & models: Employing decision-making frameworks like RICE (Reach, Impact, Confidence, Effort) or cost-benefit analyses can help to structure thinking.
    7. Practice reflective listening: Make sure you truly understand the feedback or data before making decisions based on it. I literally repeat to people what I think I heard them say and ask for confirmation we’re on the same page.
    8. Accommodate Uncertainty: The reality is that not every decision will have a clear signal associated with it. Sometimes you just have to make the best decision with information available at the time, and be open to iterating based on the outcome. Jeff Bezos coined the one-way versus two-way door analogy, which I think is useful in this regard.
    9. Manage your time: Set aside time for deep work without distractions. Give yourself room for more focused analysis and applied discernment.
    10. Learn continuously: Regularly engage in professional development, courses, and workshops and the like to refine your professional skills.

    There shouldn’t actually be many surprises here, since our brains have basically evolved to help us pay attention to useful information and disregard the detritus. What’s important, particularly in the product realm, and really anywhere your decisions are likely to have a lasting impact, is knowing when you can trust your instincts and think fast, and when you need to slow down and take a more measured approach.

    Keeping in mind the hit, miss, false alarm and correct rejection concepts from Signal Detection Theory above, it seems to me that you can probably train yourself to hit more frequently than you miss, avoid false alarms and, critically for slightly overwhelmed PMs, know when to make a defensively correct rejection. Hopefully some of the ideas above help you (and me!) to do that.

    If you enjoyed this article, or got some useful ideas from it, I’d appreciate it if you could hit the share button or leave a comment, just to let me know. If you have some ideas for improvements or want me to write about something specific, I’d be happy to hear about that too.

    Thanks, and see you for the next one!

    Sources:

    “An Introduction to Signal Detection and Estimation” by H. Vincent Poor

    “Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers” by John A. Swets

    “Awakening from the Meaning Crisis, Episode 39: The Religion of No Religion” by John Vervaeke YouTube

  • On Unplanned Work

    On Unplanned Work

    Black and white comic book-style illustration, inspired by classic graphic novels. A person sits surrounded by work

    Picture this: You have a plan for a thing your team needs to accomplish. It’s quite complicated. And you’ve planned extensively for it. It’s a cross-functional, multi-phased programme of works, with numerous milestones and dependencies. The plan is somewhat tricky to follow (to the uninitiated), but well documented, and as well understood by the team as it can be, given the various moving parts.

    You’re working the plan. And the plan is working.

    And then, just when you least expect it, along comes a senior leader with a new or revised agenda, based on <some other thing> that’s important to them or the business at that point in time. And just like that… Everything changes.

    Unplanned Work

    As I strive to re-establish a writing cadence, I was struggling a bit with what to write about this week. All of a sudden though, it just dropped into my lap! Figuratively speaking, and not in a nice way. You’ll perhaps recall that in my previous post I used the common analogy of goalposts being moved. Well, this is more like someone rotating the entire playing field by 180 degrees! I’m talking about a big change here; a huge amount of work and re-planning.

    How does one deal with that, without going slightly mad? Here’s a few thoughts, collated from my years of experiencing, and dealing with the fallout of such changes.

    Prioritise and assess

    Prioritisation, one might convincingly argue, is the bread and butter of any self-respecting product manager. Having also been a freelancer and contractor for much of my career, I think I could make the case that it’s the lifeblood of a successful individual contributor / knowledge worker / creative, also.

First up, then, you’ll need to assess the urgency and impact of the changes. This will guide the work, and help ensure resources are aligned with the new priorities and mobilised accordingly.

    You may also wish to carry out a gap analysis, to gain some sense of where there may be issues in terms of resourcing, timelines, and scope. You’ll of course want to share your understanding of what this looks like with the leadership team to make sure they are aware of the full picture, and perhaps in some diplomatic fashion with the stakeholder(s) responsible for the changes, so they understand the magnitude and impact of their dictate.
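By way of illustration, the urgency-and-impact assessment can be reduced to a very simple scoring pass. This is a minimal sketch rather than a formal framework; the 1–5 scales and the work items are entirely made up:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    urgency: int   # 1 (can wait) .. 5 (needs attention immediately)
    impact: int    # 1 (minor) .. 5 (critical to the new agenda)

def prioritise(items):
    """Order work items by a simple urgency x impact score, highest first."""
    return sorted(items, key=lambda i: i.urgency * i.impact, reverse=True)

items = [
    WorkItem("Re-plan phase 2 milestones", urgency=5, impact=4),
    WorkItem("Update stakeholder comms pack", urgency=3, impact=3),
    WorkItem("Archive the old roadmap", urgency=1, impact=1),
]

for item in prioritise(items):
    print(item.name, item.urgency * item.impact)
```

Even something this crude forces the useful conversation: which of these numbers do we actually agree on?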

    Risk Management

Figuring out where the new risks sit in relation to the changed plans should be very high on your list of priorities. That includes the old risks, which may still exist, may have been mitigated, or, in the worst case, may have been compounded by the new plans. Obvious risks will likely jump right out at you. More subtle risks will require some digging and analysis.

One way you can approach this is by carrying out an Impact Mapping exercise. I’m a big fan of this approach because it’s basically a mind map, and I love mind mapping things out because it makes them nice and clear and easy to collaborate on. Documenting how these changes affect current in-flight projects and future projects will help you to form contingency plans where they’re needed.

    For bonus points you can carry out a dependency analysis, either as part of your impact mapping exercise, or supplemental to it, identifying new or pre-existing dependencies, along with any other risks and issues.
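The dependency analysis can also be made mechanical once the dependencies are written down. Here’s a rough sketch of the idea – the project names and edges are invented for illustration:

```python
from collections import defaultdict, deque

# depends_on["X"] lists the projects that X relies on. Inverting it lets
# us ask: "if Y changes, what downstream work is affected?"
depends_on = {
    "Release 2.0": ["Billing revamp", "New onboarding"],
    "New onboarding": ["Auth service"],
    "Billing revamp": ["Auth service"],
    "Marketing site": [],
}

def affected_by(changed, depends_on):
    """Return every project transitively affected if `changed` changes."""
    dependants = defaultdict(set)
    for project, deps in depends_on.items():
        for dep in deps:
            dependants[dep].add(project)
    seen, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for downstream in dependants[current]:
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

# A change to the Auth service ripples into onboarding, billing and the release.
print(affected_by("Auth service", depends_on))
```

The point isn’t the code; it’s that once dependencies are captured explicitly, the “what does this change touch?” question stops being guesswork.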

    Communication and Stakeholder Management

Blergh! One of my least favourite phrases: stakeholder management. But a necessary evil.

Fun fact: I got turned down for a senior role by a decision maker a while back because they didn’t think my stakeholder management skillz were quite up to the level they were looking for. Rejection makes me sad… So, I’ve worked hard to improve my capabilities in this area. For me, it comes down to three key points (at least in the context of changing plans, as I’m discussing here):

    1. Immediate Communication: As soon as new changes are confirmed, they should be communicated to the relevant parties. The sooner the people who care about the plans know about the changes, the faster they can carry out any necessary adjustments to their plans.
    2. Clear Messaging: Effective communication of the changes will depend upon complete clarity as to why they are being made. Simply blaming the person who dictated the changes in the first place isn’t the “why” you’re looking for; a better approach is to identify the benefits of the revised plan instead.
    3. Open Dialogue: I talked about clear lines of communication in my previous article. It’s worth investing time and energy into cultivating an environment where team members feel comfortable sharing concerns or ideas for making any necessary changes smoother.

    Leadership and Team Morale

My team discovered that these kinds of changes were going to be taking place around the middle of last week. I wasn’t especially impressed by them, particularly since they were imposed from above, without any consultation on why the changes were being made, what the impact would be, or how it would be managed. It’s fair to say that the rest of the team wasn’t happy, and others won’t be when they find out, either.

    As the lead PM for my team, at least to some degree they look to me for cues on how to react to things. I like to think I try to maintain a positive outlook, taking the view that challenges are also opportunities for growth. I don’t claim to always get it right, but it’s what I aim for and I think the team appreciates it. Being sure to offer empathy and support to the team is a big part of this too.

    Part of staying positive can involve celebrating small wins. The path to the new release schedule will likely have milestones. Celebrating them will help to keep morale high.

    “God Laughs at Your Plans”

    Adjusting to a new plan is never easy; especially when you’ve devoted a lot of time and effort to the old one. Feelings of frustration and disappointment are a natural consequence. But if you’re anything like me, you’ll want to come out on top.

    Prioritising your activities, identifying and addressing risk, communicating the new plans clearly and effectively, and demonstrating leadership throughout the process will help you do that.

    Don’t forget to look after yourself too!

  • How to Handle Change in Your Team


[Image: black and white comic book-style illustration, inspired by classic graphic novels, of a small team of diverse people in the middle of a dramatic change]

    Something I’ve been thinking about recently is how best to adapt to newcomers on your team; particularly when they have authority over you (i.e. a new “boss” or manager), or when they’re moving the goalposts of your current working practices. If you’ve ever found yourself in this situation, it can be a struggle.

From previous interactions with more senior managers or the exec team, you may have a good understanding of the politics, their likes and dislikes, and a whole set of other contextual experiences that your new teammate(s) simply haven’t had access to yet. Because they haven’t had those experiences, and as a result don’t have the same context you do, they may be doing things, and asking you to do things, in ways that seem incongruent with your current understanding of how things work.

    Sometimes, that’s going to hurt. In the past, I’ve been asked to deliver things, or do things in a certain way, that I knew wasn’t going to work, because of my additional context. Despite arguing from this perspective, I’ve been informed they need to be done anyway. Sometimes, the outcome has surprised me, and there’s been a net improvement. Sometimes, the outcome has been exactly as I predicted, leading to a sense of wasted effort and the resulting feelings of negativity and “I told you so!” which accompany that.

Clearly, going into these kinds of situations with a closed mind or a fixed perspective that says “this is how it’s always worked in the past and this is how it’s always going to work in the future” is not going to lead to good outcomes over time, or to an ideal relationship with those new team members. You’ve got to keep an open mind. You’ve got to constantly be looking for the good, for the positive outcomes the team is seeking, whilst bringing your experience to bear in the most appropriate fashion.

    I’ve found myself in a similar situation recently, hence I’ve been giving it some deep thought. I found the following ideas helpful while navigating through it.

    Ensure clear lines of communication

Apparently I bias towards “brutal honesty” – or so a coach told me in the recent past. As you might imagine, that’s not always a good thing. But, as Popeye always used to say, “I yam what I yam!” If nothing else, honesty, even the somewhat brutal kind, helps to ensure that communication is, well…, honest. If sometimes a little uncomfortable! What those clear lines of communication actually look like for you and your team – in the context of your organisation, your ways of working, and the specifics of your work style or relationships – is clearly going to vary somewhat. The important thing is that they’re there in the first place. That may require some work on your part to establish, but it’s important work, and you should prioritise it if you’re not already doing so.

    Getting a handle on your new boss’s expectations is going to be critical. The sooner you can get this done the better. Seeking to understand their vision and objectives and how you fit into the new picture is going to set you up for success going forward. Of course, this assumes they want you to stay in the picture! Make it easier for them to put the pieces together by sharing your aspirations with them, and asking for guidance on how you can best align your efforts with the team’s objectives.

    Get feedback

    Once the communication lines are clear, you should think about asking for some feedback. Not everyone is very good at or likes doing this, and it may take some persistence, but it’s worth the ask. The last time I asked for feedback, I got some useful nuggets like:

    • “Be clear, direct and firm in your decisions, but be nice about it. There’s never a reason to be a jerk” – remember that brutal honesty thing?
    • “Learn how to genuinely soften your tone without being passive aggressive or condescending” – again, brutal – I’m working on it!
    • “Become more gracious at accepting processes/decisions that are not your preference” – oops!

    And so on, and so forth. What’s the takeaway? Regularly asking for feedback from both your new manager and peers will definitely provide valuable insights into areas of improvement and may also highlight your strengths.

    Roll with the punches

    One of my favourite quotes of all time, ever, is Mike Tyson’s “Everyone thinks they have a plan until they get punched in the mouth!” (Stolen and paraphrased somewhat from Helmuth von Moltke’s “no plan survives contact with the enemy”).

    Another favourite saying from one of my erstwhile colleagues with a somewhat unique sense of humour is “God laughs at your plans!” Indeed, he probably does.

    What’s the point here? It’s that having a plan is probably a good thing, but sometimes your plans need to change and adapt. If you can’t or won’t adapt in the face of change, something’s liable to break, and it probably won’t be whatever the change is. It’ll be you.

    Change, as they say, is inevitable; especially in the business world. Instead of resisting it, embrace it. This positive attitude will not only benefit you personally but will also be noticed by others, including your new manager. Be willing to adapt to new methods, technologies, or processes introduced by the new manager or team members. Being adaptable shows that you’re a team player and open to innovation.

    Learn continuously

    Learning is my favourite thing, so this one comes easy to me. Again though, your mileage may vary depending on how you feel about continuous learning in the first place (is it a chore, or is it a source of joy?), and how you actually go about it.

    I’ve expanded at length regarding how I go about continuous learning elsewhere (e.g. here), so I won’t go into a massive amount of detail here; suffice to say that I tend to prefer a JIT (Just In Time) approach to learning, where I’m digging out learning resources and utilising them as needed, with a view towards accomplishing specific activities.

Let’s say, for example, I need to do some deep data analysis. I’m not a “data analyst”, so this isn’t something I’m used to doing every day, and my skills are therefore lacking. What I need is some pointers to get me started, and a sense of what a good data analysis methodology (or recipe) might look like. In this kind of scenario, I’m going to spend a little time figuring out what the best sources of these kinds of instructions are, and then I’m going to go and read up a bit, or watch some videos, or do a few steps in a course. Just enough to get me started, so I can accomplish the specific task at hand, and so that I have something to refer back to if I start feeling lost.

There are plenty of great resources out there for this kind of thing; my go-to tends to be Coursera, for good and reliable (i.e. authoritative) content.

    An additional consideration for this point specifically, is if you’ve got a bunch of new people on your team, or a new manager; don’t be threatened by their knowledge or skills. Use them. The arrival of new and skilled team members is an opportunity to learn from them. Seek out their expertise, ask questions, and consider participating in workshops or courses to enhance your skills. Being proactive about your learning can set you apart.

    Look after yourself

    Above all, be kind to yourself. Don’t beat yourself up about your limitations, real or perceived. Show up each day, do your job, be nice to your co-workers, be as generous as you can be with your time, resources and knowledge. But acknowledge your limitations and your working hours. Use your vacation time. Keep up to speed with developments in your areas of specialist interest, but don’t forget to live your life, spend time with your family and pursue your own interests.

    Amidst all the changes and the drive to advance, don’t forget to take care of yourself. A healthy work-life balance is essential for sustained career growth.

    Reflect

    You could probably argue that the tips above are generally applicable to successful working, and I wouldn’t disagree. But I’ve sought to provide a specific slant and demonstrate how they might be applied to a specific situation: that of being a part of a growing team, and one in which the goalposts are being moved in directions you’re finding uncomfortable. I’ve found that when this happens, as it invariably does at points during a career, reflecting on the ideas above is helpful for me.

    By way of a post-script, I found myself in exactly the situation I described at the beginning, just yesterday. I did a bunch of work on a deliverable, despite having flagged well in advance that it needed to be reviewed by a senior leader ahead of time. What I predicted would happen, happened (spoiler: major revisions), and as a result the deliverable will now be parked for the immediate future, and will in all likelihood need to be completely re-worked once it’s picked back up again.

    Clearly this is a sub-optimal situation, and one that it would be easy to get negative about. But if I turn it around and look for the positive; I learned some things along the way, and I collaborated with the new team on those things successfully. I sought feedback and built clear communication channels. So on reflection, maybe it’s not so bad. I’m gonna go enjoy my weekend, and go into my next working week with a refreshed and positive attitude.

    I hope you do too. See you in the next one!

  • How to get Product Management Stuff Done in the Face of an Endless Barrage of Other Demands and Firefighting Activities


[Image: black and white comic book-style illustration, inspired by classic graphic novels, of a product manager battling against a whirlwind of incoming tasks]

One of the questions that frequently comes up within the large team of product managers I work in is: given all of the other problems, issues, questions and tasks I have to deal with day-to-day as a product manager, how can I find the time to do the higher-value activities that product managers should be spending time on? Things like:

    • Talking to customers
    • Analysing competitors
    • Researching the market
    • Ideating new products and features

    While I don’t pretend to have all the answers, one of the things I think I’ve been pretty good at over the course of my career is focusing on and prioritizing value-adding activities, so I think I do have some things to say here…

    Prioritise it – in the face of all the other stuff that is asked of you, unless you make something a priority it will always go to the back of the queue and in the worst case, never actually get done. I’m in the habit of planning my work in advance, usually on a week to week basis. But as my role is evolving and I take on more senior/lead responsibilities, I’m recognizing the need to plan even further out – months, even years in advance.

Timebox it – research is one of those things that can never really be considered complete. There’s always another avenue for investigation; always another question to answer. The easiest way to set some boundaries around the potentially infinite space of whatever research needs to be carried out is to specify the amount of time you’re going to spend on the activity – a spike, in agile verbiage. Start off by setting aside a couple of days, and see where you get to. If further time and effort is required, you can plan it in from there.

    Constrain it – to a specific question or hypothesis. If you go into the activity with a clear question or hypothesis in mind, you’re less likely to fall into rabbit holes along the way.

Distribute (or delegate) it – as the PM, you’re responsible for being the subject matter expert for your product(s), but that doesn’t mean you have to know (or are even capable of knowing) everything. Think about ways you can involve other members of your team in the research activities that need to be performed. I like to think of myself as the spider at the center of a web of information (and business relationships); I spin out the threads, and then need to be sensitive and responsive to tremors along them.

    Automate it – some research activities are relatively mechanical, and as such, are good candidates for being automated. Setting up keyword searches and subscribing to useful sources of information are low hanging fruit. It may be possible for you to automate some other activities too (e.g. I use an R script to pull and collate production metrics into a report, eliminating a few steps of manual effort).
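I mentioned using an R script for this; the same idea in Python might look something like the sketch below. The CSV layout and column names are hypothetical stand-ins for whatever your metrics export actually contains:

```python
import csv
import io

# Hypothetical export: one row per day of production metrics.
raw = """date,signups,errors
2024-01-01,120,3
2024-01-02,95,7
2024-01-03,143,2
"""

def collate(csv_text):
    """Collate daily metrics into the kind of summary a weekly report needs."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    signups = [int(r["signups"]) for r in rows]
    return {
        "days": len(rows),
        "total_signups": sum(signups),
        "avg_signups": sum(signups) / len(rows),
        "worst_error_day": max(rows, key=lambda r: int(r["errors"]))["date"],
    }

summary = collate(raw)
print(f"{summary['days']} days, {summary['total_signups']} signups, "
      f"worst error day: {summary['worst_error_day']}")
```

A script like this, run on a schedule, replaces the copy-paste-and-eyeball steps that would otherwise eat into research time.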

Document it – find a tool or means of capturing the information you gather so that it’s available to you when you need it. Ideally, you want something that allows you to organise the information in whatever way makes sense for your purposes; I mainly use Evernote for this, but there are plenty of alternatives.

    Be flexible about it – unlike some of the day-to-day tasks and activities I find myself embroiled in…

    • Acting as a scrummaster and release manager
    • Answering questions about new features during the course of development
    • Handling stakeholders and business needs
    • Supplying information to management, sales, marketing and others as needed
    • Writing documentation and producing other collateral such as webinars and blog posts
    • Generally overseeing the product roadmap and making sure everything is on track and everyone has what they need to do their job

    … Research is an activity that can, to some extent, be carried out anytime and anywhere. I always have my phone with me (I’m writing this blog post on it) and therefore have the capability to read, listen to or watch media related to the objects of my research more or less anywhere I am.

Of course, not everyone will agree with this point. Some people like to have a much clearer line between work and home life, and I certainly understand that perspective. All I’m saying here is what works (and has worked) for me. Being flexible about when and where I do this kind of work affords me many more opportunities to read, and think, and ultimately to add the kind of value that’s crucial for a PM to really succeed in the areas of the job which can easily fall by the wayside, but which are hugely valuable.

    Even if you’re not willing or able to be flexible in this way, hopefully some of the other suggestions above work for you. And if you have some other thoughts (or just completely disagree with me), I’d love to hear from you!

  • Thinking About Product Strategy: Processing Signals from the Changing World

    In my last entry I had narrowed down my view of the Changing World (insofar as I have modelled it) such that it looked like this:

[Image: getting granular with testing]

    And what I had established was that in order to meaningfully stay up to speed with changes in the world, you have to place some constraints upon the scope of what you’re going to look at – because otherwise there’s simply too much stuff going on and you’ll be overwhelmed by it all.

    So for my purposes, I want to focus primarily on the software testing and test management industry, which is a sub-class of the software industry:

[Image: the software testing industry is a subclass of the software industry]

That industry (or the marketplace in which I’m interested) comprises the set of customers and vendors that operate within it. What I’m interested in are the additional factors which may influence activity within the marketplace or industry:

[Image: influences within the marketplace of interest]

    Clearly there could be a great number of other factors to take into account, but when modelling you have to stop somewhere, right?

    From my model, it seems apparent that there are three main areas on which I can focus in order to build a picture of what’s happening in the marketplace in which I am interested:

    1. Customers
    2. Competitors
    3. Other influences

    So, how do I find out relevant information about those areas?

    Customers

    For customers, there’s a simple but not easy answer… Talk to them!

You’d imagine that this would be easy enough. But the challenge I personally experience (and some other PMs may identify with this) is that it’s actually quite tough to pin them down to a phone call or a Zoom discussion. And a one-on-one meeting with the customer is by far the most useful kind of interaction, in my experience.

    Other forms of customer interactions are usually by way of surveys, or may be in the form of feedback from other business departments (Customer Support, or Customer Success, or Sales typically) or from other situations such as conferences, meetups, webinars and the like.

    Depending on the technology stack, there’s also the possibility of feedback from within the product itself, by way of user-tracking or other forms of monitoring.

    So, signals from the customer (for me) look like this:

    • Direct interactions (meetings on the phone or Zoom)
    • Survey feedback (NPS or other survey types)
    • Feedback from business channels
    • Event feedback
    • In-product monitoring

    Competitors

    The next big area to try and understand is what competitors are doing in the marketplace.

For me, this is even trickier and more time-consuming than trying to elicit information from customers. Generally speaking, customers are pretty happy to tell you what they’re thinking about, and will often do so in no uncertain terms! Customers have a vested interest in improving the product, particularly if they have already parted with their cash.

    Competitors on the other hand – not so much! They will actively try to hide information so as not to broadcast their product strategy and intent.

Fortunately, there are some relatively well established mechanisms for analysing competitors in order to glean the needed information. Some sources of useful data include the following:

    • Industry publications
    • Case studies
    • Corporate info aggregation sites such as Owler, or Hoovers
    • Press releases
    • Company blogs
    • The competitor product itself (through analysis or reverse engineering)

Once all that information has been gathered, you can start to turn it into a SWOT (Strengths, Weaknesses, Opportunities, Threats) model. There are any number of resources on the interweb about what a SWOT is and how to do one, so I won’t dwell too much on it here. Except to mention that once you’ve gathered the necessary SWOT information about your competitors, it can be a good idea to consolidate it into a single view of all that data, so that you can use it to formulate attack and defend vectors, as well as to identify potential opportunities (per Steven Haines’ The Product Manager’s Desk Reference):

[Image: consolidation of SWOT data to identify attack and defend vectors]

    Which makes a lot of sense to me, hence reproducing the model.
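One lightweight way to hold that consolidated view is a table keyed by competitor and SWOT quadrant, so that shared entries surface mechanically. This is a minimal sketch with invented competitors and entries – weaknesses shared across competitors are candidate attack vectors, and shared strengths are areas you may need to defend against:

```python
from collections import defaultdict

# swot[competitor] = {"S": [...], "W": [...], "O": [...], "T": [...]}
swot = {
    "Acme Test Co": {
        "S": ["enterprise sales"], "W": ["slow release cycle"],
        "O": ["AI-assisted testing"], "T": ["open-source tools"],
    },
    "TestSoft": {
        "S": ["pricing"], "W": ["slow release cycle", "weak integrations"],
        "O": ["AI-assisted testing"], "T": ["open-source tools"],
    },
}

def shared_entries(swot, quadrant):
    """Group each entry in a quadrant by the competitors that list it."""
    grouped = defaultdict(list)
    for competitor, quadrants in swot.items():
        for entry in quadrants[quadrant]:
            grouped[entry].append(competitor)
    return dict(grouped)

# Weaknesses shared across more than one competitor are attack vectors.
attack_vectors = {
    entry: names for entry, names in shared_entries(swot, "W").items()
    if len(names) > 1
}
print(attack_vectors)
```

Whether you hold this in code, a spreadsheet, or a slide is beside the point; the value is in normalising the entries so the overlaps become visible.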

    Furthermore, Haines goes on to recommend an additional series of questions for delving deeper into competitor operations:

    • How is the competitor company operated?
    • How does the competitor actually produce their product?
    • Via what channels does the competitor distribute their product?
    • By what means does the competitor promote and sell their product?
    • How does the competitor service and support their customers?
    • What technologies are primarily used in the competitor product?
    • What does the employee situation or culture look like for the competitor?
    • What (if anything) are they communicating to any pertinent regulatory or government bodies?

    Other influences

    There’s a final area, other external influences, which warrants at least a little bit of attention. There’s not really too much I can say about this though, other than to pay attention to the world around your area of focus (remember the earlier narrowing down of that area) in as many ways as makes sense to you.

    Speaking personally, I’m a bit of an information hoover, and will suck up information from anywhere I can find it. But as mentioned previously, that comes at the risk of overwhelm. The challenge is knowing when to stop. Which is what, I hope, the development and application of my model will help me with – once I’ve refined it some more.

    Unfortunately, one thing it won’t help me with, is time.

    Specifically, finding time to do all of the research implied by the various activities above, while still delivering on all the other PM activities expected from me…
