product-strategy
Senior product management thinking for vision setting, prioritization, roadmapping, and stakeholder alignment. Covers opportunity sizing, RICE/Kano/ICE frameworks, OKR writing, product-market fit signals, and roadmap communication strategies. Trigger phrases: product roadmap, feature prioritization,
Product Strategy
Product strategy is the discipline of making coherent, defensible choices about what your product will — and won't — do. Most teams confuse tactics (features) for strategy (the coherent set of decisions that create a durable competitive advantage). This skill covers the frameworks PMs at top companies use to think clearly about product direction and communicate it effectively.
Core Mental Model
Vision → Strategy → Roadmap → Execution. These are distinct layers:
- Vision: The world you're trying to create. 10-year aspiration. Rarely changes.
- Strategy: The theory of how you win given your constraints. Which problems, which customers, which capabilities. Changes every 12-18 months.
- Roadmap: The prioritized sequence of investments. Changes quarterly.
- Execution: Sprint plans, tickets, releases. Changes weekly.
The product manager's job is to maximize the probability that the team builds the right thing. That means ruthless prioritization, not feature maximization.
Vision vs Strategy vs Roadmap
Vision (10-year aspiration)
Format: "We believe [customer] should be able to [do X]
without having to [deal with painful thing]."
Amazon: Every product on earth, delivered in two days.
Stripe: Increase the GDP of the internet.
Slack: Make work life simpler, more pleasant, more productive.
Anti-pattern: "Be the leading platform for [our category]."
→ This is a business goal, not a vision. It says nothing
about the customer or the world you want to create.
Strategy (18-month theory of winning)
A good product strategy answers:
- Who are we primarily serving? (Which customer segment is our beachhead?)
- What problem are we solving better than anyone else?
- Why us — what unfair advantage do we have?
- What are we NOT doing? (the hardest part)
Example:
Who: Early-stage B2B SaaS founders (< 20 employees)
What: Payment integration without needing backend engineers
Why us: Deepest API + best docs + fastest time to revenue
Not: Enterprise compliance, marketplace payments, in-person POS
Roadmap (quarterly investment plan)
Now/Next/Later beats date-based roadmaps every time for early-stage products.
NOW (current quarter — high confidence)
✓ One-click payment links
✓ Revenue dashboard
NEXT (next quarter — medium confidence)
→ Subscription billing
→ Dunning management
LATER (backlog — directional only)
→ Multi-currency support
→ Marketplace splits
Opportunity Sizing
Size markets before committing engineering time. Use TAM/SAM/SOM as a sanity check, not a precision exercise.
TAM (Total Addressable Market) — everyone who COULD buy
SAM (Serviceable Addressable Market) — those your business model can reach
SOM (Serviceable Obtainable Market) — realistic 3-year capture
Example: Project management software
TAM: All knowledge workers globally = $50B
SAM: SMBs in North America using cloud tools = $8B
SOM: 2% capture in 3 years = $160M ARR potential
Rule of thumb: If your SOM < $10M, it's either too niche or too early.
If someone tells you TAM is $1T, ask for the SAM.
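Writing the arithmetic down keeps the sanity check honest. A minimal sketch of the example above; every figure is an illustrative assumption from the example, not a benchmark:

```python
# TAM/SAM/SOM sanity check using the illustrative numbers above.
TAM = 50_000_000_000  # all knowledge workers globally, $/year
SAM = 8_000_000_000   # SMBs in North America using cloud tools
capture_rate = 0.02   # assumed obtainable share of SAM in 3 years

SOM = SAM * capture_rate
print(f"SOM: ${SOM / 1e6:.0f}M ARR potential")  # SOM: $160M ARR potential

if SOM < 10_000_000:
    print("Warning: SOM under $10M suggests too niche or too early")
```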
Jobs-to-be-Done Framing for Opportunities
JTBD reframes opportunity from "features users want" to "jobs users hire products to do."
Wrong: "Users want a report export feature."
JTBD: "When I finish an analysis, I need to share findings with my
boss in a format she can present to the board."
Wrong: "Users want dark mode."
JTBD: "When I work late at night, I need to reduce eye strain
so I can focus for longer sessions."
Jobs have three components:
- Functional: What they're trying to accomplish
- Emotional: How they want to feel (or avoid feeling)
- Social: How they want to be perceived
Prioritization Frameworks
RICE Scoring
RICE = (Reach × Impact × Confidence) / Effort
Reach: Users impacted per quarter (raw number)
Impact: Massive=3, High=2, Medium=1, Low=0.5, Minimal=0.25
Confidence: High=100%, Medium=80%, Low=50%
Effort: Person-months (fractional OK)
Example: Onboarding flow improvement
Reach: 500 users/quarter affected
Impact: 2 (high — affects activation)
Confidence: 80% (we have qualitative evidence)
Effort: 2 person-months
RICE = (500 × 2 × 0.8) / 2 = 400
Run RICE on 10-20 candidates, sort descending,
top of list is your next bet.
RICE limitations: Garbage-in-garbage-out. Your estimates are guesses. Use RICE to structure debate, not end it.
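A minimal scoring sketch using the scales above; the onboarding numbers come from the example, while the other two candidates and their estimates are hypothetical:

```python
# RICE = (Reach x Impact x Confidence) / Effort, effort in person-months.
IMPACT = {"massive": 3, "high": 2, "medium": 1, "low": 0.5, "minimal": 0.25}
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}

def rice(reach: float, impact: str, confidence: str, effort_pm: float) -> float:
    return reach * IMPACT[impact] * CONFIDENCE[confidence] / effort_pm

candidates = [
    ("Onboarding flow improvement", rice(500, "high", "medium", 2)),  # 400.0
    ("Dark mode", rice(2000, "low", "high", 1)),                      # 1000.0
    ("SSO", rice(80, "massive", "low", 4)),                           # 30.0
]
# Sort descending; the top of the list is your next candidate bet.
for name, score in sorted(candidates, key=lambda c: c[1], reverse=True):
    print(f"{score:>7.1f}  {name}")
```

Note how the hypothetical dark mode item outscores onboarding purely on reach. That is exactly the kind of result RICE should trigger a debate about, not settle.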
Kano Model
Categorizes features by their satisfaction/delight curve:
Basic (Must-Have): Absence causes dissatisfaction; presence = neutral
Examples: Mobile app performance, no data loss, password reset
Linear (Performance): More = better satisfaction proportionally
Examples: Load speed, storage, number of integrations
Delighter (Excitement): Unexpected; absence = neutral; presence = delight
Examples: AI auto-categorization, personalized insights, magic moments
Indifferent: Users don't care either way
Reverse: Feature actually decreases satisfaction for some users
Kano survey:
- "If you had [feature], how would you feel?" (Functional: Like/Expect/Don't care/Live with/Dislike)
- "If you did NOT have [feature], how would you feel?" (Dysfunctional: same scale)
- Cross-reference the two answers to categorize (see the lookup-table sketch below)
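A sketch of the cross-referencing step, assuming the standard Kano evaluation table with category names mapped to this doc's labels ("questionable" marks contradictory answer pairs):

```python
# Map (functional, dysfunctional) answer pairs to Kano categories.
# Rows = functional answer, columns = dysfunctional answer.
ANSWERS = ["like", "expect", "dont_care", "live_with", "dislike"]
TABLE = [
    # like            expect         dont_care      live_with      dislike
    ["questionable", "delighter",   "delighter",   "delighter",   "linear"],
    ["reverse",      "indifferent", "indifferent", "indifferent", "basic"],
    ["reverse",      "indifferent", "indifferent", "indifferent", "basic"],
    ["reverse",      "indifferent", "indifferent", "indifferent", "basic"],
    ["reverse",      "reverse",     "reverse",     "reverse",     "questionable"],
]

def kano_category(functional: str, dysfunctional: str) -> str:
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))    # linear (performance)
print(kano_category("expect", "dislike"))  # basic (must-have)
print(kano_category("like", "dont_care"))  # delighter
```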
ICE Scoring (lightweight alternative)
ICE = Impact × Confidence × Ease (all 1-10)
Use when you need fast-pass prioritization without detailed research.
Best for: hackathon-style sprint planning, startup with thin data.
Weakness: More subjective than RICE, easier to game.
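A minimal sketch; all three inputs are hypothetical 1-10 gut calls:

```python
# ICE = Impact x Confidence x Ease, each scored 1-10 (max 1000).
def ice(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

print(ice(8, 5, 9))  # 360
```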
OKR Writing
OKRs fail because teams write them wrong. The most common mistake: output-oriented KRs.
Objective (the aspiration)
- Inspiring, qualitative, time-bound (quarterly)
- Answers: "What do we want to achieve this quarter?"
- Should be slightly uncomfortable — if you're 100% sure you'll hit it, it's not ambitious enough
Key Results (the evidence)
- Measurable outcomes, not outputs
- 3 KRs per objective max
- Lead indicators preferred over lag indicators
BAD OKR:
O: Improve our checkout experience
KR1: Launch redesigned checkout flow ← OUTPUT (did we ship it)
KR2: Implement address autocomplete ← OUTPUT
KR3: Add Apple Pay support ← OUTPUT
GOOD OKR:
O: Make checkout the fastest in our category
KR1: Checkout completion rate → 78% (from 64%)
KR2: Time-to-purchase < 90 seconds (from 140s)
KR3: Cart abandonment rate < 22% (from 31%)
OKR grading: 0.7 is success. 1.0 means you weren't ambitious enough. 0.3 or below is a learning opportunity — document why.
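One common grading convention (not the only one) is linear progress from baseline to target, clipped to [0, 1]. A sketch using the checkout KRs above with hypothetical end-of-quarter actuals; lower-is-better targets need no special-casing because the signs cancel:

```python
def grade(baseline: float, target: float, actual: float) -> float:
    """Linear KR grade: 0.0 at baseline, 1.0 at target, clipped."""
    progress = (actual - baseline) / (target - baseline)
    return max(0.0, min(1.0, progress))

print(grade(64, 78, 74))    # completion rate landed at 74%: ~0.71
print(grade(140, 90, 105))  # time-to-purchase landed at 105s: 0.70
print(grade(31, 22, 28))    # abandonment landed at 28%: ~0.33 -> document why
```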
Product-Market Fit Signals
PMF is not a moment — it's a zone you enter when retention curves flatten.
Retention Curve Analysis
Plot: % of users who return on Day 1, 7, 14, 30, 60, 90
PMF negative: Curve declines to 0% (you're bleeding all users)
PMF emerging: Curve flattens at 5-15% (core audience found it)
PMF positive: Curve flattens at 25%+ (depends on category)
Consumer social: PMF at 25%+ D30 retention
B2B SaaS: PMF at 40%+ M3 retention (monthly active accounts)
Marketplace: Look at both supply and demand retention separately
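A minimal sketch of computing those retention points from an activity log; the log format and users are hypothetical:

```python
# Retention at each checkpoint = share of the cohort active on that day.
# activity maps user -> set of days-since-signup on which they were active.
activity = {
    "u1": {0, 1, 7, 30, 60},
    "u2": {0, 1},
    "u3": {0, 7, 14, 30, 60, 90},
    "u4": {0},
    "u5": {0, 1, 7, 14},
}

cohort_size = len(activity)
for day in [1, 7, 14, 30, 60, 90]:
    retained = sum(1 for days in activity.values() if day in days)
    print(f"D{day:>2}: {retained / cohort_size:.0%}")
# A curve that stops falling (here, D14-D60 hold at 40%) is the PMF signal.
```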
Sean Ellis Survey
Ask active users: "How would you feel if you could no longer use [Product]?"
- Very disappointed
- Somewhat disappointed
- Not disappointed (not really that useful)
Benchmark: >40% "Very disappointed" = approaching PMF
<40% = find the segment that IS very disappointed and focus there
Follow-up: "What type of person would benefit most from [Product]?"
→ Users who answer "Very disappointed" often describe themselves
in ways that reveal your real customer persona
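A quick tally sketch against the 40% benchmark; the responses are hypothetical:

```python
from collections import Counter

responses = ["very", "very", "somewhat", "not", "very", "somewhat", "very"]
pct_very = Counter(responses)["very"] / len(responses)

print(f"Very disappointed: {pct_very:.0%}")  # 57%
if pct_very > 0.40:
    print("Approaching PMF for this segment")
else:
    print("Segment the responses and find who IS very disappointed")
```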
NPS as a Signal
- NPS > 50: Strong PMF signal
- NPS 30-50: Moderate; loyalty is building but keep watching retention
- NPS < 30: Likely a serious retention problem
- Track cohort NPS over time — improvement matters more than the absolute number (a computation sketch follows)
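For reference, NPS is the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6); passives (7-8) count only in the denominator. A minimal sketch with hypothetical scores:

```python
scores = [10, 9, 9, 8, 7, 7, 6, 4, 10, 9]  # hypothetical 0-10 survey answers

promoters = sum(s >= 9 for s in scores)
detractors = sum(s <= 6 for s in scores)
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:+.0f}")  # +30
```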
Stakeholder Alignment
The Stakeholder Map
Executive sponsor: Needs vision + business metrics. Monthly update.
Engineering partner: Needs technical feasibility input early. Weekly.
Design partner: Needs user research + north star. Weekly.
Sales/CS: Needs near-term roadmap for customer conversations. Quarterly.
Customers: Need external-facing roadmap (curated). Quarterly.
Managing Up — Anticipate Concerns
Before presenting a roadmap to leadership, role-play their questions:
- CEO: "Why isn't [competitor feature] on the roadmap?"
- CRO: "How does this help the sales team close deals?"
- CTO: "What's the technical debt tradeoff here?"
- CFO: "What's the expected ROI and by when?"
Customer Advisory Board
5-10 power users who meet quarterly. Rules:
- They advise — you decide. Never promise to build what CAB suggests.
- Show them problems, not solutions: "We're seeing X pain — how do you experience it?"
- Their feedback is qualitative signal, not statistical proof
- Compensate with early access, not cash (cash creates bias)
Saying No — With Data
Framework for declining a feature request:
1. Acknowledge the pain: "I hear that [pain] is real — we've seen it in X% of support tickets."
2. Explain the tradeoff: "Building this would mean delaying [higher priority] by N weeks."
3. Show your reasoning: "Our data shows [higher priority] affects 3x more users."
4. Offer an alternative: "Here's how users currently solve this / here's what we're doing instead."
5. Keep the door open: "We've logged this — if our data changes, it moves up."
Never say: "That's a great idea!" (if you won't build it)
Never say: "That's not on the roadmap" (without explaining why)
Anti-Patterns
❌ Roadmap as commitment list — Dates on every item turn the roadmap into a contract that strangles the team when reality changes.
❌ HiPPO-driven prioritization — "Highest Paid Person's Opinion" wins. Kills data culture. Frame disagreements as "what evidence would change your mind?"
❌ Output OKRs — KRs that say "ship X" instead of measuring user outcomes are just project plans disguised as OKRs.
❌ Copying competitors feature-for-feature — This is a losing strategy. By the time you ship their feature, they've moved on.
❌ No "not doing" list — Roadmaps without explicit non-goals create scope creep. Write down what you're deliberately NOT building.
❌ Internal and external roadmaps that diverge too far — You'll lose credibility. The external roadmap should be a curated subset of the internal one, not a fabrication.
Quick Reference
RICE Template
Feature: [Name]
Reach: [N users/quarter]
Impact: [3/2/1/0.5/0.25]
Confidence: [100/80/50%]
Effort: [person-months]
RICE Score: (Reach × Impact × Confidence) / Effort = [X]
OKR Writing Test
- [ ] Objective is aspirational and slightly uncomfortable?
- [ ] Each KR measures an outcome, not an output?
- [ ] 3 or fewer KRs per objective?
- [ ] KRs are measurable with a clear baseline?
- [ ] You'd be proud AND surprised to hit 1.0?
Roadmap Format Comparison
| Format | Best for | Worst for |
| --- | --- | --- |
| Now/Next/Later | Early stage, fast-moving | Sales needing date commitments |
| Timeline | Coordinated launches, enterprise | Startups (false precision) |
| Outcome-based | OKR-aligned orgs | Teams that need tactical clarity |
| Feature list | Internal sprint planning | External stakeholder comms |
Related Skills
- engineering-management: Senior engineering management practices for tech leads and managers, covering 1:1 structure, the SBI feedback model, structured hiring, team health metrics, and managing up.
- figma-expert: Professional-grade Figma workflow for product designers and design system builders, covering auto layout, component architecture, design tokens with Variables, and developer handoff.
- prd-writing: Expert product requirements document writing, covering PRD structure, user stories with acceptance criteria, edge cases, and success metrics.
- product-analytics: Data-driven product decision making, covering north star metrics, funnel and cohort analysis, A/B testing, and event taxonomy design.
- technical-writing: Expert-level technical writing, covering docs-as-code workflow, the Divio documentation system, API documentation, and self-review checklists.