⚡Ruthless Prioritization: Stop building. Start thinking.

Building isn’t the goal, impact is. Let’s stop shipping for the sake of it and focus on what truly matters.

Cintia Henriksson
4 min read · Mar 8, 2025

I was there. For a long time.

Lost in the vast ocean, battling urgent tasks, chasing endless releases.

And from the inside, everything felt important. Everything was a priority.

Until one day, I looked up. I saw the distance. I saw the horizon.

And I realized: I had been running in circles. 😕

If this text makes just one person stop, think, and wake up from that loop, then it’s worth it.

Because prioritization is not about doing less; it's about doing what matters.

🎯 Step 1: Start with the end — What are we actually trying to achieve?

Forget the feature requests. Forget the roadmap. Forget “We need to launch this.”

🚫 Bad: “We need to improve onboarding.”
✅ Good: “New users drop off after three days — we need to increase retention by 15%.”

If you don’t have absolute clarity on the real outcome, then you don’t have a priority. Period.

📊 Step 2: Test reality — Will this actually move the needle?

Most teams are stuck in “build mode” instead of “impact mode.”

Before you commit, ask yourself:

Does this drive a key metric? (Revenue, retention, engagement)
Can we measure it? (If not, how will we even know it worked?)
What happens if we don’t do it? (Does anything break? Or is it just nice to have?)

🔪 Brutal truth: If it’s a “nice-to-have” disguised as a “must-have,” kill it.

🧪 Step 3: Hypothesis-driven development — Learning before building.

📢 Here’s where most teams fail:
They spend months building something they could have validated in weeks.

So instead of committing to full development, we test before building.

🚀 Discovery phase — Where everything begins

Product, design, and engineering work together from day one.

We start by framing:

🔬 Hypothesis:
“If we add contextual nudges during onboarding, more users will complete key setup steps, increasing Day 3 retention by 15%.”

🛠 Experiment:

  • Test with an email campaign first (before coding anything).
  • Run a small-scale user test with a clickable prototype.
  • Hypothesize improvements based on customer interactions.
  • Prioritize the experiments that actually help validate the product.
  • Launch one A/B test at a time on a subset of users.

💡 Why?
Because building is expensive, but learning is cheap.
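Running one A/B test at a time on a subset of users usually means deterministic bucketing: the same user always lands in the same variant, with no state to store. A minimal sketch in Python (function and bucket names are illustrative, not a specific experimentation tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_pct: float = 0.1) -> str:
    """Deterministically bucket a user: same user + experiment -> same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000 / 10_000  # uniform-ish in [0, 1)
    if bucket >= rollout_pct:
        return "not_in_experiment"  # most users keep the current experience
    # split the enrolled slice 50/50 between control and treatment
    return "treatment" if bucket < rollout_pct / 2 else "control"
```

Because assignment is a pure function of user and experiment name, you can replay it later when analyzing results, and growing `rollout_pct` only adds users without reshuffling existing ones.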

🔄 Iteration phase — Validate before scaling, but don’t give up too fast.

After running a quick validation, we analyze:

  • If retention increases, we scale.
  • If it doesn’t, we don’t just kill it immediately.

Here’s where most teams fail:

  • They see a test not working and immediately abandon the idea.
  • But you don’t just give up. You need to understand why.

If it doesn’t work, ask:
🔍 Did we execute it correctly?
🔍 Did we test with the right users?
🔍 Did we measure the right metric?

💡 The first version of an idea almost never works perfectly.
Refine. Learn. Iterate.

🔥 Step 4: Execution — Build only what matters & measure the impact.

Once we have evidence, we go into execution mode.

  • Product, design, and engineering work as one team. (No handoffs. Just collaboration.)
  • We ship the leanest version first. (No “nice-to-haves” creeping in.)
  • We track results in real-time. (If it’s not working, we pivot fast.)

💀 If the MVP delivers the impact, stop.
💡 If we need to tweak, iterate.

And this is where one more crucial rule comes in:

🎯 Step 5: Always have a plan for measuring impact — And a way back.

⚠️ Do not launch anything without a way to measure success.

I can’t stress this enough. Why? Because I’ve been there.

This means:
🔹 Set up tracking before launching. (So you don’t just “hope” for results.)
🔹 Make sure you can revert fast. (If it flops, don’t take weeks to undo it.)
🔹 Check early, but don’t overreact. (Sometimes, impact takes time.)
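In code, “tracking before launch” plus “a fast way back” often boils down to an analytics event at the decision point and a feature flag around the new behavior. A minimal sketch, assuming a hypothetical in-memory flag store and a JSON-line event format (a real setup would use your flag service and analytics pipeline):

```python
import json
import time

FLAGS = {"contextual_nudges": True}  # hypothetical feature-flag store

def track(event: str, **props) -> str:
    """Emit a structured analytics event as a JSON line.
    In production this would go to your analytics pipeline."""
    record = {"event": event, "ts": time.time(), **props}
    return json.dumps(record)

def show_onboarding(user_id: str) -> str:
    # The flag check is the "way back": flip the flag, no redeploy needed.
    if FLAGS["contextual_nudges"]:
        track("nudges_shown", user_id=user_id)
        return "onboarding_with_nudges"
    return "onboarding_default"
```

The event fires at the same place the flag is checked, so exposure and outcome are measured for exactly the users who saw the change.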

🚨 Key rule: If something isn’t working, don’t just kill it. First, understand why.

Maybe users didn’t notice it? (Visibility problem)
Maybe they don’t trust it? (UX issue)
Maybe the timing was wrong? (Behavioral factor)

💡 Only kill an initiative when you KNOW it’s fundamentally flawed — not just because it didn’t work instantly.

🔄 Real example: What this looks like in a product cycle.

Let’s say you’re a B2C PM and you see customers dropping off at checkout.

🔍 Discovery phase:

Hypothesis: Users abandon the checkout process because they don’t trust the payment flow.

Quick validation:

  • User interviews to identify trust concerns.
  • Data deep-dive to find where drop-offs happen.
  • Lightweight test: Display a secure checkout badge on a small group of users — does conversion improve?

🚀 Iteration phase:

Based on findings, you build a low-code, quick test:

  • Add a progress indicator in checkout.
  • Launch an A/B test with a variation that emphasizes security (e.g., “100% money-back guarantee” badge).
  • Measure impact on cart abandonment & purchase conversion.
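Measuring impact on conversion also means checking that the difference isn’t noise. One standard way is a two-proportion z-test on conversion counts; a stdlib-only sketch (the numbers you feed it come from your own test, these names are illustrative):

```python
from math import erf, sqrt

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rate
    between control (a) and variant (b), via a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

A small p-value (commonly below 0.05) suggests the variant genuinely moved conversion; a large one means keep collecting data or iterate, per the “don’t overreact early” rule above.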

🔥 Execution phase:

  1. If it works → Roll out to all users.
  2. If it doesn’t → Iterate or pivot.

💡 And always have a way back. No full build until we know it works.

This is how real prioritization works.

⚔️ Final rule: If everything is a priority, nothing is.

If you can’t explain in one sentence why something matters, it doesn’t.
If you’re still debating, you don’t have clarity.
If it doesn’t change user behavior, it’s noise.

Prioritization is not a list. It’s a choice.

Make it count. 🫡

🔗 Enjoyed this? Share it with someone who needs to hear this today.

#productmanagement #prioritization #leanstartup #impactnotoutput
