Prioritization techniques like MoSCoW and Impact/Effort (covered in Part 1) work for initial filtering, but they don’t produce comparable scores across items or account for multiple factors systematically. Scoring frameworks solve this by assigning numerical values based on defined criteria, creating repeatable prioritization without requiring financial data.

This guide covers three widely-used frameworks: ICE scoring, RICE scoring, and Weighted Scoring models.

The Scenario

To show how framework choice affects outcomes, we’ll apply ICE, RICE, and Weighted Scoring to the same decision.

Your context: VP of Product at a streaming service with 500,000 subscribers at $10/month = $5M monthly revenue.

Your goal: Increase monthly revenue from $5M to $7M (40% growth) within one year.

You have capacity for one major feature this quarter. Two features are competing:

Feature A: Watch Parties

  • Watch shows/movies together remotely with video chat and synchronized playback
  • Available to all subscribers
  • 3 person-months to build

Feature B: Premium Tier

  • New $15/month tier with 4K streaming, early access to new releases, and expanded downloads
  • Estimated 10% of subscribers will upgrade (industry benchmarks)
  • 2 person-months to build

The trade-off: Watch Parties help all customers through social engagement. Premium Tier directly increases revenue per customer but only for those who upgrade.

The question: Which framework produces the most useful ranking for your situation?

ICE Score: Fast Prioritization Technique

ICE was created by Sean Ellis to prioritize growth experiments at companies like Dropbox and LogMeIn. It provides a “minimum viable prioritization framework” for teams that need to move quickly.

How ICE Scoring Works

ICE scores three factors on a 1-10 scale: Impact, Confidence, and Ease. Multiply them together. Higher scores get priority.

Impact: The potential effect on your goal (revenue growth, conversion rate, customer satisfaction).

Confidence: How certain you are about your impact estimates. High confidence means data or precedent supports your assessment. Low confidence means educated guessing. Confidence is about impact certainty, not implementation difficulty.

Ease: Implementation simplicity. High ease means small effort and low complexity. Low ease means significant work or technical challenges.

The confidence factor distinguishes ICE from most methods. Many frameworks ignore uncertainty. ICE makes it explicit.

ICE Scoring: The Decision

| Feature | Impact (1-10) | Confidence (1-10) | Ease (1-10) | ICE Score | Priority |
|---|---|---|---|---|---|
| Watch Parties | 5 | 6 | 4 | 120 | 2nd |
| Premium Tier | 8 | 8 | 7 | 448 | 1st |

Calculation: Impact × Confidence × Ease

Watch Parties:

  • Impact: 5 – Moderate contribution to revenue growth. Brings new subscribers through viral sharing and reduces churn through social engagement, but the path to revenue is indirect and uncertain.
  • Confidence: 6 – Proven adoption (Disney+, Hulu data), but moderate confidence about business impact – uncertain how engagement translates to measurable revenue.
  • Ease: 4 – 3 months. Real-time video chat, playback sync, latency handling. Significant complexity.

Premium Tier:

  • Impact: 8 – Strong contribution to revenue growth. Direct revenue increase of $250K monthly (50K upgrades × $5) = 12.5% of the $2M goal. Also brings quality-focused new subscribers.
  • Confidence: 8 – Proven model (Netflix, Disney+, HBO Max). Industry benchmarks support the 10% upgrade estimate.
  • Ease: 7 – 2 months. Infrastructure setup and 4K transcoding. Straightforward execution.

ICE ranks Premium Tier first, 448 to 120.
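
The arithmetic is simple enough for a spreadsheet, but teams scoring a larger backlog can make the ranking repeatable with a few lines of code. Here is a minimal Python sketch using the scenario's numbers; the `Feature` class and its field names are illustrative assumptions, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    impact: int      # 1-10: potential effect on the goal
    confidence: int  # 1-10: certainty about the impact estimate
    ease: int        # 1-10: implementation simplicity

    def ice_score(self) -> int:
        # ICE = Impact x Confidence x Ease
        return self.impact * self.confidence * self.ease

features = [
    Feature("Watch Parties", impact=5, confidence=6, ease=4),
    Feature("Premium Tier", impact=8, confidence=8, ease=7),
]

# Rank highest score first.
for f in sorted(features, key=lambda f: f.ice_score(), reverse=True):
    print(f"{f.name}: {f.ice_score()}")
# Premium Tier: 448
# Watch Parties: 120
```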

RICE Score: Prioritization Framework for Scale

RICE was developed by Intercom’s product team as they struggled with competing ideas and inconsistent prioritization. They needed a framework that considered how many customers would be affected, balanced against confidence and cost.

How RICE Scoring Works

RICE evaluates four factors: Reach, Impact, Confidence, and Effort. The formula is (Reach × Impact × Confidence) ÷ Effort.

Reach: How many people the initiative affects within a time period (customers per quarter, users per month). Makes scale explicit.

Impact: Fixed scale measuring impact per customer: 3 (massive), 2 (high), 1 (medium), 0.5 (low), 0.25 (minimal). Forces precise differentiation.

Confidence: Expressed as a percentage. Measures certainty about your Reach and Impact estimates.

Effort: Measured in person-months. Reduces subjectivity compared to ease scoring.

RICE Scoring: The Decision

| Feature | Reach (customers/quarter) | Impact | Confidence | Effort (person-months) | RICE Score | Priority |
|---|---|---|---|---|---|---|
| Watch Parties | 500,000 | 2 (High) | 60% | 3 | 200,000 | 1st |
| Premium Tier | 50,000 | 2 (High) | 80% | 2 | 40,000 | 2nd |

Calculation: (Reach × Impact × Confidence) ÷ Effort

Watch Parties:

  • Reach: 500,000 – All subscribers can use it. Universal feature.
  • Impact: 2 (High) – High impact per customer. Moderate contribution to revenue growth through new subscribers (viral sharing) and reduced churn (social engagement), but indirect path to revenue.
  • Confidence: 60% – Proven adoption (Disney+, Hulu data), but moderate confidence about business impact – uncertain how engagement translates to measurable revenue.
  • Effort: 3 person-months – Real-time video chat, playback sync, latency handling. Significant complexity.

Premium Tier:

  • Reach: 50,000 – 10% of subscribers will upgrade (industry benchmarks). Only power users who value premium features.
  • Impact: 2 (High) – High impact per customer who upgrades. Strong contribution to revenue growth through direct upsell ($250K monthly = 12.5% of $2M goal) plus attracting quality-focused new subscribers.
  • Confidence: 80% – Proven model (Netflix, Disney+, HBO Max). Industry benchmarks support the 10% upgrade estimate.
  • Effort: 2 person-months – Infrastructure setup and 4K transcoding. Straightforward execution.

RICE ranks Watch Parties first, 200,000 to 40,000.
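
The RICE formula is just as easy to script. A sketch in the same style, again with illustrative names (RICE itself prescribes only the formula, not any particular data structure):

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: int         # customers affected per quarter
    impact: float      # fixed scale: 3, 2, 1, 0.5, or 0.25
    confidence: float  # 0.0-1.0: certainty about reach and impact estimates
    effort: float      # person-months

    def rice_score(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

initiatives = [
    Initiative("Watch Parties", reach=500_000, impact=2, confidence=0.60, effort=3),
    Initiative("Premium Tier", reach=50_000, impact=2, confidence=0.80, effort=2),
]

for i in sorted(initiatives, key=lambda i: i.rice_score(), reverse=True):
    print(f"{i.name}: {i.rice_score():,.0f}")
# Watch Parties: 200,000
# Premium Tier: 40,000
```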

Weighted Scoring: Custom Prioritization for Complex Decisions

Unlike ICE and RICE, which have fixed formulas, Weighted Scoring is fully customizable: you define your own evaluation criteria and assign each an importance weight.

How Weighted Scoring Works

  1. Identify four to six criteria reflecting your priorities
  2. Assign weights based on relative importance (must sum to 100%)
  3. Score items on each criterion using a 1-5 scale
  4. Calculate weighted totals by multiplying each score by its weight and summing

Weighted Scoring: The Decision

The leadership team defined criteria reflecting strategic priorities for reaching $7M revenue. All three pathways contribute to revenue growth, but retention is weighted highest as the foundation of subscription business success.

Criteria and Weights:

| Criterion | Weight | Rationale |
|---|---|---|
| New Customer Acquisition | 30% | Growing the subscriber base is essential for reaching $7M |
| Customer Retention | 40% | Retention is the foundation of subscription revenue |
| Revenue Increase | 30% | Monetizing existing customers accelerates growth |

Scoring (1-5 scale):

| Feature | Acquisition (30%) | Retention (40%) | Revenue (30%) | Weighted Score | Priority |
|---|---|---|---|---|---|
| Watch Parties | 4 (1.20) | 5 (2.00) | 3 (0.90) | 4.10 | 1st |
| Premium Tier | 3 (0.90) | 3 (1.20) | 5 (1.50) | 3.60 | 2nd |

Calculation: Sum of (Score × Weight) for each criterion

Watch Parties:

  • Acquisition: 4 – Viral growth loop as friends invite friends to watch together.
  • Retention: 5 – Social engagement is the strongest retention driver. Users invested in watching with friends are significantly less likely to cancel.
  • Revenue: 3 – Increased engagement contributes to revenue but no direct monetization increase per customer.

Premium Tier:

  • Acquisition: 3 – Appeals to quality-focused users seeking premium features. Attracts a segment but not a broad driver.
  • Retention: 3 – Better quality reduces churn for power users who upgrade.
  • Revenue: 5 – Direct upsell from $10 to $15. 50K upgrades add $250K monthly revenue.

Weighted Scoring ranks Watch Parties first, 4.10 to 3.60.
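
Because the criteria and weights are custom, a weighted model is easy to express as plain data. A minimal sketch using the criteria above (the dictionary layout is an assumption for illustration, not a standard):

```python
# Weights reflect strategic priorities and must sum to 100%.
weights = {"acquisition": 0.30, "retention": 0.40, "revenue": 0.30}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# 1-5 scores per criterion for each feature.
scores = {
    "Watch Parties": {"acquisition": 4, "retention": 5, "revenue": 3},
    "Premium Tier":  {"acquisition": 3, "retention": 3, "revenue": 5},
}

def weighted_score(feature: str) -> float:
    # Sum of (score x weight) across all criteria.
    return sum(scores[feature][c] * w for c, w in weights.items())

for name in sorted(scores, key=weighted_score, reverse=True):
    print(f"{name}: {weighted_score(name):.2f}")
# Watch Parties: 4.10
# Premium Tier: 3.60
```

Changing a weight and re-running immediately shows how sensitive the ranking is to strategic priorities, which is exactly the transparency benefit discussed below.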

Note on effort: This weighted scoring model focuses purely on impact – what value each feature creates. Effort and complexity are considered separately after ranking, using techniques like the Impact/Effort matrix from Part 1, so the final prioritization decision weighs both value and cost.

The contrast:

  • ICE ranks Premium Tier first – the multiplication rewards direct impact and execution simplicity
  • RICE ranks Watch Parties first – the formula rewards sheer customer reach
  • Weighted Scoring ranks Watch Parties first – the weights favor retention

Same features, three different rankings because each framework embeds different assumptions.

The transparency benefit: During the roadmap debate, Growth argued for Watch Parties (viral acquisition) while Finance argued for Premium Tier (direct revenue). Weighted Scoring resolved the argument by making strategic priorities explicit: everyone agreed to the weights upfront, and the math became the ranking mechanism. The debate shifts from arguing about features to discussing strategic priorities: “Should retention be 40% or 20%?”

Framework Comparison: At a Glance

| Framework | Formula | Complexity | Time to Score | Best For | Key Strength | Main Limitation |
|---|---|---|---|---|---|---|
| ICE | Impact × Confidence × Ease | Low | Fastest | Growth experiments, marketing campaigns | Makes uncertainty explicit | Ignores scale/reach |
| RICE | (Reach × Impact × Confidence) ÷ Effort | Medium | Slower | Product features with varying reach | Explicitly accounts for scale | Requires reach data |
| Weighted Scoring | Custom: Σ(Score × Weight) | High | Slowest | Portfolio decisions, governance reviews | Fully customizable | Setup overhead |

Scoring frameworks provide structure without requiring financial data. They make prioritization transparent and repeatable while remaining practical for ongoing backlog management.

Remember that scores provide ranking, not build/don’t-build decisions. A high score tells you which items rank above others for further consideration, not whether any item warrants investment. Strategic imperatives, resource constraints, and business context still matter.

ICE prioritizes quickly with explicit confidence. RICE adds scale through reach for user-centric decisions. Weighted Scoring allows full customization for complex multi-factor decisions.

Use these as tools for structured thinking, not as replacements for judgment. Most teams combine basic techniques like MoSCoW and Impact/Effort with these scoring frameworks when decisions need more nuance.

The best framework is one your team will use consistently. Sophisticated models requiring hours won’t get used. Simple frameworks fitting existing planning rhythms will.

If your team needs hands-on guidance implementing these frameworks, our Prioritization Workshop focuses specifically on applying ICE, RICE, Weighted Scoring, and other techniques in your organization. For broader product development capability building, the Building Innovative Products workshop covers prioritization alongside product discovery and validation methods.

Next in this series: We’ll cover flow-based methods like WSJF and Cost of Delay for throughput optimization, followed by financial techniques like ROI and NPV for investment decisions requiring executive approval.