Growth · Scale-up

AI Use-Case Prioritisation Matrix

Every marketing team has a long list of potential AI use cases — from content generation to predictive analytics to personalisation engines. The problem is rarely a shortage of ideas; it is knowing which one to invest in first. This matrix scores each use case across four dimensions: business impact, data readiness, technical feasibility, and ethical risk. The result is a prioritised roadmap that balances quick wins with strategic bets.

When to use this framework

  • Your team is exploring AI/ML for marketing and needs to prioritise initiatives
  • Leadership is asking for an AI roadmap or strategy
  • You have multiple AI vendor proposals and need an objective way to compare them
  • You want to identify quick wins before investing in larger AI projects
  • You need to assess whether your data infrastructure is ready for an AI initiative


Worked Example

Sephora

1. Use Case Definition

Give the AI initiative a clear, descriptive name.

AI-Powered Product Recommendation Engine for Email Campaigns

What specific marketing problem does this solve? Be precise about the current pain.

Current email product recommendations are based on simple rules (bestsellers, category affinity) and show the same products to large segments. Open rates are 22% but click-through on product recs is only 1.8% — well below the 4%+ benchmark for personalised retail email. The merchandising team manually curates 6 product grids per week, taking 15+ hours.

What does success look like? Quantify where possible.

Personalised product recommendations based on browsing history, purchase history, skin profile, and beauty preferences. Target: 3.5%+ CTR on product recs (2x current), 10% uplift in email-attributed revenue, and 80% reduction in manual curation time.

2. Scoring Dimensions

Score each dimension 1-10. Multiply to get a composite score.

Business Impact: How much revenue, cost savings, or competitive advantage will this deliver? 1 = marginal improvement, 10 = transformative.

8

Data Readiness: Do you have the data required? Is it clean, accessible, and sufficient? 1 = data doesn't exist, 10 = clean data pipeline already in place.

7

Technical Feasibility: Can your team (or vendors) build this with current technology? 1 = cutting-edge R&D needed, 10 = off-the-shelf solution available.

8

Ethical Risk: What's the risk of bias, privacy issues, or brand damage? Score inversely: 1 = high risk (needs careful governance), 10 = minimal ethical concerns.

7

Composite Score: Business Impact × Data Readiness × Technical Feasibility × Ethical Risk. Higher = prioritise first. For this worked example: 8 × 7 × 8 × 7 = 3,136 out of a maximum of 10,000.
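Once you have more than a handful of use cases to rank, the scoring mechanics are easy to put in a short script. A minimal sketch in Python; the second use case and all scores other than the Sephora example's are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_impact: int        # 1-10
    data_readiness: int         # 1-10
    technical_feasibility: int  # 1-10
    ethical_risk: int           # 1-10, scored inversely (10 = minimal risk)

    def composite(self) -> int:
        # Multiplicative scoring: a weak score on any single dimension
        # drags the whole use case down, which is the point of the matrix.
        return (self.business_impact * self.data_readiness
                * self.technical_feasibility * self.ethical_risk)

cases = [
    UseCase("Email recommendation engine", 8, 7, 8, 7),
    UseCase("Generative ad copy", 6, 9, 9, 5),
]
for uc in sorted(cases, key=lambda c: c.composite(), reverse=True):
    print(f"{uc.composite():>5}  {uc.name}")
```

The multiplicative form is deliberate: unlike a weighted sum, it cannot hide a 2 on ethical risk behind a 10 on business impact.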

3. Implementation Assessment

Should you build this in-house, buy a SaaS tool, or partner with a vendor?

Hybrid: a mix of buy and build.

How long from kickoff to measurable results?

One quarter.

What could block or slow this? Data access, engineering resources, legal/privacy review, skills gaps.

1. Engineering team needs to build a real-time data pipeline from browsing events to the recommendation model (2-sprint dependency)
2. Legal review of using skin-type data in recommendations (1 week)
3. Beauty advisor team needs training on how to QA AI recommendations
4. Risk: cold-start problem for new customers; fallback logic needed for users with <5 browsing sessions
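The cold-start risk implies a simple routing rule: users without enough behavioural data fall back to the existing rules-based recommendations rather than the model. A minimal sketch; the threshold and names are assumptions for illustration, not anything from the worked example:

```python
MIN_SESSIONS = 5  # below this, the model has too little signal to personalise

def choose_recommender(session_count: int) -> str:
    """Route a user to the ML model or the rules-based fallback."""
    if session_count >= MIN_SESSIONS:
        return "ml_model"
    # Fallback keeps today's behaviour: bestsellers / category affinity
    return "rules_fallback"
```

Making the fallback explicit up front also gives you a clean A/B boundary: the rules-based arm doubles as the control group when measuring the CTR uplift target.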

4. AI Governance Checklist

Does this use personal data? What consent is required? GDPR/CCPA implications?

Uses first-party data only (purchase history, browsing, explicitly provided beauty profile). No third-party data. GDPR basis: legitimate interest for existing customers, consent for prospects. Skin-type data is sensitive — legal has confirmed it's permissible when self-declared and used only for product relevance, not profiling. Data is pseudonymised in the ML pipeline.

Could this AI produce biased outputs against certain groups? How will you test and monitor?

Risk: Model could under-serve customers with darker skin tones if training data skews toward bestselling products (which historically over-index on lighter shades). Mitigation: Ensure training data is balanced across skin tone categories. Monthly audit of recommendation diversity by skin tone segment. Include shade-matching AI as a secondary signal.
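The monthly audit could start as a simple check of each skin-tone segment's share of served recommendations against its share of the customer base. A hedged sketch; the function name, tolerance, and all figures are illustrative:

```python
def audit_recommendation_diversity(rec_counts, customer_counts, tolerance=0.10):
    """Flag segments whose share of served recommendations deviates from
    their share of the customer base by more than `tolerance` (absolute)."""
    total_recs = sum(rec_counts.values())
    total_customers = sum(customer_counts.values())
    flagged = {}
    for segment in customer_counts:
        rec_share = rec_counts.get(segment, 0) / total_recs
        base_share = customer_counts[segment] / total_customers
        if abs(rec_share - base_share) > tolerance:
            # Positive = over-served, negative = under-served
            flagged[segment] = round(rec_share - base_share, 3)
    return flagged
```

Any flagged segment is a prompt to revisit the training-data balance described above, not an automatic verdict of bias.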

What level of human review is required? Fully automated, human-in-the-loop, or human-on-the-loop?

Human-on-the-loop: recommendations are generated and sent automatically, with the team monitoring output quality and auditing samples rather than approving each recommendation before send.