Building AI into Your Product Strategy

Working with ML Engineering Teams


ML engineers think differently than traditional software engineers. Understanding their world helps you collaborate effectively.

The PM-ML Communication Gap

| What You Say | What They Hear | Better Way to Say It |
| --- | --- | --- |
| "Can we make this smarter?" | A vague, undefined request | "Can we improve precision from 85% to 90%?" |
| "Why is this taking so long?" | "You're slow" | "What are the blockers? How can I help unblock?" |
| "Users want better results" | Undefined success | "Users report 20% of recommendations are irrelevant" |
| "Just use AI for this" | Naively simple | "Would ML be appropriate here? What would it take?" |

Key ML Terms for PMs

Model Metrics

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Precision | Of all predicted positives, how many were correct? | High precision = fewer false alarms |
| Recall | Of all actual positives, how many did we catch? | High recall = fewer missed cases |
| F1 Score | Balance of precision and recall | A single number for overall model quality |
| Accuracy | Overall correctness | Can be misleading with imbalanced data |

PM Rule: You usually can't maximize both precision AND recall. Know which matters more for your use case.
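To make these metrics concrete, here is a minimal sketch of how precision, recall, and F1 are computed from a model's error counts (the counts below are illustrative, not from any real model):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw counts.

    tp: true positives (correct positive predictions)
    fp: false positives (false alarms)
    fn: false negatives (missed cases)
    """
    precision = tp / (tp + fp)   # of everything we flagged, how much was right?
    recall = tp / (tp + fn)      # of everything real, how much did we catch?
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical model: 85 correct positives, 15 false alarms, 35 missed cases
p, r, f1 = precision_recall_f1(tp=85, fp=15, fn=35)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.85 0.71 0.77
```

Notice the trade-off: tightening the model to cut false alarms (raising precision) usually means missing more real cases (lowering recall), which is why you must know which error is costlier for your product.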

Data Concepts

| Term | Definition | Why It Matters |
| --- | --- | --- |
| Training data | Data used to teach the model | Quality in = quality out |
| Validation data | Data used to tune the model | Prevents overfitting |
| Test data | Data for final evaluation | Never seen during training |
| Data drift | When real-world data changes over time | Model performance degrades |
| Ground truth | The correct labels | Defines what "right" means |
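The training/validation/test distinction above boils down to one rule: split once, and never let the model (or the team tuning it) peek at the test set. A minimal sketch of such a split, with illustrative fractions:

```python
import random

def split_dataset(rows, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then carve out validation and test sets.

    The test slice must stay untouched until final evaluation;
    using it during tuning silently inflates your metrics.
    """
    rows = rows[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)   # fixed seed = reproducible split
    n_test = int(len(rows) * test_frac)
    n_val = int(len(rows) * val_frac)
    test = rows[:n_test]                     # final evaluation only
    val = rows[n_test:n_test + n_val]        # tuning / model selection
    train = rows[n_test + n_val:]            # what the model learns from
    return train, val, test

train, val, test = split_dataset(list(range(1000)))
print(len(train), len(val), len(test))  # 700 150 150
```

Real pipelines often stratify by label or split by time to avoid leakage, but the principle is the same.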

Development Stages

| Stage | What Happens | Typical Duration |
| --- | --- | --- |
| Data exploration | Understand what data exists | 1-2 weeks |
| Feature engineering | Create inputs for the model | 2-4 weeks |
| Model training | Teach the model | 1-2 weeks |
| Evaluation | Test model quality | 1 week |
| Deployment | Put into production | 1-2 weeks |
| Monitoring | Track ongoing performance | Ongoing |

What ML Teams Need from PMs

1. Clear Problem Definition

Bad: "We need AI to improve search"

Good:

  • "Users abandon search when no results match their intent"
  • "Success = users click on a result within top 5"
  • "Current baseline: 60% click-through on top 5"
  • "Target: 80% click-through on top 5"

2. Labeled Data (or Help Getting It)

ML models need examples of "correct" answers.

How you can help:

  • Provide historical data with outcomes
  • Set up labeling processes
  • Define edge cases and how to handle them
  • Prioritize data quality initiatives

3. Realistic Timelines

ML development is iterative, not linear.

| Phase | What Can Go Wrong | Buffer |
| --- | --- | --- |
| Data preparation | Data is messier than expected | 2x time |
| Model development | First model doesn't work | 3-5 iterations |
| Production deployment | Integration challenges | 2x time |

4. Tolerance for Uncertainty

ML can't guarantee outcomes. Help stakeholders understand:

  • "We'll aim for 90% accuracy but may land at 85%"
  • "First version will be MVP, we'll iterate"
  • "We need production data to truly optimize"

Questions to Ask in ML Kickoffs

About data:

  • "What data do we need that we don't have?"
  • "How much labeled data exists?"
  • "What's our data refresh strategy?"

About approach:

  • "Are we using pre-trained models or training from scratch?"
  • "What's the simplest baseline we can compare against?"
  • "What are the biggest technical risks?"

About success:

  • "What accuracy can we realistically expect?"
  • "How will we know if the model is degrading?"
  • "What's our rollback plan?"

Red Flags in ML Projects

| Red Flag | What It Means | Action |
| --- | --- | --- |
| "We need more data" (repeatedly) | Fundamental problem with the approach | Re-evaluate whether ML is the right tool |
| "The model works in testing but not production" | Data drift or training issues | Investigate the data pipeline |
| No baseline comparison | Can't prove ML adds value | Establish a baseline first |
| "Just a few more weeks" | Scope creep, or the team is stuck | Set a hard deadline, ship the MVP |
| Accuracy keeps changing | Unstable model | Review the evaluation methodology |

Setting Up ML Projects for Success

Pre-Kickoff Checklist

  • Problem defined with measurable success criteria
  • Data inventory completed (what exists, what's missing)
  • Baseline established (current solution performance)
  • Timeline includes buffer for iteration
  • Stakeholders aligned on uncertainty

During Development

  • Weekly syncs on progress and blockers
  • Review intermediate results (don't wait for final)
  • Adjust scope based on learnings
  • Document decisions and trade-offs

Post-Launch

  • Monitoring dashboards in place
  • Alert thresholds defined
  • Feedback loop established
  • Retraining schedule planned
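One way to make "alert thresholds defined" concrete is a statistical check on an input feature: alert when its live average drifts too far from what the model saw in training. This is a simplified sketch (the values and the z-score threshold are illustrative; production systems typically use richer tests and per-feature baselines):

```python
from statistics import mean, stdev

def drift_alert(training_values, live_values, z_threshold=3.0):
    """Flag drift when the live mean sits more than z_threshold
    standard errors away from the training mean."""
    mu = mean(training_values)
    sigma = stdev(training_values)
    std_err = sigma / len(live_values) ** 0.5
    z = abs(mean(live_values) - mu) / std_err
    return z > z_threshold

# Illustrative feature values recorded at training time vs. in production
baseline = [10.0, 11.0, 9.5, 10.5, 10.0, 9.0, 11.5, 10.2]
stable = [10.1, 9.9, 10.4, 10.0]     # looks like training data
shifted = [14.0, 15.2, 14.8, 15.5]   # distribution has moved

print(drift_alert(baseline, stable), drift_alert(baseline, shifted))  # False True
```

Wiring a check like this into a dashboard gives the team an early warning before users notice degraded predictions.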

Key Takeaway

ML teams are partners, not implementers. Invest in understanding their constraints and providing clear requirements. The best AI products come from PM-ML collaboration, not handoffs.


Up next: Test your understanding with the Module 2 Quiz.
