AI Ethics, Governance & Your Career

AI Regulations for PMs


AI regulation is no longer a future concern; it is here now. The EU AI Act is law, and other regions are following. Here's what you need to know.

The Regulatory Landscape (2025)

| Region | Status | Key Regulation |
|---|---|---|
| European Union | In force | EU AI Act |
| United States | State-by-state + executive orders | Various state laws, NIST AI RMF |
| United Kingdom | Framework | AI Safety Institute guidance |
| China | In force | Generative AI regulations |
| Canada | Proposed | AIDA (Artificial Intelligence and Data Act) |

EU AI Act: What PMs Must Know

The Risk-Based Framework

The EU AI Act categorizes AI by risk level:

| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, manipulative AI | Banned |
| High | Hiring, credit, healthcare, education | Strict compliance |
| Limited | Chatbots, emotion recognition | Transparency |
| Minimal | Spam filters, recommendations | No specific requirements |

High-Risk AI Requirements

If your AI falls in the high-risk category, you need:

| Requirement | What It Means |
|---|---|
| Risk management | Document and mitigate AI risks |
| Data governance | Training data must be relevant, representative, and error-free |
| Technical documentation | Detailed system description |
| Record keeping | Log AI decisions for traceability |
| Transparency | Clear information to users |
| Human oversight | Humans can intervene and override |
| Accuracy & robustness | Meet defined performance standards |
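The record-keeping and human-oversight requirements translate directly into product design. A minimal sketch of what a logged AI decision record could look like; the field names here are illustrative assumptions, not mandated by the Act:

```python
# Sketch of a decision log entry supporting record-keeping and
# human oversight. Field names are illustrative, not prescribed.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str
    input_summary: str          # what the model saw (minimized, no raw PII)
    output: str                 # what the model decided or recommended
    confidence: float
    human_reviewed: bool = False  # flipped when a human intervenes
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: logging a (hypothetical) credit decision
record = DecisionRecord(
    model_version="credit-scorer-v2.3",
    input_summary="applicant features: income band, history length",
    output="declined",
    confidence=0.87,
)
print(json.dumps(asdict(record), indent=2))
```

Storing model version and a minimized input summary keeps decisions traceable without hoarding personal data, which also helps with GDPR data minimization below.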

Is Your AI High-Risk?

Answer these questions:

  1. Does it make decisions about people's:

    • Employment or recruitment?
    • Creditworthiness or loans?
    • Education or training access?
    • Essential services access?
    • Law enforcement or immigration?
  2. Is it a safety component in:

    • Medical devices?
    • Transportation systems?
    • Critical infrastructure?

If yes to any: Likely high-risk. Consult legal.
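The screening questions above can be captured as a first-pass triage check. This is a hedged sketch, not a legal classification tool: the category names loosely paraphrase the high-risk areas listed above, and a positive result means "consult legal", nothing more:

```python
# Illustrative high-risk triage sketch (not legal advice).
# Categories paraphrase the screening questions above.

HIGH_RISK_DECISION_AREAS = {
    "employment",          # hiring, recruitment
    "credit",              # creditworthiness, loans
    "education",           # training and admissions access
    "essential_services",  # access to benefits, utilities
    "law_enforcement",     # policing, immigration
}

SAFETY_COMPONENT_DOMAINS = {
    "medical_devices",
    "transportation",
    "critical_infrastructure",
}

def likely_high_risk(decision_areas, safety_domains):
    """Return True if the feature touches any listed high-risk area.

    True means 'likely high-risk: consult legal', not a final ruling.
    """
    return bool(
        HIGH_RISK_DECISION_AREAS & set(decision_areas)
        or SAFETY_COMPONENT_DOMAINS & set(safety_domains)
    )

# Example: a resume-screening feature
print(likely_high_risk({"employment"}, set()))  # True -> consult legal
```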

GDPR Implications for AI

GDPR already applies to AI that processes personal data:

| GDPR Requirement | AI Implication |
|---|---|
| Lawful basis | Need a legal ground to use data for AI |
| Purpose limitation | Can't repurpose training data freely |
| Data minimization | Only collect what's necessary |
| Right to explanation | Users can ask how decisions were made |
| Right to object | Users can opt out of automated decisions |
| Right not to be subject to automated decisions | Significant decisions need human involvement |

Article 22: Automated Decision-Making

Key restrictions:

  • Right to human review for decisions with legal/significant effects
  • Right to explanation of the logic involved
  • Right to contest the decision

PM Action: Ensure an appeals process exists for AI-driven decisions.
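One way to honor these rights in a product is to route significant automated decisions differently from routine ones. A minimal sketch in the spirit of Article 22; the status names and routing rules are illustrative assumptions:

```python
# Hedged sketch: routing automated decisions so that significant
# ones can reach a human. Statuses are illustrative assumptions.

def handle_decision(decision, significant_effect, appeal_requested=False):
    """Decide whether a human must be in the loop for this decision."""
    if significant_effect and appeal_requested:
        return "queued_for_human_review"        # right to human review
    if significant_effect:
        return "auto_decision_with_explanation"  # right to explanation
    return "auto_decision"                       # routine, no extra handling

# Example: a declined loan where the user appeals
print(handle_decision("declined", significant_effect=True, appeal_requested=True))
# -> queued_for_human_review
```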

Compliance Checklist for PMs

Before Development

  • Classify AI risk level (unacceptable/high/limited/minimal)
  • Identify applicable regulations (EU AI Act, GDPR, local laws)
  • Consult legal/compliance team
  • Document intended use and limitations

During Development

  • Training data documented and audited
  • Model testing includes fairness evaluation
  • Technical documentation maintained
  • Human oversight mechanisms designed

Before Launch

  • Risk assessment completed
  • User disclosure/transparency in place
  • Appeals/override process implemented
  • Logging and audit trails enabled

After Launch

  • Ongoing monitoring active
  • Incident response plan ready
  • Regular compliance audits scheduled
  • User complaint handling process defined
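Checklists like these can be enforced mechanically as a release gate. A small sketch where the item names mirror the "Before Launch" list above; the gating logic itself is an assumption about your release process, not a regulatory requirement:

```python
# Illustrative release gate: surface compliance items still open
# before launch. Item names mirror the "Before Launch" checklist.

PRE_LAUNCH_CHECKLIST = [
    "risk_assessment_completed",
    "user_disclosure_in_place",
    "appeals_process_implemented",
    "audit_logging_enabled",
]

def launch_blockers(completed):
    """Return checklist items not yet completed, in checklist order."""
    return [item for item in PRE_LAUNCH_CHECKLIST if item not in completed]

# Example: two items done, two still blocking launch
done = {"risk_assessment_completed", "audit_logging_enabled"}
print(launch_blockers(done))
# -> ['user_disclosure_in_place', 'appeals_process_implemented']
```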

Common Compliance Mistakes

| Mistake | Why It's Risky | Fix |
|---|---|---|
| "We're not in the EU, so the EU AI Act doesn't apply" | It applies if you serve EU users | Know your user geography |
| "It's just recommendations, not decisions" | Recommendations can have significant effects | Assess actual impact |
| "Users agreed to the ToS" | Consent doesn't override all requirements | Compliance is still needed |
| "We'll add compliance later" | Retrofitting is expensive | Build it in from the start |

Working with Legal/Compliance

Questions to ask:

  1. "What risk category does our AI feature fall into?"
  2. "What documentation do we need to create?"
  3. "Do we need impact assessments?"
  4. "What user rights must we support?"
  5. "What happens if we get it wrong?"

What to bring to the conversation:

  • Clear description of AI functionality
  • Data sources and processing
  • Decision scope and impact
  • User interaction points
  • Geographic scope

Future-Proofing Your AI Products

Regulations will only increase. Build for compliance:

| Principle | Implementation |
|---|---|
| Transparency by design | Explainability from day one |
| Privacy by design | Minimize data, anonymize when possible |
| Auditability | Comprehensive logging |
| Human oversight | Override capabilities built in |
| Documentation | Continuous, not retrofitted |

Key Takeaway

Regulation is a product requirement, not a legal afterthought. The EU AI Act sets the global baseline. Build compliance into your product from the start—it's cheaper and safer than fixing it later.


Next: How do you grow your career as an AI Product Manager? Let's explore the path forward.
