Ethics, Limitations, and Best Practices
Understanding AI Limitations
Knowing what AI can't do is just as important as knowing what it can. This prevents wasted effort and potentially costly mistakes.
The Fundamental Limitation
AI doesn't "know" anything. It predicts likely text based on patterns in its training data. This means:
- It can generate plausible-sounding content that's completely wrong
- It can't verify facts or access real-time information
- It doesn't have experiences, opinions, or genuine expertise
- It can confidently present incorrect information
What AI Cannot Do Reliably
Accurate Facts and Data
The problem: AI can generate statistics, dates, quotes, and facts that don't exist or are incorrect.
Example:
Prompt: "What were Apple's Q3 2024 revenue numbers?"
AI might confidently state figures that are fabricated or outdated.
Solution:
- Never use AI-generated numbers without verification
- Ask AI to structure and analyze data you supply, not to provide the data itself (example below)
- Use AI to explain publicly known information, then verify
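For example, instead of asking for the figures, supply them yourself (the bracketed data is a placeholder):

Prompt: "Here are our verified Q3 revenue figures: [paste data]. Summarize the three most important trends for an executive audience."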
Recent Events
The problem: AI has a knowledge cutoff date. It doesn't know about events after its training data ended.
Example:
Prompt: "What happened at last week's industry conference?"
AI will either refuse, make things up, or confuse it with a past event.
Solution:
- Check the AI tool's knowledge cutoff date
- Provide recent context in your prompt when needed (see the example below)
- Use AI for analysis of information you provide, not discovery
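For example (the bracketed notes are a placeholder):

Prompt: "Below are my notes from last week's industry conference: [paste notes]. Summarize the three themes most relevant to our product team."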
Specialized or Technical Accuracy
The problem: AI often gets technical details wrong, especially in specialized fields.
Risk areas:
- Legal advice and compliance requirements
- Medical or health information
- Financial regulations
- Engineering specifications
- Industry-specific regulations
Solution:
- Use AI for drafts, never final technical content
- Always have domain experts review technical output
- Be especially cautious with anything that has legal/safety implications
Math and Calculations
The problem: Because AI predicts text rather than actually calculating, it often makes basic math errors, especially with:
- Multi-step calculations
- Percentages and ratios
- Date calculations
- Compound operations
Example:
Prompt: "If we increase price by 15% then offer a 10% discount, what's the net change?"
AI might answer +5% (simply subtracting the percentages) instead of the correct +3.5%.
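Worked out, the two changes compound rather than cancel. A minimal Python check (a spreadsheet works just as well):

```python
# A 15% increase followed by a 10% discount compounds multiplicatively:
# 1.15 * 0.90 = 1.035, a net increase of 3.5% (not 5%).
price = 100.00
after_increase = price * 1.15            # 115.00
after_discount = after_increase * 0.90   # 103.50
net_change = (after_discount - price) / price
print(f"Net change: {net_change:+.1%}")  # Net change: +3.5%
```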
Solution:
- Use spreadsheets for calculations, not AI
- If AI does math, verify every calculation
- Ask AI to show its work so you can check logic
Consistency Across Long Documents
The problem: In long outputs, AI can contradict itself, lose track of details, or change tone mid-document.
Example:
- Character names changing in creative writing
- Contradictory recommendations in the same report
- Inconsistent formatting or terminology
Solution:
- Break long documents into sections
- Review outputs for internal consistency
- Use prompt chaining for complex documents (see the sketch below)
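As one way to chain prompts programmatically, here is a minimal Python sketch. It uses the OpenAI client purely as an example backend; the model name and prompts are illustrative placeholders, and any chat API with a similar request/response shape works the same way:

```python
# Prompt-chaining sketch: generate an outline once, then draft each
# section separately with the outline passed back in as shared context.
# Assumptions: OpenAI client as an example backend, OPENAI_API_KEY set,
# placeholder model name and prompts.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: use whatever model your tool offers
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

outline = ask("Outline a five-section report on our remote-work policy.")

# Reusing the outline in every request keeps terminology and structure
# consistent, instead of letting a single long generation drift.
sections = [
    ask(f"Using this outline:\n{outline}\n\nDraft section {n} only, "
        "keeping terminology consistent with the outline.")
    for n in range(1, 6)
]
report = "\n\n".join(sections)
```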
Hallucination: The Core Risk
"Hallucination" is when AI generates confident, detailed content that's entirely made up.
High Hallucination Risk Areas
| Area | Risk | Consequence |
|---|---|---|
| Citations and sources | Very High | Embarrassment, credibility loss |
| Quotes from people | Very High | Defamation risk, legal issues |
| Statistics and data | High | Bad decisions based on fake data |
| Historical details | High | Factual errors in content |
| Technical specifications | Medium-High | Product/safety issues |
Lower Hallucination Risk Areas
| Area | Risk | Why |
|---|---|---|
| Writing style and tone | Low | No facts to get wrong |
| Structure and formatting | Low | Pattern-based, not fact-based |
| Brainstorming ideas | Low | Ideas, not claims |
| Explaining concepts | Medium | General knowledge usually OK |
The "Confident Incorrectness" Problem
AI doesn't express uncertainty like humans do. It presents everything with the same confidence level.
What humans do: "I'm not sure, but I think..."
What AI does: "The answer is X." (even when wrong)
Your defense:
- Assume confidence doesn't indicate accuracy
- Verify anything that will be published or acted upon
- Ask AI: "What might be wrong with this?" to surface uncertainties
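For example, paste the draft back in for a self-critique pass (the bracketed draft is a placeholder):

Prompt: "Here is a draft summary: [paste draft]. List every claim that might be inaccurate, outdated, or in need of verification."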
When to Be Extra Cautious
🚨 High-stakes situations:
- Public-facing content (website, press releases)
- Legal or compliance documents
- Financial advice or analysis
- Medical or health content
- Content about real people or companies
- Anything with safety implications
Key Takeaway
AI is a powerful tool for generating drafts, exploring ideas, and structuring content — but it's not a source of truth. Treat AI output as a starting point that always requires human verification, especially for facts, figures, and specialized knowledge.
Next: Learn how to implement fact-checking and quality control for AI content.