Knowing what the model doesn't know
Knowledge cutoffs — what the model has not seen
Every language model is trained on a fixed snapshot of the internet. The day that snapshot was taken is the knowledge cutoff. After that date, the model knows nothing — it does not silently update, it does not "stay current", and it cannot tell you what happened last week.
For Claude Sonnet 4.5, that cutoff is around April 2024. For other vendors and other versions, it is different. The number does not matter as much as the principle: the model has a horizon, and beyond that horizon, every fact it produces is a guess shaped by its training data, not retrieved from reality.
Module 8's four lessons, in order:
- Knowledge cutoffs: what the model has not seen (this lesson)
- Hedge language as a system-prompt instruction
- The "I don't see that in the document" trigger for grounded, RAG-style replies
- Common hallucination patterns to test against before shipping
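One practical consequence of the horizon principle: the application layer can enforce the cutoff itself rather than trusting the model to notice on its own. Here is a minimal sketch, assuming the April 2024 cutoff stated above; the `needs_fresh_data` helper and the routing message are hypothetical names for illustration, not part of any SDK:

```python
from datetime import date

# The cutoff is a property of the model version, not something the
# application can query reliably at runtime, so hard-code it per model.
KNOWLEDGE_CUTOFF = date(2024, 4, 30)  # Claude Sonnet 4.5, per this lesson

def needs_fresh_data(question_date: date) -> bool:
    """True when a question concerns events past the model's horizon."""
    return question_date > KNOWLEDGE_CUTOFF

# A question about anything after April 2024 sits past the horizon, so the
# app should fetch from a live source rather than trust the model's answer.
if needs_fresh_data(date(2026, 4, 27)):
    print("Route to a live data source instead of the model.")
```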
What this looks like in practice
We asked the model a question whose answer depends on events well past its cutoff:
Who won the most recent Egyptian Premier League title?
A poorly tuned model would invent a confident answer. Here is what we got:
I don't have access to real-time information, so I can't tell you who won the most recent Egyptian Premier League title. My knowledge was last updated in April 2024, and league seasons continue beyond that date.
To find the current champion, I'd recommend:
- Checking the Egyptian Premier League's official website
- Looking at recent sports news from reliable sources
- Visiting sports websites like ESPN, BBC Sport, or specialized football sites
Could you let me know what year or season you're asking about? I might be able to help if it's from before my knowledge cutoff date.
Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.
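If you want to re-run the capture yourself, here is a minimal sketch using the Anthropic Python SDK. The model string matches the one in the capture note above; `max_tokens=500` is an arbitrary choice:

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-5",  # the model string from the capture note
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Who won the most recent Egyptian Premier League title?",
    }],
)

# A well-tuned model should hedge here: name the gap, give its cutoff,
# and point to live sources rather than invent a champion.
print(message.content[0].text)
```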
The model did three things right:
- Named the gap explicitly ("I don't have access to real-time information").
- Gave the cutoff date so the user can reason about what is and is not in scope.
- Pointed to authoritative sources instead of making something up.
This is the gold-standard hedge response. It is the behaviour you want any production assistant to default to when a question lives past its horizon.
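Each of those three behaviours can also be checked mechanically, which becomes useful once you start regression-testing an assistant before shipping. A rough sketch follows; the marker phrases are illustrative starting points, not an exhaustive list, and should be tuned against your model's actual wording:

```python
import re

def hedge_quality(response: str) -> dict[str, bool]:
    """Check a response for the three marks of a good hedge."""
    text = response.lower()
    return {
        # 1. Names the gap explicitly.
        "names_gap": any(p in text for p in (
            "i don't have access", "i can't tell you", "i don't know",
        )),
        # 2. Gives a cutoff date the user can reason about.
        "gives_cutoff": bool(re.search(
            r"(knowledge|last updated|cutoff).{0,40}\b(19|20)\d{2}\b", text,
        )),
        # 3. Points to authoritative sources instead of inventing an answer.
        "points_to_sources": any(p in text for p in (
            "official website", "recommend", "reliable sources",
        )),
    }

# Run against the captured response above, all three checks come back True.
```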
Why this matters for Bayt Coffee
The Bayt Coffee assistant Hagar built in Module 7 has the same blind spot. If a customer asks "did you win an award at this year's specialty coffee expo?", the model has no way of knowing — but its default training pushes it to sound helpful, which means making something up.
The next three lessons fix this in three layers: hedge language for general unknowns (this lesson sets the bar), the "I don't see that in the document" trigger for grounded RAG-style replies, and a tour of the most common hallucination patterns to test against before shipping.
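As a preview of that first layer, here is one way hedge language could be written into the Bayt Coffee system prompt. The wording is a sketch for illustration, not the exact instruction the next lesson develops:

```python
# Sketch of layer one: hedge instructions in the system prompt.
# The wording is illustrative; the next lesson refines it.
BAYT_SYSTEM_PROMPT = """You are the Bayt Coffee customer assistant.

Your training data has a cutoff date. For anything that may have changed
after it -- awards, events, new products, prices -- say plainly that you
cannot verify current information, and point the customer to the official
Bayt Coffee website or to staff instead of guessing."""
```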
Next: hedge language as a system-prompt instruction.