When to Start Clean
The previous three lessons argued for repair over restart. That's the right default — but it's not absolute. There are three specific situations where starting a fresh chat is the right call. Recognising them is part of the skill.
Three legitimate reasons to start over
1. The chat is polluted
You've sent five follow-ups. The model is now mashing together corrections from earlier in the conversation, contradicting itself, or stuck on a wrong assumption it formed in message 2.
Symptoms:
- The output keeps drifting back to a clichéd tone you corrected three turns ago.
- The model references a constraint you've since dropped.
- Asking it to "ignore previous instructions and just X" makes things weirder, not better.
Fix: open a new chat. Take what you learned about the right slot values and write a single clean prompt that bakes them all in. You'll spend two extra minutes and save twenty.
2. Your original prompt was wrong at the foundation
Sometimes the first follow-up you write makes you realise the original task was framed wrong. You asked for a code review when you needed an architecture decision. You asked for an email when you needed a Slack message. You asked for a summary when you needed a transcript clean-up.
Symptoms:
- You can't write a single follow-up that fixes things — you'd need to change the goal.
- Every diagnostic question gets a "well, kind of, but actually..."
- The chat history doesn't help you because the task itself needs to change.
Fix: open a new chat. Write the prompt for the task you actually meant.
3. Context window is full or session is too old
In long sessions, the model's context window fills up: old messages get pushed out, behaviour gets erratic, and even good follow-ups stop landing reliably. Modern models have huge context windows as of 2026, but it still happens, especially when long documents have been pasted in.
Symptoms:
- The model "forgets" something you said earlier.
- Latency spikes.
- The model contradicts a constraint that's still on screen.
Fix: open a new chat. Re-summarise the relevant context in 5 lines and continue.
When to repair vs when to restart
Repair (in chat)
- Targeted, short
- Keeps everything you fixed
- Builds the iteration muscle
Restart (new chat)
- Cuts losses on a tangled thread
- Clean slate when goal changed
- Forces you to consolidate learnings
A practical heuristic
Use this rule of thumb:
If your follow-up prompt would be longer than your original prompt, you should probably restart.
The whole point of follow-ups is that they're short and targeted. If you find yourself rewriting half the original prompt to "fix" the chat, you're not iterating — you're doing a worse version of restarting.
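The rule of thumb above can be sketched in a few lines. This is purely illustrative, not a real tool; the function name and the word-count comparison are assumptions chosen to make the heuristic concrete:

```python
def should_restart(original_prompt: str, followup: str) -> bool:
    """Heuristic sketch: if the follow-up you are about to send is
    longer than the original prompt, you are probably rewriting the
    task rather than iterating on it, so restart in a fresh chat."""
    return len(followup.split()) > len(original_prompt.split())


# A short, targeted follow-up: keep repairing in the same chat.
original = "Write a 100-word product update email for Bayt Coffee customers."
followup = "Warmer tone, and drop the discount mention."
print(should_restart(original, followup))  # False: repair in chat
```

Word count is a crude proxy, but that's the point of the heuristic: a follow-up that needs more words than the original prompt is no longer a targeted correction.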
What you take with you when you restart
The crucial habit: even when you restart, you take learnings with you. The bad output told you something. Use it.
| Bad output told you | What to bake into the new prompt |
|---|---|
| Model defaulted to corporate tone | Add explicit warm-and-direct tone |
| Model used clichés you hate | Banned-words list |
| Model padded with placeholders | "No placeholders" instruction |
| Model split the answer into too many sections | "Single block, no headings" |
| Model got the date format wrong | Specify the date format |
Restarting smartly is "I had a bad output, I learned three things from it, my new prompt has those three things baked in." Restarting stupidly is "I had a bad output, I'm going to try the same prompt again and hope it's better." The first works. The second is what you stop doing after this lesson.
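The "restart smartly" habit can also be sketched as code: collect what the bad output taught you, then bake each learning into one clean new prompt. The function, the wording of the template, and the example learnings are all hypothetical; they just mirror the table above:

```python
def build_restart_prompt(task: str, learnings: list[str]) -> str:
    """Fold the lessons from a bad output into a single clean prompt,
    so the new chat starts with everything the old one taught you."""
    constraints = "\n".join(f"- {item}" for item in learnings)
    return f"{task}\n\nConstraints (learned from a previous attempt):\n{constraints}"


prompt = build_restart_prompt(
    "Write a 100-word product update email for Bayt Coffee customers.",
    [
        "Warm, direct tone, no corporate boilerplate",
        "Do not use the words 'synergy' or 'leverage'",
        "No placeholders; write the real copy",
    ],
)
print(prompt)
```

The shape matters more than the wording: one task statement, followed by an explicit list of constraints you would otherwise have spent three follow-ups rediscovering.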
End of Foundations
That's the end of the foundation course. By now you can:
- Write a prompt with the five-slot anatomy without thinking about it.
- Pick the right pattern (zero-shot, few-shot, persona, chain-of-thought, decomposition, self-critique) for the task.
- Read a bad output, name what's wrong, and ship a corrective follow-up.
- Recognise when restart is the right move and how to take learnings with you.
Hagar can now write prompts that consistently produce useful work. Modules 6–9 (taught by a different agent) take this into the API — system-prompt design, model selection, grounding, evaluation — and end with her writing a real production system prompt for Bayt Coffee.
Module 6 begins next: moving from chat tools to the API, and what changes when you do.