Diagnosing bad outputs
Rewrite, Don't Restart
The biggest beginner mistake — bigger than vague prompts, bigger than no system prompt — is what happens after a bad output. Beginners hit "regenerate", or worse, start a brand new chat with a slightly different prompt. They throw away every signal the first attempt gave them.
This module teaches the opposite habit: when an output is wrong, the right move is almost never to restart. The right move is to look at what the model just gave you, name why it's wrong in one sentence, and write a corrective second prompt that fixes that specific thing.
The repair loop — what good iteration looks like
1. Send prompt. First attempt — your best guess at the 5 slots.
2. Read output. Don't skim. Read what it actually produced.
3. Diagnose. Name the wrong thing in one sentence. Which slot was thin?
4. Repair. Patch THAT slot. Don't rewrite the whole thing.
Why restarting is wasteful
Three reasons. All of them matter.
| Reason | What it means |
|---|---|
| You lose information. | The first output told you what the model misunderstood. Throwing it away means re-debugging from zero next time. |
| You burn tokens and time. | A targeted second prompt is shorter than a full re-prompt. |
| You don't get smarter. | Restarting trains no muscle. Repairing trains the muscle that matters. |
Iteration is the skill that separates someone who is "OK at AI" from someone who actually ships work with AI. Every great prompter you've watched has internalised it. They look at a bad output the way a doctor looks at a symptom — not as a verdict, but as evidence.
The shape of repair, in three moves
When an output disappoints you, here's the move:
- Read the output and name the gap. What is it doing that you didn't want? What's missing that you needed? One sentence.
- Write a follow-up prompt that targets that gap only. Not a re-prompt. A correction.
- Send it.
Two examples to make this concrete:
| Bad output | The gap | The follow-up prompt |
|---|---|---|
| Email is 200 words; you wanted 80. | Length is wrong. | "Cut this to 80 words. Keep the warmth but remove the apology paragraph." |
| Code uses `var`; you wanted `const`. | Style is wrong. | "Same code, but use `const` and arrow functions everywhere." |
That's it. The chat already has the bad output in context, so you don't need to re-paste it. The follow-up is one sentence and the model has everything it needs.
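To make the second row concrete, here is a hypothetical "bad output" and what the one-sentence follow-up buys you: identical behavior, repaired style.

```javascript
// Hypothetical first attempt the model gave you: works, but wrong style.
var greet = function (name) {
  return "Hello, " + name;
};

// After the follow-up "use const and arrow functions everywhere":
const greetRepaired = (name) => "Hello, " + name;

// Same behavior, different style. That's why this is a repair,
// not a rewrite: the logic was never the problem.
console.log(greet("Ada") === greetRepaired("Ada"));
```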
When to repair vs when to restart
Repair almost always wins. The only times to legitimately restart from scratch:
- The output is structurally hopeless. The model misunderstood the task at the foundation level — you asked for a code review and it wrote a poem. (Even then, you can usually rescue with one sentence: "I meant a code review of this function, focusing on edge cases.")
- You realise your original prompt was fundamentally wrong. Wrong task, wrong audience, wrong scope. Restart with a better prompt — but use what you learned to write the better prompt.
- The chat got polluted. You've already tried five follow-ups, and the model is now giving you increasingly weird mash-ups of all your past instructions. Cut your losses and restart.
For 90% of cases, repair beats restart. The next three lessons turn that into a concrete process.
Next: the four diagnostic questions you ask yourself when an output is bad.