Lesson 14 of 42

Chain-of-Thought Light: Show Your Working

For most everyday writing tasks, the model can answer instantly without showing intermediate steps. But for tasks that require reasoning — math, logic puzzles, multi-step deductions, careful comparisons — asking the model to think out loud before answering noticeably improves accuracy.

This is called chain-of-thought prompting. The "light" in the lesson title is intentional: in 2026, big modern models reason competently without an explicit prompt. So you don't need to over-engineer it. A single instruction — "think step by step before answering" — is usually enough.

A worked example

Here's a small word problem with a tempting wrong answer if you don't think carefully:

A bookstore offers a buy-2-get-1-free promo where the free book is the
cheapest of the three. I pick books priced 120, 90, and 60 EGP.
What do I pay?

Think step by step before answering. Then give the final number on its
own line as: ANSWER: <number> EGP.

Captured output:

I need to figure out what I pay under a buy-2-get-1-free promotion where the cheapest book is free.

The three books are priced at:
- 120 EGP
- 90 EGP
- 60 EGP

Under the buy-2-get-1-free promotion, the cheapest book (60 EGP) becomes free.

So I pay for:
- 120 EGP
- 90 EGP

Total payment: 120 + 90 = 210 EGP

ANSWER: 210 EGP

Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.
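The promo arithmetic the model walked through is easy to check directly. A minimal sketch in Python (the function name is mine, not part of the lesson's prompt):

```python
def promo_total(prices):
    """Buy-2-get-1-free: the cheapest of the three books is free.

    Sort ascending, drop the cheapest, sum the rest.
    """
    paid = sorted(prices)[1:]  # [60, 90, 120] -> pay for [90, 120]
    return sum(paid)

print(promo_total([120, 90, 60]))  # 210
```

This is exactly the reasoning chain the model wrote out: identify the cheapest, exclude it, sum the remainder.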

Three things worked:

  1. The model identified the cheapest book (60 EGP) and applied the rule.
  2. It showed each step explicitly — list the prices, identify which is free, sum the rest.
  3. The final answer is in the exact format requested: "ANSWER: 210 EGP" on its own line, easy to extract programmatically.

Why this helps

When a model "thinks out loud", each step it writes becomes context for the next step. The intermediate steps make wrong reasoning visible, and the model itself often catches its own slip mid-stream. Without that scratchpad, the model is just emitting the answer in a single confident token — and confidence is no protection against being wrong.

When to use it

Reach for chain-of-thought when:

  • The task involves arithmetic or counting.
  • The task involves multi-step deductions ("if A and not B, then C").
  • The task requires comparing options against several criteria.
  • You've seen the model give wrong but confident answers on similar problems.
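In practice, this often comes down to appending one fixed instruction to whatever task you already have. A sketch of that pattern (the helper name and wording are mine; adapt the suffix to your own output format):

```python
# A light chain-of-thought wrapper: the task stays unchanged,
# and a single reasoning-plus-format instruction is appended.
COT_SUFFIX = (
    "\n\nThink step by step before answering. "
    "Then give the final answer on its own line as: ANSWER: <answer>."
)

def with_cot(task: str) -> str:
    """Wrap a task prompt with a light chain-of-thought instruction."""
    return task.strip() + COT_SUFFIX

print(with_cot("Which of these three phone plans is cheapest per GB?"))
```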

When to skip it

Skip it when:

  • The task is purely formatting or rewriting (no reasoning needed).
  • You need a one-token answer fast (classification, yes/no).
  • Latency or cost matters and the model handles it fine without thinking out loud.

The format-extraction trick

Notice the output spec in the example: "Then give the final number on its own line as: ANSWER: <number> EGP." This pattern is gold for any system that needs to parse the answer programmatically. Let the model think freely and end with a fixed, parseable line. Best of both worlds.
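Extracting that fixed line is a one-regex job. A minimal sketch, assuming the "ANSWER: <number> EGP" format from the example above (the function name is mine):

```python
import re

def extract_answer(text: str):
    """Pull the final 'ANSWER: <number> EGP' line out of a free-form response.

    Returns the number as a float, or None if no such line exists.
    """
    match = re.search(r"^ANSWER:\s*([\d.]+)\s*EGP\s*$", text, re.MULTILINE)
    return float(match.group(1)) if match else None

response = "Total payment: 120 + 90 = 210 EGP\n\nANSWER: 210 EGP"
print(extract_answer(response))  # 210.0
```

Because the regex anchors on a line starting with `ANSWER:`, the free-form reasoning above it never interferes with parsing.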

In 2026, models like Claude and GPT have native "thinking" modes — extended reasoning that happens before the visible response. When that's on, you often don't need an explicit chain-of-thought prompt. But for plain chat without thinking enabled, the prompt-level technique in this lesson is still the simplest way to upgrade accuracy on reasoning tasks.

Next: self-critique — when you ask the model to grade its own first answer.
