Lesson 6 of 42

The anatomy of a prompt

The Five-Slot Prompt Skeleton


In Module 1 you learned that the model fills silence with averages. Module 2 fixes that for good. From this lesson onward, every prompt you write, even the throwaway ones, uses the same five slots. Once the habit is automatic, you stop forgetting things.

The five slots

| # | Slot | What it answers |
|---|------|-----------------|
| 1 | Role | Who is the model right now? (lawyer, copywriter, code reviewer, friend over coffee) |
| 2 | Task | What single thing should it do? |
| 3 | Context | What does it need to know that isn't obvious? |
| 4 | Input | The actual material to work on (the email, the code, the meeting notes). |
| 5 | Output spec | The exact shape the answer should come back in (length, format, tone, what to include or exclude). |

Not every prompt needs all five — a quick rewrite request can skip role and context. But every prompt should at least consider all five before being sent. The slots you skip are the slots the model is going to invent on your behalf.
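That "consider all five before sending" habit is easy to mechanize. Here's a minimal sketch (the slot names come from this lesson; the function and the draft are made up for illustration) that reports which slots a draft leaves for the model to invent:

```python
# A tiny pre-send checklist: the five slots as a dict, plus a helper
# that lists the slots you skipped. Illustrative sketch only.

SLOTS = ("role", "task", "context", "input", "output")

def skipped_slots(prompt: dict) -> list:
    """Return the slots that are missing or blank in a prompt draft."""
    return [s for s in SLOTS if not prompt.get(s, "").strip()]

draft = {
    "task": "Summarise this meeting transcript.",
    "input": "<transcript pasted here>",
}

print(skipped_slots(draft))  # these are the slots the model will fill with averages
```

Skipping a slot on purpose is fine; the point is that the skip shows up as a conscious decision rather than an oversight.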


The skeleton, filled in

Here is the same skeleton filled in for one concrete request, the kind of prompt you'll write every day.

  1. Role: who is the model right now?
     You are a senior frontend engineer reviewing a junior's pull request.
  2. Task: what single thing should it do?
     Review this React component for accessibility issues only.
  3. Context: what does it need to know that isn't obvious?
     It will run on a public checkout page; must meet WCAG 2.1 AA.
  4. Input: the actual material to work on.
     <the JSX code pasted in here>
  5. Output spec: the exact shape the answer should come back in.
     Numbered list, max 5 issues, each with WCAG criterion + 1-line fix.
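Pasted together, those five fill-ins read as one complete prompt. Roughly:

```
You are a senior frontend engineer reviewing a junior's pull request.
Review this React component for accessibility issues only.
It will run on a public checkout page; the review must meet WCAG 2.1 AA.

<the JSX code pasted in here>

Output: a numbered list, max 5 issues, each with the WCAG criterion and a 1-line fix.
```

Every line answers exactly one slot; nothing is left for the model to guess.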


A bare-bones template

You can copy this directly. It works for 80 percent of daily prompts.

Role: <who the model is right now>
Task: <the one thing to do>
Context: <what it needs to know>
Input: <the material — code, text, notes>
Output: <length, format, tone, what to include or exclude>

Yes, you can write all that as flowing prose instead. The model handles either. The reason to use the labelled form when you're learning is that you literally cannot leave a slot unanswered — the empty line stares back at you.
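If you do keep the labelled form, it also templates trivially. A sketch (the labels match the template above; the function itself is my own, not from the lesson):

```python
def build_prompt(role: str, task: str, context: str, input_text: str, output: str) -> str:
    """Render the five-slot skeleton in its labelled form.

    Illustrative sketch: passing every slot explicitly means an
    unanswered slot is a visible empty argument, not a silent omission.
    """
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        f"Input: {input_text}",
        f"Output: {output}",
    ])

print(build_prompt(
    role="a plain-spoken technical editor",
    task="tighten this paragraph without changing its meaning",
    context="it opens a newsletter aimed at non-experts",
    input_text="<paragraph pasted here>",
    output="one paragraph, under 80 words, same tense",
))
```

Saving a handful of these calls and swapping only the input slot is the reuse habit described below.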

What changes when you adopt this

Three things, all good:

  1. Your prompts get shorter on average. That sounds wrong, but it's true: when slots are explicit, you stop padding with apologies and "could you please" filler.
  2. Your prompts get more reusable. The skeleton is identical between tasks, so you start saving good prompts and tweaking the input slot.
  3. Bad outputs get easier to debug. When something goes wrong, you can usually point at one slot and say "I was thin on context" or "I forgot to specify output format". Instead of restarting, you patch the slot. (We'll do that in Module 5.)

The next four lessons

Each of the next four lessons takes one of these slots and goes deeper:

  • Lesson 2: Role — when it's worth setting one, and when it's noise.
  • Lesson 3: Task vs Context — the most common confusion and how to fix it.
  • Lesson 4: Output spec — including the format-lock trick that makes outputs paste-ready.
  • Lesson 5: A worked example that uses all five slots together.

Next: the role slot — small change, surprisingly large effect.
