Lesson 9 of 42

The anatomy of a prompt

The Output Spec — and the Format-Lock Trick

4 min read

If you only learn one slot well, learn this one. The output spec is the difference between a model output you can paste straight into Slack and one you have to manually reformat for ten minutes.

What the output spec covers

| Sub-slot | Example values |
| --- | --- |
| Length | "under 80 words", "exactly 3 sentences", "5 bullet points" |
| Format | "JSON object", "markdown table", "numbered list", "plain prose" |
| Tone | "warm and professional", "blunt", "academic" |
| Inclusions | "include the deadline date", "always end with a sign-off" |
| Exclusions | "no preamble", "do not use the word 'unfortunately'", "no markdown fence" |

Most beginner prompts skip three or four of these. That's why the model decides for you and you spend time editing afterwards.

Loose vs locked output spec

Loose spec (the default)

  • Length: model decides
  • Format: model decides
  • Inclusions / exclusions: none stated

Cons:
  • Output usually too long
  • Adds markdown you don't want
  • Manual reformat after

Locked spec (paste-ready)

  • Length: exact words / sentences
  • Format: named (JSON, table, prose)
  • Inclusions / exclusions: both stated

Pros:
  • Copy and paste in one step
  • Programmatic parsing works
  • Far less re-prompting

The format-lock trick

Sometimes you don't just want a particular tone — you need a specific machine-readable shape. Pure JSON for an API. A specific number of bullets. A table with named columns. For these, you use a format-lock: an explicit, mechanical instruction that pins down the exact shape.

Here's a real example. We asked the model:

Extract the structured data from this sentence and return ONLY a JSON object
with keys: name, age, city. No prose, no markdown fence.

"Hi, I'm Mariam, 29, and I just moved to Cairo from Alexandria."

Captured output:

```json
{
  "name": "Mariam",
  "age": 29,
  "city": "Cairo"
}
```

Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.

The result is exactly the JSON we asked for — but Claude wrapped it in a markdown fence anyway, despite the explicit "no markdown fence" instruction. This is a real, common failure of format-locks. It teaches two lessons:

  1. Constraints help but are not magic. Even a tight, well-specified prompt can produce a small drift. The format-lock got the content right and the structure right; the markdown fence is a wrapper the model added on top.
  2. Build a tiny strip-step into your code. When you call an LLM for structured output, always run a one-line cleaner that strips ```json and ``` markers before you parse. That's standard practice and saves you re-running the prompt.
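A minimal version of that strip step, in Python. The function name and the regex are illustrative, not part of the lesson; the idea is just to peel off a wrapping ```json fence (or a bare ``` fence) before parsing:

```python
import json
import re

def strip_code_fence(text: str) -> str:
    """Remove a wrapping ```json ... ``` (or bare ```) fence, if present."""
    text = text.strip()
    # Match an opening fence with an optional language tag, the content,
    # and a closing fence. If there is no fence, return the text untouched.
    match = re.match(r"^```[a-zA-Z]*\s*\n?(.*?)\n?```$", text, re.DOTALL)
    return match.group(1).strip() if match else text

raw = '```json\n{"name": "Mariam", "age": 29, "city": "Cairo"}\n```'
data = json.loads(strip_code_fence(raw))
print(data["city"])  # prints "Cairo"
```

One line of cleanup and the fenced output from the example above parses without a re-prompt.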

Output spec checklist

A robust output spec answers all five of these:

  1. How long?
  2. What format?
  3. What tone?
  4. What MUST appear?
  5. What MUST NOT appear?

If you can answer all five before sending the prompt, the output will need very little cleanup.

A complete example

Here's a complete output spec for a customer-reply task:

Output:
- Plain prose, no markdown.
- Exactly 4 short paragraphs.
- Tone: warm, direct, no corporate fluff.
- Must acknowledge the specific issue in the first sentence.
- Must end with "— Bayt Coffee team" on its own line.
- Must not contain the words "unfortunately", "as per", "kindly".

That's seven explicit constraints in six lines. The output you get back is shaped before the model even starts writing.
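When replies like this feed into a pipeline, the same constraints can be checked mechanically before anything goes out. A sketch in Python, assuming the reply arrives as a plain string; the function name and exact checks are illustrative, not part of the lesson:

```python
# Mechanical checks for the customer-reply spec above.
FORBIDDEN = ("unfortunately", "as per", "kindly")
SIGN_OFF = "— Bayt Coffee team"

def check_reply(reply: str) -> list[str]:
    """Return the list of violated constraints (empty list = spec met)."""
    problems = []
    body = reply.rstrip()
    if body.endswith(SIGN_OFF):
        body = body[: -len(SIGN_OFF)]  # don't count the sign-off as prose
    else:
        problems.append("missing sign-off line")
    paragraphs = [p for p in body.strip().split("\n\n") if p.strip()]
    if len(paragraphs) != 4:
        problems.append(f"expected 4 paragraphs, got {len(paragraphs)}")
    lowered = reply.lower()
    for phrase in FORBIDDEN:
        if phrase in lowered:
            problems.append(f"forbidden phrase: {phrase!r}")
    return problems
```

An empty list means the draft already matches the spec; anything else is a reason to re-prompt or edit.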

Next: a worked example that uses all five slots together — role, task, context, input, and output spec.
