The anatomy of a prompt
Worked Example: All Five Slots Together
Time to see the full skeleton in one prompt. Hagar's friend on the engineering team asks her to glance at a checkout button before it ships — they want a quick accessibility sanity check. She doesn't write code, but she knows the prompt skeleton. Here's exactly what she sends.
## The prompt
Role: You are a senior frontend engineer reviewing a junior's code.
Task: Review the React component below for accessibility issues only.
Context: This will run on a public-facing checkout page; it must meet WCAG 2.1 AA.
Input:
```tsx
function CheckoutButton({ onPay }) {
  return <div onClick={onPay} style={{ background:'blue', color:'white', padding:8 }}>Pay</div>
}
```
Output: a numbered list of issues, each with the WCAG criterion and a 1-line fix. Max 5 items.
Every slot is filled:
| Slot | Filled with |
|---|---|
| Role | senior frontend engineer reviewing junior code |
| Task | review for accessibility issues only |
| Context | public-facing checkout, must meet WCAG 2.1 AA |
| Input | the actual React component |
| Output | numbered list, each with a WCAG criterion and one-line fix, max 5 items |
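Filled slots like these can be assembled mechanically. A minimal sketch in TypeScript of how the skeleton might be turned into a single prompt string (the `PromptSkeleton` interface and `buildPrompt` name are our own, not part of the lesson):

```typescript
// Hypothetical helper: joins the five-slot skeleton into one prompt string.
interface PromptSkeleton {
  role: string;
  task: string;
  context: string;
  input: string;   // e.g. a code snippet, already wrapped in a fence by the caller
  output: string;  // the output spec, including any caps like "Max 5 items"
}

function buildPrompt(s: PromptSkeleton): string {
  return [
    `Role: ${s.role}`,
    `Task: ${s.task}`,
    `Context: ${s.context}`,
    `Input:\n${s.input}`,
    `Output: ${s.output}`,
  ].join('\n');
}

const prompt = buildPrompt({
  role: "You are a senior frontend engineer reviewing a junior's code.",
  task: 'Review the React component below for accessibility issues only.',
  context: 'This will run on a public-facing checkout page; it must meet WCAG 2.1 AA.',
  input: '<the CheckoutButton component, fenced as tsx>',
  output: 'a numbered list of issues, each with the WCAG criterion and a 1-line fix. Max 5 items.',
});
```

The point is not the helper itself but that every slot is a plain string you fill in; leaving one empty is immediately visible.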
## The captured output
# Accessibility Issues
1. **WCAG 4.1.2 (Name, Role, Value)** - Not keyboard accessible; use `<button>` element instead of `<div>`
2. **WCAG 2.1.1 (Keyboard)** - Missing keyboard interaction; add `tabIndex={0}` and `onKeyDown` handler, or use semantic `<button>`
3. **WCAG 1.4.3 (Contrast Minimum)** - Blue background with white text may fail contrast ratio; verify 4.5:1 minimum or use darker blue
4. **WCAG 4.1.2 (Name, Role, Value)** - Missing `role="button"` attribute if keeping `<div>`; semantic `<button>` provides this automatically
5. **WCAG 2.5.5 (Target Size)** - 8px padding may create too small a touch target; ensure minimum 44×44px clickable area
**Recommended fix:**
```tsx
<button onClick={onPay} style={{ background:'#0000CC', color:'white', padding:'12px 24px' }}>Pay</button>
```
*Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.*
## Why this output is good
Look at what came back:
- **Five items, exactly as asked.** The `Max 5 items` instruction was honoured.
- **Each item names a WCAG criterion.** "WCAG 4.1.2", "WCAG 1.4.3" — the output spec said this and the model delivered.
- **Each item has a one-line fix.** Hagar can hand this list to her engineer friend without rewriting it.
- **Bonus: a recommended fix code block.** The model went slightly beyond the spec in a useful way — common when you give a clear role.
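Item 3 is the only one Hagar cannot verify by eye, because the 4.5:1 threshold comes from a defined formula. As a sketch, here is the WCAG 2.1 relative-luminance contrast check (the formula is from the WCAG spec; the function names are our own):

```typescript
// Convert an 8-bit sRGB channel to linear light, per the WCAG 2.1 definition.
function channelToLinear(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an sRGB colour.
function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * channelToLinear(r) + 0.7152 * channelToLinear(g) + 0.0722 * channelToLinear(b);
}

// Contrast ratio between two colours: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// White text on pure CSS 'blue' (#0000FF):
const ratio = contrastRatio([255, 255, 255], [0, 0, 255]);
console.log(ratio.toFixed(2)); // ≈ 8.59
```

Running this shows white on pure blue is roughly 8.6:1, comfortably above the AA minimum — which means the model's "may fail... verify" hedge in item 3 was appropriately cautious rather than a confirmed failure.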
## What you'd lose without each slot
A quick thought experiment. Imagine running the same prompt with one slot removed:
| Slot removed | Likely degradation |
|---|---|
| Role | Generic friendly "this code could be better" tone, no WCAG citations. |
| Task ("accessibility issues only") | The model would also comment on style, naming, performance — useful but not what was asked. |
| Context (WCAG 2.1 AA) | Vague "you should make this accessible" suggestions, no specific criteria. |
| Input | The model would invent a generic checkout button to critique. |
| Output spec | Long flowing prose. Hard to scan. No numbered list. No 5-item cap. |
This is the entire point of the skeleton: each slot you fill removes one source of guesswork. With all five filled, the model has almost no room to give you anything but what you asked for.
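A filled output spec has another benefit: it is mechanically checkable. A hedged sketch of testing a reply against the spec Hagar used (the `meetsSpec` name and the regexes are our own assumptions about how you might check it):

```typescript
// Hypothetical checker: does a model reply satisfy the output spec
// "a numbered list of issues, each with the WCAG criterion ... Max 5 items"?
function meetsSpec(reply: string, maxItems: number = 5): boolean {
  const items = reply
    .split('\n')
    .filter((line) => /^\d+\.\s/.test(line.trim())); // numbered-list lines only
  if (items.length === 0 || items.length > maxItems) return false;
  // Every item must cite a criterion like "WCAG 4.1.2".
  return items.every((line) => /WCAG \d+\.\d+\.\d+/.test(line));
}
```

The captured output above would pass this check; a long flowing-prose answer, or a list with a sixth item, would not — which is exactly the guesswork the spec removed.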
*Next module: ten reusable prompt patterns — when you reach for which one.*