The code-generation skeleton
Why "write me a function" fails
Meet Karim. He's a mid-level backend engineer who started using AI coding assistants six months ago. He gets correct-looking code that breaks subtly in production. He wants to write prompts that produce code he'd actually accept in code review. By the end of this course, he'll ship a real 200-line GitHub PR using only prompts.
Karim's first instinct is the same as most engineers'. He types:
Write a Python function to dedupe a list.
Here is what Claude Sonnet 4.5 actually returned:
# Python function to dedupe a list

Here are several approaches to remove duplicates from a list:

## 1. Using set (simplest, but doesn't preserve order)

```python
def dedupe_list(lst):
    """Remove duplicates from a list (order not preserved)"""
    return list(set(lst))
```

## 2. Using dict.fromkeys() (preserves order, Python 3.7+)

```python
def dedupe_list(lst):
    """Remove duplicates while preserving order"""
    return list(dict.fromkeys(lst))
```
Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.
The model returned two functions with the same name. It guessed at the trade-off Karim cared about. It did not return tests. It did not commit to a signature. The output is a tutorial, not a unit of code Karim can paste into a file.
Why did this happen? The prompt left every important decision to the model:
- Should order be preserved?
- Are strings case-sensitive?
- Is performance the goal, or readability?
- What signature should the function use?
- Should the result include tests?
When you don't pin those down, the model picks a defensible default — and "defensible" usually means show the user multiple options and let them choose. That's helpful for a tutorial. It's terrible for a code change.
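You can watch the ambiguity bite by running the model's two suggestions side by side. This is a minimal repro, not part of the course material; the input list is invented to surface both the ordering and the case-sensitivity questions at once:

```python
# The two approaches from the model's reply disagree on ordering.
items = ["b", "a", "B", "a", "c"]

# Approach 1: set-based. Element order is not guaranteed.
via_set = list(set(items))

# Approach 2: dict.fromkeys. First-seen order is guaranteed (Python 3.7+).
via_dict = list(dict.fromkeys(items))

# Same elements either way, and both are case-sensitive ("B" != "b") ...
print(sorted(via_set) == sorted(via_dict))  # True
# ... but only one of them commits to an order.
print(via_dict)  # ['b', 'a', 'B', 'c']
```

Two functions with the same name, two different behaviors: exactly the decision Karim's prompt silently delegated.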
This course gives you a four-block skeleton — INTENT, CONSTRAINTS, TESTS, FORMAT — that forces the model into one specific function with one specific shape. You'll see the same dedupe task done correctly in lesson 5.
The course flow at a glance:
Vague vs skeleton-shaped prompts produce wildly different outputs:
Vague prompt vs skeleton-shaped prompt
Vague: "write me a function to dedupe a list"
- Model picks defaults you didn't pin
- Output is a tutorial, not a unit of code
- You burn time stripping prose and choosing
Skeleton: INTENT + CONSTRAINTS + TESTS + FORMAT
- Signature pinned by you, not the model
- Asserts prove correctness before commit
- Output drops straight into a .py file
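As a preview of where this lands (a sketch only; the signature, docstring, and test cases below are illustrative assumptions, and lesson 5 shows the real worked version), a skeleton-shaped prompt would pin the output down to one function with its asserts attached:

```python
def dedupe(items: list[str]) -> list[str]:
    """Return items with duplicates removed, preserving first-seen order.

    Comparison is case-sensitive: "A" and "a" are distinct.
    """
    return list(dict.fromkeys(items))


# TESTS block: asserts that must pass before the code is accepted.
assert dedupe([]) == []
assert dedupe(["a", "b", "a"]) == ["a", "b"]
assert dedupe(["A", "a", "A"]) == ["A", "a"]  # case-sensitive
```

One name, one signature, behavior chosen by the prompt author rather than the model.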
Next up: the four-block skeleton itself.