The code-generation skeleton

Why "write me a function" fails

Meet Karim. He's a mid-level backend engineer who started using AI coding assistants six months ago. He gets correct-looking code that breaks subtly in production. He wants to write prompts that produce code he'd actually accept in code review. By the end of this course, he'll ship a real 200-line GitHub PR using only prompts.

Karim's first instinct is the same as most engineers'. He types:

Write a Python function to dedupe a list.

Here is what Claude Sonnet 4.5 actually returned:

# Python function to dedupe a list

Here are several approaches to remove duplicates from a list:

## 1. Using set (simplest, but doesn't preserve order)
def dedupe_list(lst):
    """Remove duplicates from a list (order not preserved)"""
    return list(set(lst))

## 2. Using dict.fromkeys() (preserves order, Python 3.7+)
def dedupe_list(lst):
    """Remove duplicates while preserving order"""
    return list(dict.fromkeys(lst))

Captured from Claude Sonnet 4.5 (claude-sonnet-4-5) on 2026-04-27. Re-runs may differ slightly.

The model returned two functions with the same name. It guessed at the trade-off Karim cared about. It did not return tests. It did not commit to a signature. The output is a tutorial, not a unit of code Karim can paste into a file.
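Pasting that output verbatim is worse than it looks. In Python, a second def with the same name silently rebinds it, so only the last definition survives — Karim would ship the order-preserving version without ever choosing it. A minimal sketch of the collision:

```python
def dedupe_list(lst):
    """Remove duplicates (order not preserved)."""
    return list(set(lst))

def dedupe_list(lst):  # same name: silently replaces the first definition
    """Remove duplicates while preserving order."""
    return list(dict.fromkeys(lst))

# Only the second definition exists at this point.
print(dedupe_list([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```

No error, no warning — the interpreter just overwrites the name, which is exactly the kind of subtle production surprise Karim is trying to avoid.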

Why did this happen? The prompt left every important decision to the model:

  • Should order be preserved?
  • Are strings case-sensitive?
  • What signature should def use?
  • Should the result include tests?

When you don't pin those down, the model picks a defensible default — and "defensible" usually means show the user multiple options and let them choose. That's helpful for a tutorial. It's terrible for a code change.

This course gives you a four-block skeleton — INTENT, CONSTRAINTS, TESTS, FORMAT — that forces the model into one specific function with one specific shape. You'll see the same dedupe task done correctly in lesson 5.
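To preview the shape (not the lesson-5 answer itself), here is a sketch of the kind of single, pinned function a skeleton-shaped prompt produces — the name `dedupe`, the `case_sensitive` flag, and the specific asserts are illustrative assumptions, chosen by the prompter rather than the model:

```python
def dedupe(items: list[str], *, case_sensitive: bool = True) -> list[str]:
    """Return items with duplicates removed, preserving first-seen order."""
    seen: set[str] = set()
    out: list[str] = []
    for item in items:
        # Normalize the comparison key when case-insensitive matching is requested.
        key = item if case_sensitive else item.lower()
        if key not in seen:
            seen.add(key)
            out.append(item)
    return out

# The TESTS block pins behavior before the code is ever committed.
assert dedupe(["a", "b", "a"]) == ["a", "b"]
assert dedupe(["A", "a"], case_sensitive=False) == ["A"]
assert dedupe([]) == []
assert dedupe(["x"]) == ["x"]
```

One function, one signature, order semantics and case handling decided up front, and asserts that run before commit — every open question from the vague prompt is closed by the prompter.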

Vague vs skeleton-shaped prompts produce wildly different outputs:

Vague prompt vs skeleton-shaped prompt

Vague — "write me a function to dedupe a list" (a tutorial):
  • Functions returned: 2 (same name)
  • Includes tests: no
  • Type hints: no
  • Ships without edits: no
  • Cons: the model picks defaults you didn't pin; the output is a tutorial, not a unit of code; you burn time stripping prose and choosing.

Skeleton — INTENT + CONSTRAINTS + TESTS + FORMAT (pasteable):
  • Functions returned: 1
  • Includes tests: yes (4 asserts)
  • Type hints: yes
  • Ships without edits: yes
  • Pros: the signature is pinned by you, not the model; asserts prove correctness before commit; the output drops straight into a .py file.

Next up: the four-block skeleton itself.
