Lesson 1 of 42 · Why Prompts Matter

What an LLM Actually Does

Hagar just started her first marketing job in Cairo. She has used ChatGPT a handful of times for fun, but never deliberately for work. Tomorrow her manager wants three landing-page headlines on her desk. She types "write me three headlines" and gets back something generic enough to fit any product on Earth. That feeling, that the model "almost helped but not really", is where this course begins.

Foundation knowledge map — the 9 modules and Hagar's arc

Claude Sonnet 4.5 · captured live
Prompt: Write a 6-word product tagline for an Egyptian specialty coffee roaster.
Output: [shown in the live capture]

The one-sentence model

A large language model is a system that, given some text, predicts what text most likely comes next. It does not know your company, your audience, or what you actually want. It only sees the words you gave it.
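That "predict what comes next" idea is easier to feel with a toy. The sketch below is plain Python over a made-up three-sentence corpus (real LLMs use neural networks over tokens, not word counts), and it picks the next word purely from how often each word followed the previous one:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for training data.
corpus = (
    "write three headlines for the launch . "
    "write three emails for the team . "
    "write three taglines for the product ."
).split()

# For each word, count which words followed it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return each candidate next word with its probability."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "write", the toy model is certain: "three" always followed it.
print(next_word_probs("write"))  # {'three': 1.0}
# After "the", it has seen three continuations, each equally likely.
print(next_word_probs("the"))
```

Swap the word counts for a neural network trained on billions of documents and you have the gist of an LLM. It also previews the problem this course solves: an underspecified prompt like "write me three headlines" lands on whatever continuation is most common across everything the model has seen, i.e. the generic average.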

That sounds reductive, but it explains almost every "weird" output you'll ever get:

| What you typed | What the model heard | What you got back |
| --- | --- | --- |
| "Write me an email." | "Make any email-shaped thing." | A blank template with [Recipient name] placeholders. |
| "Make this better." | "Add words that sound like 'better'." | Buzzwords, no specifics. |
| "Summarise this meeting." | "Compress these lines into fewer lines." | A bullet list — but maybe not the bullets you needed. |

The model is not lazy and not stupid. It is doing exactly what you asked: producing plausible next text. The skill of prompt engineering is making sure that "plausible next text" and "what I actually need" are the same thing.

Why this mental model matters

Once you stop thinking of an LLM as a person who "gets it" and start thinking of it as a very good autocomplete with general knowledge, three things become obvious:

  1. It cannot read your mind. Unsaid context is invisible.
  2. It will fill silence with averages. No tone specified? It picks the average tone.
  3. You can shape the output by shaping the input. Every constraint you write is a constraint on what comes back.

Want to see "predict the next token" up close? Type a half-sentence into the live GPT-2 below and watch which words the model considers next, with their probabilities lit up:

Live GPT-2 — type a prompt and watch which tokens the model considers next

Across the next nine modules, Hagar — and you — will turn that idea into a working skill. Modules 1–5 take you from confused user to someone who can write, repair, and reuse prompts in a normal workday. Modules 6–9 take you into the API and toward shipping a real system prompt.

Next: we'll prove this with one of the cleanest before/after pairs in the course.

