Why prompts matter
What an LLM actually does
Hagar just started her first marketing job in Cairo. She has used ChatGPT a handful of times for fun, but never deliberately for work. Tomorrow her manager wants three landing-page headlines on her desk. She types "write me three headlines" — and gets back something generic enough to fit any product on Earth. That feeling — that the model "almost helped but not really" — is where this course begins.
The one-sentence model
A large language model is a system that, given some text, predicts what text most likely comes next. It does not know your company, your audience, or what you actually want. It only sees the words you gave it.
That sounds reductive, but it explains almost every "weird" output you'll ever get:
| What you typed | What the model heard | What you got back |
|---|---|---|
| "Write me an email." | "Make any email-shaped thing." | A blank template with [Recipient name] placeholders. |
| "Make this better." | "Add words that sound like 'better'." | Buzzwords, no specifics. |
| "Summarise this meeting." | "Compress these lines into fewer lines." | A bullet list — but maybe not the bullets you needed. |
The model is not lazy and not stupid. It is doing exactly what you asked: producing plausible next text. The skill of prompt engineering is making sure that "plausible next text" and "what I actually need" are the same thing.
Why this mental model matters
Once you stop thinking of an LLM as a person who "gets it" and start thinking of it as a very good autocomplete with general knowledge, three things become obvious:
- It cannot read your mind. Unsaid context is invisible.
- It will fill silence with averages. No tone specified? It picks the average tone.
- You can shape the output by shaping the input. Every constraint you write is a constraint on what comes back.
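That third point is the whole job. As a concrete illustration, compare Hagar's vague prompt with a constrained rewrite. Every detail in the constrained version (the product, audience, and tone) is an invented placeholder, not something the course prescribes — the point is only that each added line removes a gap the model would otherwise fill with an average:

```python
# A vague prompt leaves every gap for the model to fill with averages.
vague = "Write me three headlines."

# A constrained prompt: each line narrows what "plausible next text" can be.
# Product, audience, and tone here are hypothetical placeholders.
constrained = "\n".join([
    "Write 3 landing-page headlines.",
    "Product: a budgeting app for freelancers.",
    "Audience: first-time freelancers in Egypt.",
    "Tone: friendly, concrete, no buzzwords.",
    "Each headline: under 8 words.",
])

print(constrained)
```

Nothing about the model changed between the two prompts — only the input did, and the output follows the input.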
Want to see "predict the next token" up close? Type a half-sentence into the live GPT-2 below and watch which words the model considers next, with their probabilities lit up:
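The same mechanism can be sketched offline in a few lines of Python. A real LLM predicts the next token with a neural network trained on enormous amounts of text; the toy below does the crudest possible version — counting which word follows which in a tiny made-up corpus — but the shape of the answer is the same: a probability over candidate next words.

```python
from collections import Counter, defaultdict

# A hypothetical mini-corpus; real models train on vastly more text.
corpus = (
    "write me three headlines . write me an email . "
    "write me a summary . send me an email ."
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return candidate next words with their probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("me"))
# → {'three': 0.25, 'an': 0.5, 'a': 0.25}
```

After "me", this toy model splits its probability mass between "an", "three", and "a" — plausible continuations, none of them necessarily the one you wanted. Scale that idea up by many orders of magnitude and you have the system you're prompting.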
Across the next nine modules, Hagar — and you — will turn that idea into a working skill. Modules 1–5 take you from confused user to someone who can write, repair, and reuse prompts in a normal workday. Modules 6–9 take you into the API and toward shipping a real system prompt.
Next: we'll prove this with one of the cleanest before/after pairs in the course.