Working inside Cursor / Claude Code / Aider / Copilot
Copilot: shaping completions with comments
GitHub Copilot, Tabnine, JetBrains Code Completion, and similar inline completion tools don't take prompts directly. They take the file as you wrote it up to the cursor. The "prompt" is the code above. That includes the imports, the function name, the parameter names, the docstring, and any comments you wrote on the way down.
This makes the prompt-engineering pattern simple but counterintuitive: you don't write a separate prompt — you write the file in a way that makes the next completion obvious.
Compare two ways to start a function. Both are syntactically valid. The completion you get is wildly different.
Weak setup:
def parse(s):
    # cursor here
The completion has nothing to anchor on. Copilot picks a generic JSON parse, or makes up an internal Parser class, or emits a placeholder. You'll iterate three times before getting close.
Strong setup:
def parse_invoice_line(line: str) -> dict:
    """
    Parse a single invoice line like '3 x Latte @ 65.50' into a dict
    with keys: qty (int), item (str), unit_price (float).
    Raises ValueError if the line doesn't match the expected format.
    """
    # cursor here
Now the completion is almost forced. The function name, type hint, and docstring narrow the search to one obvious implementation. You'll get something close to correct on the first try.
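With that setup, the completion is usually a direct implementation of the docstring. A plausible sketch of what it elicits — the regex and error message below are illustrative, not guaranteed Copilot output:

```python
import re

def parse_invoice_line(line: str) -> dict:
    """
    Parse a single invoice line like '3 x Latte @ 65.50' into a dict
    with keys: qty (int), item (str), unit_price (float).
    Raises ValueError if the line doesn't match the expected format.
    """
    match = re.fullmatch(r"(\d+) x (.+) @ (\d+(?:\.\d+)?)", line.strip())
    if not match:
        raise ValueError(f"Unrecognized invoice line: {line!r}")
    return {
        "qty": int(match.group(1)),
        "item": match.group(2),
        "unit_price": float(match.group(3)),
    }
```

Note that every field of the result was pinned down by the docstring: the keys, the types, even the failure mode.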
The pattern: the more constrained the surface area above the cursor, the better the completion. Specifically, these elements move the completion toward correctness:
| Element | What it constrains |
|---|---|
| Specific function name | The high-level intent |
| Type hints on parameters | The input shape |
| Type hint on return | The output shape |
| Docstring with example I/O | The exact transformation |
| Imports already present | The libraries the model should use |
| A nearby similar function | The team's style |
You'll notice that all of these are things you should be writing anyway. They're the documentation a careful colleague would write before implementing. Inline-completion tools turn that discipline into a productivity multiplier: the more careful your setup, the better your completion.
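The last row of the table deserves a concrete example. In a file that already contains a sibling parser, a completion started underneath tends to mirror its structure, naming, and error handling. The functions below are hypothetical, but they show the mechanism: the second function inherits the first one's shape (module-level compiled pattern, `fullmatch`, explicit `ValueError`):

```python
import re

# Existing function in the file: this is what establishes the "house style".
_DISCOUNT_RE = re.compile(r"(\d+)% off (.+)")

def parse_discount_line(line: str) -> dict:
    """Parse a line like '10% off Latte' into {'pct': int, 'item': str}."""
    m = _DISCOUNT_RE.fullmatch(line.strip())
    if not m:
        raise ValueError(f"bad discount line: {line!r}")
    return {"pct": int(m.group(1)), "item": m.group(2)}

# A completion started here tends to copy that shape rather than invent a new one:
_QTY_RE = re.compile(r"(\d+) x (.+)")

def parse_qty_line(line: str) -> dict:
    """Parse a line like '3 x Latte' into {'qty': int, 'item': str}."""
    m = _QTY_RE.fullmatch(line.strip())
    if not m:
        raise ValueError(f"bad qty line: {line!r}")
    return {"qty": int(m.group(1)), "item": m.group(2)}
```

This is also why one sloppy function in a file tends to breed more: the model imitates whatever is nearby, good or bad.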
A useful trick when the completion is wrong: write a comment right above the cursor describing what you expected. Don't delete the wrong completion; leave it. The next completion sees both the wrong attempt and your correction comment, and adjusts. Three or four turns of this and the model converges on the implementation you wanted. Here is what the completion engine "sees" above the cursor after two such corrections:
def parse_invoice_line(line: str) -> dict:
    """..."""
    # Note: must handle whitespace around the 'x' and '@' tokens
    # Note: unit_price is float, not int — '@ 65' returns 65.0
    # cursor here
Each comment is a constraint the next completion respects. You're effectively writing a prompt — but in the natural language a future reader will also benefit from. The comments stay in the file as documentation; they don't disappear like a chat message.
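Under those two comments, the next completion typically loosens the whitespace handling while keeping the float conversion. A plausible converged result — again a sketch of likely output, not a guaranteed one:

```python
import re

def parse_invoice_line(line: str) -> dict:
    """
    Parse a single invoice line like '3 x Latte @ 65.50' into a dict
    with keys: qty (int), item (str), unit_price (float).
    Raises ValueError if the line doesn't match the expected format.
    """
    # Note: must handle whitespace around the 'x' and '@' tokens
    # Note: unit_price is float, not int; '@ 65' returns 65.0
    match = re.fullmatch(r"(\d+)\s*x\s*(.+?)\s*@\s*(\d+(?:\.\d+)?)", line.strip())
    if not match:
        raise ValueError(f"Unrecognized invoice line: {line!r}")
    return {
        "qty": int(match.group(1)),
        "item": match.group(2),
        "unit_price": float(match.group(3)),  # float() keeps '@ 65' as 65.0
    }
```

The correction comments survive in the final code, exactly where a future reader will want them.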
This pattern matters more than it looks, because inline completion is the way most engineers interact with an LLM most often. A team that writes good function names and docstrings gets dramatically better completions than a team that doesn't, without changing tools, models, or any settings.
Next up: conventional commit messages, and why they're the easiest prompt-engineering win.