Capstone — ship a 200-line PR via prompts only
Picking the right issue
The wrong issue burns a week. The right issue ships in two evenings. The difference is rarely about how hard the bug is — it's about how clearly the bug is specified and how isolated the change is from the rest of the codebase.
Use this rubric when scanning a repo's issue tracker:
| Property | Good signal | Bad signal |
|---|---|---|
| Reproducer | Issue includes the input + actual + expected output | Issue says "doesn't work" with no details |
| Scope | Bug is in one file or one function | Bug touches "the whole pipeline" |
| Tests | Repo has decent test coverage of the affected area | Repo has no coverage in the area you're touching |
| Maintainer activity | Recent merges, fast review turnaround | PRs sit open for months |
| Familiarity | You use the project; you understand the public surface | You discovered the project last week |
| Size estimate | Maintainer estimates ≤ 1 day | Tagged "epic" or "needs design" |
You want as many "good signal" rows as possible. One or two "bad signal" rows are fine if the remaining signals are strong. Three or more, walk away.
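The walk-away rule is mechanical enough to sketch in code. This is a minimal tally, not part of any real tooling; the property names mirror the table above and the threshold of three bad rows comes from the rule itself:

```python
# Tally rubric signals for a candidate issue and decide whether to take it.
# Each property mirrors one row of the table above; True means "good signal".
RUBRIC = [
    "reproducer",          # input + actual + expected output in the issue
    "scope",               # bug isolated to one file or function
    "tests",               # decent coverage of the affected area
    "maintainer_activity", # recent merges, fast review turnaround
    "familiarity",         # you already use the project
    "size_estimate",       # maintainer estimates <= 1 day
]

def should_take(signals: dict) -> bool:
    """Walk away when three or more rubric rows show a bad signal."""
    bad = sum(1 for prop in RUBRIC if not signals.get(prop, False))
    return bad <= 2

# Example: one weak row (coverage), everything else good -> worth taking.
issue = {
    "reproducer": True,
    "scope": True,
    "tests": False,
    "maintainer_activity": True,
    "familiarity": True,
    "size_estimate": True,
}
print(should_take(issue))  # → True
```

The point of writing it down is the same as the rubric's: it forces a yes/no call per row instead of a vague overall impression.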
The "good first issue" tag is a useful filter but not a guarantee. Some repos use it generously; others use it sparingly. Read the issue itself, not the tag. A "good first issue" with vague reproduction steps is worse than an untagged bug with a perfect reproducer.
A specific anti-pattern to avoid: the open-ended refactor. Issues that say "this code is messy, please clean up" are traps. There's no objective definition of done. The maintainer might have specific opinions you don't share. You'll do the work, the PR will get bikeshedded, and you'll close it without merging.
Concrete bug fixes are the sweet spot. They have an objective definition of done — the test that demonstrates the bug now passes, plus all existing tests still pass. You can prompt your way through the entire fix:
- Localization prompt (Module 2) — "find the bug given this failing test"
- Codegen skeleton (Module 1) — for any new function the fix introduces
- Type-safety lock (Module 3) — if the fix touches types
- Code review prompt (Module 4) — self-review before pushing
- Conventional commit + PR description (Module 5) — for the merge
Five prompt patterns from five different modules, end-to-end on one PR. That's exactly what the rubric in lesson 4 measures.
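The "definition of done" above is also checkable by machine: run the regression test that demonstrates the bug, then the full suite. Here is a sketch under assumptions — that the target repo uses pytest and that `tests/test_issue_1234.py` stands in for whatever regression test your fix adds:

```python
# Two-step definition-of-done check for a bug-fix PR.
# Assumes a pytest-based repo; the test path below is hypothetical.
import subprocess
import sys

def run(args: list) -> bool:
    """Run a command and report whether it exited cleanly."""
    return subprocess.run(args).returncode == 0

def definition_of_done(regression_test: str) -> bool:
    # Step 1: the test that demonstrates the bug now passes.
    if not run([sys.executable, "-m", "pytest", regression_test]):
        return False
    # Step 2: all existing tests still pass.
    return run([sys.executable, "-m", "pytest"])

if __name__ == "__main__":
    ok = definition_of_done("tests/test_issue_1234.py")
    sys.exit(0 if ok else 1)
```

Wiring this into a pre-push hook or a one-line script keeps the bar objective: the PR is done when this exits 0, not when the diff "feels" finished.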
If you cannot find a clean issue on a repo you know, there are two safe alternatives:
- Documentation fix on a real repo. Find a section of docs that's wrong or outdated. The PR is small; the maintainers usually merge fast; it teaches you the contribution flow without language-specific complexity.
- A small feature on your own side project. If you have a personal repo with one or two users, that counts as "real." The advantage is total control of the bar; the disadvantage is the absence of a third-party reviewer.
Whatever you pick, write down the issue link, your scoping decisions, and your initial size estimate before you start prompting. That document becomes the first artifact of the capstone.
Next up: the code-explainer system prompt for getting up to speed on the file you'll touch.