GitHub Copilot prompts: how to get better code completions
GitHub Copilot turns the comments above your cursor into code, and the specificity of those comments decides whether you get generic boilerplate or something you can ship. This guide shows the prompt patterns that produce production-ready code, with before/after examples for each use case.
How GitHub Copilot reads your prompts
GitHub Copilot reads the code and comments above your cursor as context. The more specific your comments — function purpose, input types, edge cases, expected output — the more accurate the completion. A vague comment like "// sort the array" gets a basic bubble sort. A specific comment like "// sort users by lastLogin descending, nulls last, stable sort" gets exactly what you need.
Copilot also uses the file's existing imports, type definitions, and surrounding function signatures as context. A well-typed codebase with clear interfaces produces better completions than a loosely-typed one. The prompt isn't just your comment — it's everything Copilot can see in the file and related files.
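To see why file context matters, here is a sketch of the kind of typed context that helps. The User interface and sortByLastLogin function are hypothetical, but with an interface like this in view, the one-line comment from above is enough to steer Copilot toward a correctly typed implementation:

```typescript
// Existing context Copilot can see: a typed interface already in the file.
interface User {
  id: string;
  lastLogin: Date | null;
}

// With the interface above in view, the specific comment from earlier --
// "sort users by lastLogin descending, nulls last, stable sort" --
// tends to produce something like this:
function sortByLastLogin(users: User[]): User[] {
  return [...users].sort((a, b) => {
    if (a.lastLogin === null && b.lastLogin === null) return 0;
    if (a.lastLogin === null) return 1;  // nulls last
    if (b.lastLogin === null) return -1;
    return b.lastLogin.getTime() - a.lastLogin.getTime(); // descending
  });
}
```

Array.prototype.sort has been guaranteed stable since ES2019, which is why the "stable sort" constraint costs nothing extra here.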
GitHub Copilot prompt examples by use case
Function generation
Vague comment → weak output
// validate the form

Produces a generic function that checks if fields are empty. No specific validation rules, no error messages, no type safety.
Specific comment → accurate output
// Validate registration form:
// - email: valid format, max 254 chars
// - password: min 12 chars, 1 uppercase,
// 1 number, 1 special char
// - name: 2-100 chars, letters only
// Return field-level errors object

Produces a typed validation function with regex checks, field-specific error messages, and the exact return shape you specified.
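A completion for a spec like that might look roughly like this sketch. The RegistrationForm shape, the error messages, and the exact regexes are illustrative assumptions, not guaranteed Copilot output:

```typescript
interface RegistrationForm {
  email: string;
  password: string;
  name: string;
}

// Field-level errors object, as the prompt specifies.
type FieldErrors = Partial<Record<keyof RegistrationForm, string>>;

function validateRegistration(form: RegistrationForm): FieldErrors {
  const errors: FieldErrors = {};

  // email: valid format, max 254 chars
  if (form.email.length > 254 || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
    errors.email = "Enter a valid email address (max 254 characters).";
  }

  // password: min 12 chars, 1 uppercase, 1 number, 1 special char
  if (
    form.password.length < 12 ||
    !/[A-Z]/.test(form.password) ||
    !/[0-9]/.test(form.password) ||
    !/[^A-Za-z0-9]/.test(form.password)
  ) {
    errors.password =
      "Password needs 12+ characters with an uppercase letter, a number, and a special character.";
  }

  // name: 2-100 chars, letters only
  if (!/^[A-Za-z]{2,100}$/.test(form.name)) {
    errors.name = "Name must be 2-100 characters, letters only.";
  }

  return errors;
}
```

Note how every bullet in the comment maps to a concrete check: the prompt is effectively the function's spec.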
Unit tests
Vague comment → shallow tests
// test the calculateTotal function

Produces 2-3 tests with obvious inputs. Misses edge cases like empty arrays, negative values, or rounding errors.
Specific comment → thorough tests
// Test calculateTotal(items: CartItem[]):
// - empty array returns 0
// - single item matches item.price * qty
// - mixed taxable/non-taxable items
// - quantity > 1000 doesn't overflow
// - rounding to 2 decimal places
// Use describe/it blocks, name each test

Produces named test cases covering all specified scenarios, including the edge cases that cause production bugs.
Code review
Vague comment → surface review
// review this code

Produces generic feedback about naming and formatting. Misses logic errors and performance issues.
Specific comment → actionable review
// Review for:
// - race conditions in concurrent access
// - SQL injection vectors
// - memory leaks from unclosed connections
// - error handling gaps
// List each issue with severity and fix

Produces a structured review identifying specific issues with severity ratings and suggested fixes.
Refactoring
Vague comment → cosmetic changes
// refactor this

Renames variables and adds comments. Doesn't change the structure or address the real problem.
Specific comment → structural refactor
// Refactor to extract the validation
// logic into a separate validateInput()
// function. The main handler should
// only orchestrate: validate -> process
// -> respond. Keep existing error codes.

Produces a clean separation of concerns with the validation extracted and the handler simplified.
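The requested structure might come out looking like this sketch. The request/response shapes and the validation rule are placeholders (a real handler would use your framework's types, e.g. Express Request/Response); the point is the shape of the orchestration:

```typescript
// Hypothetical request/response shapes for the sketch.
interface Req { body: { amount?: unknown } }
interface Res { status: number; payload: unknown }

// Extracted validation, as the prompt requests. Returns an error with the
// existing error code, or null if the input is valid.
function validateInput(body: Req["body"]): { code: number; message: string } | null {
  if (typeof body.amount !== "number" || body.amount <= 0) {
    return { code: 400, message: "amount must be a positive number" };
  }
  return null;
}

function processOrder(amount: number): { total: number } {
  return { total: Math.round(amount * 100) / 100 };
}

// The handler now only orchestrates: validate -> process -> respond.
function handler(req: Req): Res {
  const error = validateInput(req.body);
  if (error) return { status: error.code, payload: { error: error.message } };
  const result = processOrder(req.body.amount as number);
  return { status: 200, payload: result };
}
```

Because the prompt pinned down the target structure and the "keep existing error codes" constraint, the refactor changes the architecture without changing observable behavior.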
Documentation
Vague comment → generic docs
// add comments

Produces obvious comments that restate the code. "Returns true if valid" above a function called isValid.
Specific comment → useful docs
// JSDoc for public API:
// - document purpose, params, return type
// - include @example with real usage
// - note side effects and thrown errors
// reference related functions

Produces complete JSDoc with typed params, an example call, and notes on edge cases and side effects.
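Output for a prompt like that might resemble this sketch. The parseDuration function, and the formatDuration it references, are hypothetical:

```typescript
/**
 * Parses a human-readable duration string into milliseconds.
 *
 * Pure function; no side effects.
 *
 * @param input - A duration like "5s", "2m", or "1h".
 * @returns The duration in milliseconds.
 * @throws {RangeError} If the input does not match `<number><s|m|h>`.
 * @example
 * parseDuration("5s"); // 5000
 * parseDuration("2m"); // 120000
 * @see formatDuration - the inverse operation.
 */
function parseDuration(input: string): number {
  const match = /^(\d+)([smh])$/.exec(input.trim());
  if (!match) throw new RangeError(`Unrecognized duration: ${input}`);
  const value = Number(match[1]);
  const unit = { s: 1000, m: 60_000, h: 3_600_000 }[match[2] as "s" | "m" | "h"];
  return value * unit;
}
```

Every bullet in the prompt shows up in the output: purpose, params, return type, a runnable @example, the thrown error, and the related-function reference.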
Copilot Chat vs inline prompts — which to use
GitHub Copilot has two prompting modes. Inline comments are best for generation tasks: write functions, generate tests, produce documentation, create boilerplate. The comment sits above the cursor; Copilot reads it and completes the code below. Copilot Chat (the sidebar) is best for review and explanation tasks: "explain this function," "find the bug in this file," "suggest how to handle this edge case." Chat gives you a multi-turn conversation; inline gives you a single completion.
For autonomous multi-file tasks where you want to describe the outcome and let an agent work across your codebase, consider Cursor — our Cursor prompt generator is built for that workflow.
Frequently asked questions
What are GitHub Copilot prompts?
GitHub Copilot prompts are the comments and code context above your cursor that Copilot reads to generate completions. Specific, detailed comments produce more accurate code. Vague comments produce generic output.
How do I write better GitHub Copilot prompts?
Include the function purpose, input types, edge cases to handle, expected output format, and constraints. The more context you provide in the comment, the more accurate the completion. Think of it as writing a mini-spec, not a wish.
GitHub Copilot vs Cursor — which should I use?
GitHub Copilot is best for inline code completions as you type — fast, lightweight, integrated into VS Code and other IDEs. Cursor is better for autonomous multi-file tasks where you describe what you want changed and the agent works across your codebase. Many developers use both: Copilot for daily coding flow, Cursor for larger refactors and feature work.
Write better prompts for any AI coding tool
Structured prompts get better results from every model. Try our free generators.