Codex prompts: how to get the most from OpenAI Codex
OpenAI relaunched Codex in mid-2025 as a terminal-based agentic coding tool, not a chat model. It reads your codebase, executes tasks autonomously across multiple files, runs tests, and commits. A Codex prompt is a task instruction, not a conversation turn. This guide covers what makes a Codex prompt effective and provides 20 real prompts you can copy and adapt.
What makes a good Codex prompt
A good Codex prompt specifies scope, the files involved, acceptance criteria, and constraints. Unlike a ChatGPT prompt, Codex prompts are task instructions for an autonomous agent — vague prompts produce unexpected changes across your codebase.
The four elements every Codex prompt needs:
Scope
Which files, directories, or modules to touch — and which to leave alone.
Acceptance criteria
What "done" looks like — tests pass, specific behaviour, output format.
Constraints
What not to change, which libraries to use, performance requirements.
Context
Relevant patterns, existing conventions, or documentation to follow.
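To see how the four elements fit together in a single prompt, here is a small sketch that assembles them into one task instruction. Everything in it is illustrative: the `CodexTask` shape, `renderPrompt` helper, and file paths are not part of Codex, just a way to visualise the structure.

```typescript
// Hypothetical sketch: the four elements assembled into one task prompt.
// All file paths and names here are illustrative, not from a real project.
interface CodexTask {
  scope: string;       // which files to touch, and which to leave alone
  acceptance: string;  // what "done" looks like
  constraints: string; // what not to change
  context: string;     // conventions or docs to follow
}

function renderPrompt(task: CodexTask): string {
  return [task.scope, task.acceptance, task.constraints, task.context].join(" ");
}

const prompt = renderPrompt({
  scope: "Add input validation to src/api/createUser.ts only.",
  acceptance: "Invalid emails return 400 and all existing tests pass.",
  constraints: "Use the zod schemas in src/schemas/; add no new dependencies.",
  context: "Follow the error-response format used in src/api/createOrder.ts.",
});
```

The resulting string is an ordinary prompt you would paste into the Codex terminal; the point is that each sentence answers one of the four questions above.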
20 Codex prompts across 4 use cases
Feature implementation
"Add rate limiting middleware to src/middleware/. Limit each IP to 100 requests per 15 minutes, return 429 with a Retry-After header, store state in Redis. Don't modify existing middleware files."
"Implement CSV export for the reports table in src/pages/reports.tsx. Add an 'Export CSV' button next to the existing filters. Use the server-side data, not the client-side table state. Handle up to 50k rows without blocking the UI."
"Add email verification to the registration flow in src/auth/. Send a verification link via the existing SendGrid integration, store the token hash in a new verification_tokens table, and block login until verified. Update the Prisma schema."
"Implement dark mode toggle in the settings page. Use CSS variables defined in globals.css, persist the preference in localStorage, and respect the system preference on first visit. Update all components that use hardcoded colour values."
"Add pagination to the /api/users endpoint. Accept page and limit query params, default to page=1 and limit=20. Return total count and total pages in the response. Update the corresponding React component to render page controls."
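To make the acceptance criteria in the first prompt above concrete, here is a minimal sketch of the fixed-window rate-limit decision it describes: 100 requests per IP per 15 minutes, with a Retry-After value when the limit is hit. The real task stores state in Redis; this sketch uses an in-memory Map, and the `checkLimit` name is illustrative.

```typescript
// In-memory sketch of a fixed-window rate limit: 100 requests per 15 minutes.
// The production version described in the prompt would keep this state in Redis.
const WINDOW_MS = 15 * 60 * 1000;
const LIMIT = 100;

interface Window { start: number; count: number; }
const windows = new Map<string, Window>();

// Returns the Retry-After value in seconds when limited, or null when allowed.
function checkLimit(ip: string, now: number): number | null {
  const w = windows.get(ip);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(ip, { start: now, count: 1 }); // new window for this IP
    return null;
  }
  if (w.count >= LIMIT) {
    return Math.ceil((w.start + WINDOW_MS - now) / 1000); // seconds until reset
  }
  w.count += 1;
  return null;
}
```

Spelling the contract out like this is exactly what a good prompt does in prose: the limit, the window, the 429 response, and the header are all checkable.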
Bug fixing
"Fix the race condition in src/services/orderService.ts where concurrent requests overwrite the cart total. Use an atomic update or pessimistic lock. Don't change the public API surface."
"Fix the memory leak in src/workers/notificationWorker.ts. The event listeners are being added on each reconnect but never removed. Clean up listeners in the disconnect handler. Add a test that verifies listener count stays stable over 100 reconnect cycles."
"Fix the timezone handling in src/utils/scheduler.ts. Events created in UTC+1 are showing an hour late in UTC. Use date-fns-tz to parse and format dates consistently. Ensure all existing tests still pass after the fix."
"Fix the Safari-specific layout break in src/components/Dashboard.tsx. The CSS grid collapses on viewports between 768px and 1024px. Use a flexbox fallback for that breakpoint range. Don't change the desktop or mobile layouts."
"Fix the flaky test in tests/integration/payment.test.ts that fails intermittently on CI. The mock server timing is the likely cause — increase the timeout and add retry logic for the webhook delivery assertion."
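The listener-leak prompt above has a verifiable acceptance criterion (listener count stable over 100 reconnect cycles), which is worth illustrating. A sketch of the fix, assuming a Node.js EventEmitter-style connection; the `NotificationWorker` class and its method names are illustrative:

```typescript
import { EventEmitter } from "node:events";

// Sketch of the listener-leak fix: keep one stable handler reference,
// attach it on connect, and remove that same reference on disconnect,
// so reconnect cycles never accumulate listeners.
class NotificationWorker {
  // Arrow-function field keeps `this` bound and gives off() the same reference.
  private readonly onMessage = (_msg: string): void => {
    // handle the notification
  };

  constructor(private conn: EventEmitter) {}

  connect(): void {
    this.conn.on("message", this.onMessage);
  }

  disconnect(): void {
    // Remove the exact handler added in connect(); an inline closure here
    // would never match and the listener would leak.
    this.conn.off("message", this.onMessage);
  }
}
```

The bug class is common enough that naming it in the prompt ("listeners added on each reconnect but never removed") points the agent straight at the fix.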
Refactoring
"Refactor src/routes/api.ts into separate route modules under src/routes/api/. Split by domain: users, orders, products, auth. Each module gets its own validation schema. Keep all existing endpoints responding on the same paths."
"Convert src/store/ from Redux class-based actions to Redux Toolkit slices. Keep the same state shape and action names so consuming components don't need changes. Run all tests after conversion."
"Replace all instances of moment.js in src/ with date-fns. Update imports, rewrite formatting chains, and ensure the 40+ date displays in the app produce identical output. Remove moment from package.json when done."
"Extract the repeated pagination logic from src/handlers/ into a shared src/utils/pagination.ts helper. Accept page, limit, and total as params, return offset, hasNext, and hasPrev. Update all 8 handlers that duplicate this logic."
"Migrate the database queries in src/reports/ from raw SQL to the existing Prisma ORM. Keep query performance within 10% of the raw SQL baseline. Don't touch the report calculation logic."
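The pagination-extraction prompt above specifies the helper's exact interface, which is what makes it a good refactoring prompt. A sketch of that helper, assuming the `pageInfo` name and a plain result object (both illustrative):

```typescript
// Sketch of the shared helper described in the pagination-extraction prompt:
// page, limit, and total in; offset, hasNext, and hasPrev out.
interface PageInfo {
  offset: number;
  hasNext: boolean;
  hasPrev: boolean;
}

function pageInfo(page: number, limit: number, total: number): PageInfo {
  const offset = (page - 1) * limit; // pages are 1-indexed
  return {
    offset,
    hasNext: offset + limit < total,
    hasPrev: page > 1,
  };
}
```

Because the prompt pins down the parameter and return names, all 8 call sites can be updated mechanically and reviewed at a glance.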
Testing
"Write integration tests for src/routes/auth.ts covering: successful registration, duplicate email rejection, successful login, wrong password rejection, token refresh, and token expiry. Use the existing test database setup in tests/helpers/."
"Write unit tests for src/utils/validators.ts. Cover every exported function with valid inputs, invalid inputs, edge cases (empty strings, unicode, very long values), and the exact error messages returned. Aim for 100% branch coverage."
"Add snapshot tests for all 12 components in src/components/forms/. Render each with default props and with error states. Store snapshots in __snapshots__/ alongside each component."
"Write E2E tests for the checkout flow: add item to cart, proceed to checkout, fill shipping info, select payment, confirm order, verify success page. Use Playwright with the existing config in playwright.config.ts."
"Add load tests for the /api/search endpoint using k6. Test with 50, 200, and 1000 concurrent users. Measure p50, p95, and p99 latency. Write the test script to tests/load/search.js."
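The load-test prompt above asks for p50, p95, and p99 latency, so it helps to be precise about what those numbers mean. A sketch of the nearest-rank percentile calculation over a latency sample (the `percentile` function name is illustrative; k6 computes these for you via its built-in metrics):

```typescript
// Nearest-rank percentile over a sample of latencies in milliseconds:
// p50 is the median, p95/p99 capture tail latency under load.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(0, rank - 1)];
}
```

Stating which percentiles to measure, and at which concurrency levels, turns "load test the endpoint" from a vague wish into a checkable task.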
Codex vs GitHub Copilot — which to use
They serve different purposes. Codex is an autonomous agent — you give it a task and it executes across multiple files, runs tests, and commits. Best for feature implementation, multi-file refactors, and any task where you want to describe the outcome and let the agent work. GitHub Copilot is an inline suggestion tool — it completes the code at your cursor based on context. Best for writing functions line by line, generating boilerplate, and getting suggestions as you type.
Most developers use both: Copilot for the flow of daily coding, Codex for larger autonomous tasks. For a deeper comparison across AI coding tools including Claude Code, see our ChatGPT vs Claude comparison and our GitHub Copilot prompts guide.
Frequently asked questions
What are Codex prompts?
Codex prompts are task instructions for OpenAI's Codex agent — a terminal-based tool that autonomously reads, edits, and tests code across your codebase. Unlike chat prompts, they specify scope, files, constraints, and acceptance criteria.
Is Codex the same as ChatGPT?
No. ChatGPT is a conversational AI that responds to messages. Codex is an agentic coding tool that executes tasks autonomously in your codebase — it reads files, makes edits, runs tests, and commits. The prompting style is completely different.
How do Codex prompts differ from regular prompts?
Regular AI prompts ask for information or generation in a chat interface. Codex prompts are task specifications — they define what to change, where to change it, what constraints to follow, and how to verify the result. Think of them as brief engineering tickets, not questions.
Write better prompts for any AI coding tool
Structured prompts get better results from every model. Try our free generators.