Ganesh Joshi

Writing better prompts for code

February 13, 2026 · 5 min read
AI & coding

The quality of AI-generated code often depends on how you ask. Vague prompts lead to generic or wrong answers; specific ones lead to code you can use with minimal edits. Here is how to write prompts that get better results.

The core principle

Reduce ambiguity. The more specific you are, the better the output.

AI fills in gaps with assumptions. If you do not specify the language, it guesses. If you do not mention your framework version, it uses whatever pattern it thinks is common. If you do not explain constraints, it ignores them.

Your job is to close those gaps with explicit information.

Specify language and runtime

Always mention the language and, if it matters, the version or runtime.

Vague prompt → specific prompt:

  • "Write a function to parse this" → "Write a TypeScript function to parse this JSON"
  • "Create a React component" → "Create a React 18 functional component with hooks"
  • "Make an API endpoint" → "Create a Next.js 14 App Router API route"
  • "Write a database query" → "Write a PostgreSQL query using Prisma ORM"

Specifying the environment avoids outdated patterns (like class components when you want hooks) and ensures the code runs in your setup.

Include constraints and rules

Spell out what the code must or must not do:

Technical constraints:

  • "No external dependencies"
  • "Must work in Node 18"
  • "Use only the standard library"
  • "Return an object with keys success and data"

Project conventions:

  • "No class components"
  • "Use our existing api client from @/lib/api"
  • "Follow our naming convention: the use prefix for hooks"
  • "Error messages should be user-friendly strings"

Performance and security:

  • "Must handle arrays of up to 100,000 items"
  • "Do not log sensitive data"
  • "Validate all input before processing"

The model cannot guess your project's rules. State them explicitly.

Example: constraints in action

Prompt: Write a TypeScript function that validates an email address.

Constraints:
- Do not use external libraries (use regex only)
- Return { valid: boolean, error?: string }
- Error messages should be user-friendly
- Handle edge cases: empty string, null, undefined

This prompt gives clear boundaries. The output will match your expectations much better than "write email validation."
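As a sketch, a response that satisfies those constraints might look like the following. The function name is illustrative, and the regex is deliberately a simple shape check rather than a full RFC 5322 implementation:

```typescript
type ValidationResult = { valid: boolean; error?: string };

// Illustrative sketch: regex-only email validation matching the constraints above.
function validateEmail(input: string | null | undefined): ValidationResult {
  // Edge cases: null, undefined, empty or whitespace-only string
  if (input == null || input.trim() === '') {
    return { valid: false, error: 'Please enter an email address.' };
  }
  // Simple shape check: something@something.tld (not full RFC 5322)
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!pattern.test(input)) {
    return { valid: false, error: 'That email address does not look valid.' };
  }
  return { valid: true };
}
```

Note how every constraint maps to a visible feature of the code: no imports, the exact return shape, friendly error strings, and explicit null/empty handling.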

Paste relevant context

A short snippet of your types, function signatures, or existing code helps significantly:

Instead of: "Write a function that filters users by role"

Try:

Here is my User type:

type User = {
  id: string;
  name: string;
  email: string;
  role: 'admin' | 'editor' | 'viewer';
};

Write a function that takes User[] and a role, returns users with that role.

Context reduces wrong assumptions. The AI sees the exact type shape, not a guess.
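With the type in hand, the output is likely close to this sketch (the function name is an assumption, not part of the prompt):

```typescript
type User = {
  id: string;
  name: string;
  email: string;
  role: 'admin' | 'editor' | 'viewer';
};

// Returns only the users whose role matches the given role.
// User['role'] reuses the union from the pasted type, so invalid roles fail at compile time.
function filterUsersByRole(users: User[], role: User['role']): User[] {
  return users.filter((user) => user.role === role);
}
```

Because the prompt included the exact type, the role parameter can be typed as `User['role']` instead of a loose `string`.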

What context to include

Context type → when to include:

  • Type definitions: when working with typed data
  • Function signatures: when extending or modifying existing code
  • Existing code patterns: when you want consistency with your codebase
  • Error messages: when debugging
  • Framework configuration: when behavior depends on settings

Break big tasks into steps

Large prompts lead to large, hard-to-verify outputs. Break work into focused requests:

Monolithic prompt: "Build a full auth flow"

Broken into steps:

  1. "Write a function that validates this JWT and returns the payload"
  2. "Create a React hook that calls that function and returns user and loading state"
  3. "Write a middleware that protects routes using this validation"

Step-by-step prompts:

  • Keep the model focused on one task
  • Make it easier to verify each piece
  • Allow you to correct errors before they compound
  • Give you natural checkpoints

Iterate with follow-ups

If the first output is close but wrong, reply with a precise correction:

Good follow-ups:

  • "Use optional chaining on line 5"
  • "This should handle an empty array; return [] instead of undefined"
  • "Rename getData to fetchUserProfile to match our convention"
  • "Add a try/catch block around the fetch call"

Less helpful follow-ups:

  • "This is wrong" (too vague)
  • "Make it better" (unclear direction)
  • "Fix it" (no information about what to fix)

Reference line numbers or paste the wrong part and say what should change. Follow-ups are part of prompt craft; use them to steer the result.
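For example, applying the follow-up "add a try/catch block around the fetch call" to a hypothetical fetchUserProfile helper (the name and endpoint are illustrative) might produce:

```typescript
// Hypothetical helper after the follow-up "add a try/catch around the fetch call".
async function fetchUserProfile(id: string): Promise<unknown> {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) return null;
    return await res.json();
  } catch {
    return null; // a network or URL failure no longer crashes the caller
  }
}
```

The precise wording of the follow-up told the model exactly which call to wrap, so the rest of the function stays untouched.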

Common prompt mistakes

Mistake → example → fix:

  • No language specified: "Write a function to sort this" → "Write a TypeScript function to sort this by date"
  • No framework version: "Create a React component" → "Create a React 18 functional component with hooks"
  • No constraints: "Write email validation" → "Write email validation using regex, no dependencies, return { valid, error }"
  • No context: "Filter users by role" → "Given this User type, filter users by role..."
  • Too big: "Build the entire checkout flow" → break into payment validation, cart summary, and order creation
  • Vague corrections: "This is wrong, fix it" → "Line 8 should handle null input by returning early"

Prompt templates

Here are templates for common scenarios:

New function

Language: TypeScript
Framework: [framework/none]

Task: Write a function that [description].

Input: [describe input types]
Output: [describe return type]

Constraints:
- [constraint 1]
- [constraint 2]

Context:
[paste relevant types or code]

Modify existing code

Here is my current code:

[paste code]

Change: [describe what to modify]

Constraints:
- Keep existing behavior for [X]
- [additional constraints]

Debug issue

This code throws an error:

[paste code]

Error message: [paste error]

Expected behavior: [describe what should happen]
Actual behavior: [describe what happens]

Summary

Better prompts lead to better code. To write effective prompts:

  1. Specify language and runtime: "TypeScript with Node 20" not just "JavaScript"
  2. Include constraints: Dependencies, conventions, performance requirements
  3. Paste context: Types, signatures, existing code patterns
  4. Break into steps: Small, focused requests are easier to verify
  5. Iterate precisely: Reference specific lines and describe exact fixes

For more habits that work with AI coding tools, see AI-assisted coding: practical tips and Pair programming with AI.

Frequently Asked Questions

How do I write better prompts for AI coding tools?

Be specific about language, framework, and version. Include constraints and rules. Paste relevant context like types or existing code. Break big tasks into steps. Iterate with follow-up corrections.

Why do vague prompts produce worse code?

Vague prompts lead to generic answers. The AI guesses at language, framework version, and patterns. Specifying 'TypeScript with React 18 hooks' instead of just 'React' gets more accurate results.

Should I paste my own code into the prompt?

Yes. Including types, function signatures, or existing code reduces wrong assumptions. 'Here is my User type; write a filter function' is more reliable than describing the shape in words.

What should I do when the first answer is close but wrong?

Use precise follow-up corrections. Reference line numbers or paste the wrong part. Say what should change: 'use optional chaining here' or 'handle the empty array case.' Follow-ups are part of prompt craft.

Is one big prompt better than several small ones?

Multiple small tasks work better. Break 'build full auth flow' into 'validate this JWT' then 'create a React hook for user state.' Smaller prompts are more focused, easier to verify, and easier to correct.
