AI can produce code that looks right but is wrong in subtle ways. A short review step before you merge catches most issues and keeps the bar high. Use this checklist for any generated code that touches behavior or dependencies.
## Why review AI-generated code?
AI coding assistants are useful but imperfect. They can:
- Use wrong or deprecated API names
- Miss edge cases and error handling
- Introduce security vulnerabilities
- Ignore your project's conventions and patterns
- Generate code that compiles but behaves incorrectly
Reviewing AI output is not optional. Treat every suggestion as a draft that needs verification before it becomes part of your codebase.
## Checklist: what to verify
Here is a practical checklist for reviewing AI-generated code:
### 1. Check APIs and library versions
AI models are trained on code written against many different library versions, so they may suggest deprecated or incorrect methods. To verify:
- Open the official docs for the library and version you use.
- Search for function/class names in the generated code. Confirm they exist and have the expected signatures.
- Check deprecation warnings: If the IDE or linter flags a method, find the current alternative.
- Verify behavior: Even if the method exists, make sure it does what you expect. Some APIs have changed behavior between versions.
This step catches many common AI mistakes. A quick docs search takes seconds and prevents hard-to-debug issues later.
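As a concrete illustration, `String.prototype.substr` is a real JavaScript deprecation that assistants trained on older code still produce. The sketch below (the variable names are hypothetical) shows the kind of swap a quick docs check leads to:

```javascript
// Illustrative only: `substr` is deprecated in JavaScript, but AI
// assistants trained on older code still suggest it.
const id = "user-12345";

// AI draft (deprecated):
// const num = id.substr(5, 5);

// Current equivalent, confirmed against the docs:
const num = id.slice(5, 10);
console.log(num); // "12345"
```

Both calls happen to return the same string here, which is exactly why deprecated APIs slip through: only a docs lookup or a linter warning flags them.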
### 2. Test edge cases and empty input
Generated code often handles the happy path only. AI may assume all inputs are valid, arrays are non-empty, and objects have all properties. To verify:
- Try empty or missing values: Pass empty arrays, null, undefined, and objects with missing fields. Does the code handle them gracefully?
- Try invalid input: What happens with wrong types, negative numbers, or malformed data?
- Add tests: Write at least one test for an edge case. If the code breaks, fix it before merging.
Edge-case bugs are common in AI output. A few minutes of testing saves hours of debugging later.
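A minimal sketch of the pattern, using a hypothetical `average` helper: an AI draft often divides by the array length without checking it, producing `NaN` on empty input.

```javascript
// Hypothetical helper: average of an array of numbers.
// An unguarded draft would return NaN for [] (division by zero)
// and throw for null input; guard both explicitly.
function average(values) {
  if (!Array.isArray(values) || values.length === 0) return 0; // edge cases
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

console.log(average([2, 4, 6])); // 4
console.log(average([]));        // 0 instead of NaN
console.log(average(null));      // 0 instead of a TypeError
```

Whether the fallback should be `0`, `null`, or a thrown error depends on your domain; the point is to decide deliberately rather than inherit whatever the model assumed.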
### 3. Look for security issues
AI can introduce subtle security problems. Review carefully for:
- Input validation: Is user input validated or sanitized before use?
- Injection vulnerabilities: SQL, command, or code injection. Are queries parameterized? Are shell commands properly escaped?
- Sensitive data handling: Does the code log, expose, or transmit sensitive data inappropriately?
- Authentication and authorization: Are access checks correct? Are all branches and error paths handled?
- Cryptography: AI often makes cryptographic mistakes. Do not trust AI for crypto code without expert review.
For auth, payment, or privacy-related code, review every line. Get a second opinion if you are unsure.
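The validation and parameterization points can be sketched together. This is a hypothetical helper, not a real driver API, and the `?` placeholder syntax varies by database library:

```javascript
// Sketch: validate input, then pass it as a bound parameter
// rather than interpolating it into the query string.
function buildUserQuery(userId) {
  // Reject anything that is not purely numeric before it goes anywhere.
  if (!/^\d+$/.test(String(userId))) {
    throw new Error("invalid user id");
  }
  // The driver binds params separately, so input never becomes SQL text.
  return { text: "SELECT name FROM users WHERE id = ?", params: [String(userId)] };
}

console.log(buildUserQuery(42)); // safe: params bound, input validated
```

An AI draft that writes `` `... WHERE id = ${userId}` `` instead would pass every happy-path test and still be injectable; this is the kind of line-by-line check the section above asks for.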
### 4. Run the code and the test suite
Do not merge without running the code. To verify:
- Run the app or the relevant feature. Does it work as expected?
- Run the test suite. Do existing tests pass?
- Add new tests for the new behavior. If there are no tests yet, add at least one or two for the main path and an edge case.
If tests fail, investigate and fix before merging. Tests are your safety net; skipping them defeats the purpose of review.
### 5. Match your project's style and conventions
AI does not know your project's conventions. To verify:
- Naming: Do variable, function, and file names match your patterns?
- File layout: Is the code in the right module or folder?
- Patterns: Does the code follow your architectural patterns (e.g., hooks, services, utilities)?
- Comments: Remove placeholder or generic comments. Add meaningful ones where needed.
- Linting and formatting: Run your linter and formatter. Fix any issues.
Treat AI output as a draft that you polish to match your standards. Consistent style makes the codebase easier to maintain.
## Common AI mistakes to watch for
Here are patterns that AI assistants often get wrong:
| Mistake | Example | How to catch |
|---|---|---|
| Wrong API name | `array.flat()` vs `array.flatten()` | Docs lookup, IDE autocomplete |
| Deprecated method | Using `componentWillMount` in React | Linter warnings, docs |
| Missing null check | Accessing `user.name` without checking `user` | Manual review, tests |
| Unhandled promise | Async function without `await` or `.catch()` | Linter, runtime errors |
| Hardcoded secrets | API key in source code | Security scan, manual review |
| Off-by-one errors | Loop bounds, array indexing | Unit tests, edge-case testing |
If you see these patterns in AI output, fix them before merging.
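Two of the table's patterns, fixed side by side in a short sketch (`displayName` and the lookup are hypothetical):

```javascript
// Missing null check: accessing user.name without checking user.
// Optional chaining plus a fallback handles null/undefined safely.
function displayName(user) {
  return user?.name ?? "guest";
}

console.log(displayName({ name: "Ada" })); // "Ada"
console.log(displayName(null));            // "guest"

// Unhandled promise: always attach .catch() (or await in try/catch),
// otherwise a rejection becomes an unhandled-rejection crash.
Promise.resolve(displayName(null))
  .then((name) => console.log(name))
  .catch((err) => console.error("lookup failed:", err));
```

Both fixes are one-liners once spotted, which is why they belong on a checklist rather than in your debugger at 2 a.m.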
## When to be extra careful
Some areas require extra scrutiny:
- Security-sensitive code: Authentication, authorization, input validation, encryption.
- Financial or legal code: Payment processing, compliance, data retention.
- Performance-critical code: Hot paths, database queries, memory-intensive operations.
- External integrations: API calls, webhooks, third-party services.
For these areas, consider pair review or getting a second opinion. AI mistakes in critical code can have serious consequences.
## Building a review habit
Make review a standard part of your workflow:
- Do not merge immediately. Let AI output sit while you review.
- Use a checklist. Keep this or your own checklist handy.
- Add tests. Even one or two tests per feature catches regressions.
- Trust but verify. AI is helpful, but you are responsible for the code you ship.
For more on using AI effectively, see AI-assisted coding: practical tips. For deciding when to use AI at all, see When to use AI vs write it yourself.
