Refactoring is a good fit for AI: the behavior should stay the same while structure improves. Use the assistant to propose extractions, renames, and modernization, then review and apply what makes sense.
## Why AI works well for refactoring
Refactoring has clear patterns that AI can recognize and apply:
- Extract function from a code block
- Rename for clarity
- Split a file into smaller modules
- Convert to modern syntax
- Add or improve types
- Remove dead code
Unlike new feature development, refactoring has a clear success criterion: the code behaves exactly the same but is structured better. This makes AI suggestions easier to verify.
## Start with a clear scope
Tell the model exactly what you want:
| Vague request | Clear request |
|---|---|
| "Clean up this file" | "Extract the validation logic from this component into a separate function" |
| "Make this better" | "Split this file into a hook and a presentational component" |
| "Refactor this" | "Rename these variables to match our naming convention" |
Narrow scope keeps the diff manageable and easier to review. Large, vague requests lead to large, hard-to-verify changes.
## Ask for one kind of change at a time
Mixing several types of refactoring in one request increases the chance of mistakes. Separate these into different requests:
- Renames (variable names, function names)
- Extractions (pulling code into functions or files)
- Syntax modernization (async/await, optional chaining)
- Type improvements (adding or fixing types)
- Test additions (adding test coverage)
This makes it easier to run tests after each step. If something breaks, you know which type of change caused it.
## Example workflow
1. Rename variables for clarity. *(Review, run tests, commit.)*
2. Extract validation into a separate function. *(Review, run tests, commit.)*
3. Convert callbacks to async/await. *(Review, run tests, commit.)*
4. Add TypeScript types. *(Review, run tests, commit.)*
Each step is small, verified, and committed. Rolling back any step is easy.
## Run tests before and after
This is critical for safe refactoring:
- Before starting: Have a green test run (or at least a known-good state)
- After each change: Run the same tests
- If something breaks: Fix or revert before continuing
```bash
npm test   # Green before refactoring
# Make AI-suggested change
npm test   # Green after change?
# If not, revert and investigate
```
Without tests, refactoring is risky whether you use AI or not. If the codebase lacks tests, consider adding tests for the areas you will refactor before starting.
## Use AI to suggest, not to decide
AI can propose: "Extract this into a function called validateUserInput"
You decide:
- Is that name right for our codebase?
- Is this the right boundary for the function?
- Should this go in a separate file?
- Does this match our architecture patterns?
Reject or adjust suggestions that do not fit. You own the design; AI speeds up the typing.
### Example interaction
```text
AI:  I suggest extracting lines 45-67 into a function called
     processOrderData and moving it to utils/order.ts

You: Good extraction boundary, but:
     - Name it transformOrderResponse to match other transformers
     - Keep it in this file for now; we can move it later if reused
     - Add JSDoc explaining the input/output contract

AI:  [Updated suggestion with your requirements]
```
## Common refactoring patterns with AI
### Extract function

Prompt:

```text
Extract lines 23-45 into a separate function.
The function should take [these inputs] and return [this output].
Keep it in the same file.
```
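As an illustration, an extraction like this might produce the following. The form shape, field names, and validation rules here are hypothetical; the point is that the extracted function gets a clear input and output while the caller's behavior stays the same:

```typescript
// Hypothetical form shape used for this sketch
interface FormData {
  email: string;
  age: number;
}

// Extracted: validation that previously lived inline in handleSubmit.
// Returns a list of error messages; empty means the form is valid.
function validateForm(form: FormData): string[] {
  const errors: string[] = [];
  if (!form.email.includes("@")) errors.push("invalid email");
  if (form.age < 0) errors.push("invalid age");
  return errors;
}

// Caller keeps the same observable behavior: throw on invalid input
function handleSubmit(form: FormData): void {
  const errors = validateForm(form);
  if (errors.length > 0) throw new Error(errors.join("; "));
  // ...submit logic unchanged
}
```

The extracted function is also easier to unit-test in isolation than the original inline block.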
### Rename for clarity

Prompt:

```text
Rename `data` to `userProfile` and `cb` to `onComplete`
throughout this file. Update all usages.
```
### Split file

Prompt:

```text
Split this file into two:
- UserProfile.tsx: The presentational component (UI only)
- useUserProfile.ts: The hook with data fetching logic
Update imports accordingly.
```
### Modernize syntax

Prompt:

```text
Convert these callback-style functions to async/await.
Maintain the same error handling behavior.
```
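A minimal sketch of what such a conversion looks like. The `fetchUser` callback API below is invented for illustration; the key property to preserve is that errors surface the same way (here, callback errors become promise rejections):

```typescript
// Before (hypothetical): Node-style callback API
function fetchUser(
  id: string,
  cb: (err: Error | null, user?: { name: string }) => void
): void {
  // setTimeout stands in for real I/O in this sketch
  setTimeout(() => cb(null, { name: "Ada" }), 0);
}

// After: a promise wrapper with the same error behavior,
// so callers can use async/await
function fetchUserAsync(id: string): Promise<{ name: string }> {
  return new Promise((resolve, reject) => {
    fetchUser(id, (err, user) => (err ? reject(err) : resolve(user!)));
  });
}

async function showUser(id: string): Promise<void> {
  const user = await fetchUserAsync(id); // rejection becomes a thrown error
  console.log(user.name);
}
```

Wrapping the old API rather than rewriting it in place keeps the diff small and lets callers migrate one at a time.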
### Add types

Prompt:

```text
Add TypeScript types to this function.
Here is the data shape it receives: [paste example]
Infer the return type from the implementation.
```
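For example, given a pasted sample payload, the result might look like this. The `UserProfile` shape and `formatUser` function are hypothetical stand-ins:

```typescript
// Shape derived from a (hypothetical) example payload.
// email was absent in some sample records, so it is modeled as optional;
// worth confirming against the real data source.
interface UserProfile {
  id: number;
  name: string;
  email?: string;
}

// The string return type follows directly from the implementation
function formatUser(data: UserProfile): string {
  return data.email ? `${data.name} <${data.email}>` : data.name;
}
```

Review optional fields especially carefully: a field missing from one example payload is not proof it is optional in general.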
### Remove dead code

Prompt:

```text
Identify unused functions and variables in this file.
List them so I can verify before removing.
```
## What to watch for
Even with AI help, watch for these issues:
| Issue | Why it happens | How to catch |
|---|---|---|
| Changed behavior | AI misunderstood the logic | Run tests, manual verification |
| Lost edge cases | AI simplified too much | Review diff carefully, test edge cases |
| Wrong boundaries | AI extracted at wrong point | Check if the new structure makes sense |
| Naming mismatches | AI does not know your conventions | Review names against existing patterns |
| Missing imports | AI forgot to update imports | Linter/compiler errors |
## Modernize syntax with care
Syntax changes can alter behavior in edge cases:
- `async/await` changes error propagation timing
- Optional chaining (`?.`) behaves differently from `&&` guards when the value is `0` or `''`
- Nullish coalescing (`??`) differs from `||` for falsy values
- Arrow functions change `this` binding
When modernizing syntax:
- Understand the difference between old and new patterns
- Run the full test suite
- Do a quick manual check of changed paths
- Review the diff for any behavioral changes
## Handling large files
For very large files, refactor incrementally:
1. Identify extraction targets: Ask AI "What are good candidates for extraction in this file?"
2. Prioritize: Start with the largest or most reused blocks
3. Extract one at a time: One function or module per step
4. Test between each extraction: Ensure nothing breaks
5. Stop when readable: Do not over-extract; aim for clarity
## Summary
AI is effective for refactoring because:
- Patterns are well-defined and recognizable
- Success is measurable (behavior unchanged, structure improved)
- Small, focused changes are easy to generate and verify
To refactor safely with AI:
- Start with a clear, narrow scope
- Ask for one type of change at a time
- Run tests before and after each change
- Use AI to suggest, you decide what to apply
- Modernize syntax carefully, watching for behavioral changes
For more on keeping generated code maintainable, see Keeping AI output maintainable. For review practices, see Reviewing AI-generated code.
