Ganesh Joshi

Using AI for refactoring

February 13, 2026 · 5 min read
AI & coding

Refactoring is a good fit for AI: the behavior should stay the same while structure improves. Use the assistant to propose extractions, renames, and modernization, then review and apply what makes sense.

Why AI works well for refactoring

Refactoring has clear patterns that AI can recognize and apply:

  • Extract function from a code block
  • Rename for clarity
  • Split a file into smaller modules
  • Convert to modern syntax
  • Add or improve types
  • Remove dead code

Unlike new feature development, refactoring has a clear success criterion: the code behaves exactly the same but is structured better. This makes AI suggestions easier to verify.

Start with a clear scope

Tell the model exactly what you want:

  • Instead of "Clean up this file": "Extract the validation logic from this component into a separate function"
  • Instead of "Make this better": "Split this file into a hook and a presentational component"
  • Instead of "Refactor this": "Rename these variables to match our naming convention"

Narrow scope keeps the diff manageable and easier to review. Large, vague requests lead to large, hard-to-verify changes.

Ask for one kind of change at a time

Mixing multiple refactoring types in one request increases the chance of mistakes.

Separate these into different requests:

  • Renames (variable names, function names)
  • Extractions (pulling code into functions or files)
  • Syntax modernization (async/await, optional chaining)
  • Type improvements (adding or fixing types)
  • Test additions (adding test coverage)

This makes it easier to run tests after each step. If something breaks, you know which type of change caused it.

Example workflow

Step 1: Rename variables for clarity
[Review, run tests, commit]

Step 2: Extract validation into a separate function
[Review, run tests, commit]

Step 3: Convert callbacks to async/await
[Review, run tests, commit]

Step 4: Add TypeScript types
[Review, run tests, commit]

Each step is small, verified, and committed. Rolling back any step is easy.

Run tests before and after

This is critical for safe refactoring:

  1. Before starting: Have a green test run (or at least a known-good state)
  2. After each change: Run the same tests
  3. If something breaks: Fix or revert before continuing
npm test         # Green before refactoring
# Make AI-suggested change
npm test         # Green after change?
# If not, revert and investigate

Without tests, refactoring is risky whether you use AI or not. If the codebase lacks tests, consider adding tests for the areas you will refactor before starting.
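One cheap way to add that safety net is a characterization test: record what the code does today, including edge cases, before AI touches it. A minimal sketch, using a made-up `formatPrice` function standing in for whatever you plan to refactor:

```typescript
// Hypothetical function about to be refactored.
function formatPrice(cents: number): string {
  if (cents < 0) return "-$" + (Math.abs(cents) / 100).toFixed(2);
  return "$" + (cents / 100).toFixed(2);
}

// Characterization tests: pin down current behavior, edge cases included.
// After each AI-suggested change, these must still pass unchanged.
const cases: Array<[number, string]> = [
  [0, "$0.00"],
  [1999, "$19.99"],
  [-50, "-$0.50"],
];

for (const [input, expected] of cases) {
  const actual = formatPrice(input);
  if (actual !== expected) {
    throw new Error(`formatPrice(${input}): expected ${expected}, got ${actual}`);
  }
}
```

The point is not thorough coverage; it is freezing today's behavior so any drift during refactoring is caught immediately.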

Use AI to suggest, not to decide

AI can propose: "Extract this into a function called validateUserInput"

You decide:

  • Is that name right for our codebase?
  • Is this the right boundary for the function?
  • Should this go in a separate file?
  • Does this match our architecture patterns?

Reject or adjust suggestions that do not fit. You own the design; AI speeds up the typing.

Example interaction

AI: I suggest extracting lines 45-67 into a function called 
    `processOrderData` and moving it to utils/order.ts

You: Good extraction boundary, but:
     - Name it `transformOrderResponse` to match other transformers
     - Keep it in this file for now; we can move it later if reused
     - Add JSDoc explaining the input/output contract

AI: [Updated suggestion with your requirements]

Common refactoring patterns with AI

Extract function

Prompt: Extract lines 23-45 into a separate function. 
        The function should take [these inputs] and return [this output].
        Keep it in the same file.
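The result of a prompt like this might look as follows. The function names and validation rules are invented for illustration; the key property is that before and after return identical results:

```typescript
// Before: validation inline in a larger handler.
function handleSubmitBefore(form: { email: string; age: number }): string {
  if (!form.email.includes("@")) return "invalid email";
  if (form.age < 0 || form.age > 150) return "invalid age";
  return "ok";
}

// After: the same checks extracted into a named function, same file.
function validateForm(form: { email: string; age: number }): string | null {
  if (!form.email.includes("@")) return "invalid email";
  if (form.age < 0 || form.age > 150) return "invalid age";
  return null;
}

function handleSubmitAfter(form: { email: string; age: number }): string {
  return validateForm(form) ?? "ok";
}
```

For any input, `handleSubmitBefore` and `handleSubmitAfter` must agree; that equivalence is what your tests verify.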

Rename for clarity

Prompt: Rename `data` to `userProfile` and `cb` to `onComplete` 
        throughout this file. Update all usages.
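A before/after sketch of that rename, with invented function bodies; only the parameter names change between the two versions:

```typescript
// Before: opaque names.
function notifyBefore(data: { name: string }, cb: (msg: string) => void): void {
  cb("Hello, " + data.name);
}

// After: intent-revealing names, identical behavior.
function notifyAfter(
  userProfile: { name: string },
  onComplete: (msg: string) => void
): void {
  onComplete("Hello, " + userProfile.name);
}
```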

Split file

Prompt: Split this file into two:
        - UserProfile.tsx: The presentational component (UI only)
        - useUserProfile.ts: The hook with data fetching logic
        
        Update imports accordingly.

Modernize syntax

Prompt: Convert these callback-style functions to async/await.
        Maintain the same error handling behavior.
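A hypothetical conversion of this kind; `fetchUser` is an invented stand-in for real I/O, and the two versions handle errors the same way:

```typescript
// Invented stand-in for real I/O.
function fetchUser(id: number): Promise<{ id: number; name: string }> {
  return Promise.resolve({ id, name: "user-" + id });
}

// Before: promise chaining with explicit error handling.
function getUserNameBefore(id: number): Promise<string> {
  return fetchUser(id)
    .then((user) => user.name)
    .catch(() => "unknown");
}

// After: async/await with the same error handling behavior.
async function getUserNameAfter(id: number): Promise<string> {
  try {
    const user = await fetchUser(id);
    return user.name;
  } catch {
    return "unknown";
  }
}
```

When reviewing a conversion like this, check that the `catch` covers exactly the same calls as before; a `try` block that is too wide or too narrow silently changes which errors are swallowed.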

Add types

Prompt: Add TypeScript types to this function. 
        Here is the data shape it receives: [paste example]
        Infer the return type from the implementation.
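What the result might look like, with an invented `Order` shape; the untyped original would have been something like `function summarizeOrder(order) { ... }`:

```typescript
// Types derived from the pasted data shape.
interface Order {
  id: string;
  items: Array<{ sku: string; qty: number; priceCents: number }>;
}

// Return type written out explicitly so reviewers can check it against
// what the compiler would infer from the implementation.
function summarizeOrder(order: Order): { itemCount: number; totalCents: number } {
  const itemCount = order.items.reduce((n, item) => n + item.qty, 0);
  const totalCents = order.items.reduce(
    (n, item) => n + item.qty * item.priceCents,
    0
  );
  return { itemCount, totalCents };
}
```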

Remove dead code

Prompt: Identify unused functions and variables in this file.
        List them so I can verify before removing.

What to watch for

Even with AI help, watch for these issues:

  • Changed behavior: AI misunderstood the logic. Catch it by running tests and manual verification.
  • Lost edge cases: AI simplified too much. Review the diff carefully and test edge cases.
  • Wrong boundaries: AI extracted at the wrong point. Check whether the new structure makes sense.
  • Naming mismatches: AI does not know your conventions. Review names against existing patterns.
  • Missing imports: AI forgot to update imports. Linter and compiler errors will flag these.

Modernize syntax with care

Syntax changes can alter behavior in edge cases:

  • async/await changes error propagation timing
  • Optional chaining (`?.`) behaves differently from an `x && x.prop` guard when `x` is a non-nullish falsy value like 0 or ''
  • Nullish coalescing (`??`) differs from `||` for falsy values such as 0 and ''
  • Arrow functions change `this` binding

When modernizing syntax:

  1. Understand the difference between old and new patterns
  2. Run the full test suite
  3. Do a quick manual check of changed paths
  4. Review the diff for any behavioral changes

Handling large files

For very large files, refactor incrementally:

  1. Identify extraction targets: Ask AI "What are good candidates for extraction in this file?"
  2. Prioritize: Start with the largest or most reused blocks
  3. Extract one at a time: One function or module per step
  4. Test between each extraction: Ensure nothing breaks
  5. Stop when readable: Do not over-extract; aim for clarity

Summary

AI is effective for refactoring because:

  • Patterns are well-defined and recognizable
  • Success is measurable (behavior unchanged, structure improved)
  • Small, focused changes are easy to generate and verify

To refactor safely with AI:

  1. Start with a clear, narrow scope
  2. Ask for one type of change at a time
  3. Run tests before and after each change
  4. Use AI to suggest, you decide what to apply
  5. Modernize syntax carefully, watching for behavioral changes

For more on keeping generated code maintainable, see Keeping AI output maintainable. For review practices, see Reviewing AI-generated code.

Frequently Asked Questions

Is refactoring a good use case for AI?

Yes. Refactoring is a good fit for AI because behavior should stay the same while structure improves. AI can suggest extractions, renames, and syntax modernization while you verify nothing breaks.

How do I refactor safely with AI?

Have a green test run before starting. Ask for one type of change at a time. Run tests after each change. Review every suggestion before applying. Keep changes small and focused.

Should AI decide how my code is structured?

No. Let AI suggest implementations like 'extract this into a function called X,' but you decide if that name and boundary fit your codebase. You own the design; AI speeds up the typing.

Which refactoring tasks work best with AI?

Extracting functions, renaming for clarity, converting syntax (async/await, optional chaining), splitting large files, adding types, and removing dead code. These have clear patterns and are easy to verify.

How do I avoid breaking behavior?

Run tests before and after each change. Make one type of change at a time. Review diffs carefully. If something breaks, revert and retry with a smaller scope. Never refactor without tests.
