Why Explaining Your AI Corrections Makes Them Stick

Posted on Feb 14

• Originally published at itnext.io

"Rename this to processUserData because we use process[Entity]Data pattern for all transformation functions."

That single word, "because," cut down how often I had to re-correct the AI within a session.

Not across sessions. AI doesn't remember those (I'll cover that in the next article). But within a single session, adding "because" to my corrections changed how the AI applied them.

AI tools are designed to act on available information rather than ask clarifying questions. That's what makes them fast and fluid. You don't get peppered with "Did you mean X or Y?" every time you ask for something.

The tradeoff: AI guesses. Sometimes the guess is wrong, and you correct it.

But here's what I missed at first: when you correct AI, it guesses again. It has to infer what you meant by the correction. Was it a one-off fix, or a general rule to apply going forward?

Without knowing which, AI picks whichever interpretation seems most likely based on its training.

An LLM is fundamentally a pattern-matching system, but it matches against patterns from its training data, not your specific codebase. Without more context, it defaults to what it has seen most often across millions of codebases.

If you say "rename to processUserData," the AI applies the change exactly where you pointed. But when you ask it to create the next transformation function, it might name it handleOrderInfo or convertProductData, matching common patterns from its training rather than your local convention.
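To make the convention concrete, here's a minimal sketch of what a process[Entity]Data pattern might look like. All types and function bodies are hypothetical illustrations, not code from the article:

```typescript
// Hypothetical illustration of a process[Entity]Data naming convention:
// every transformation function takes a raw record and returns a
// normalized shape, and its name is process + Entity + Data.

interface RawUser { id: number; name: string; }
interface User { id: number; displayName: string; }

// Follows the convention: process + User + Data
function processUserData(raw: RawUser): User {
  return { id: raw.id, displayName: raw.name.trim() };
}

interface RawOrder { id: number; total: string; }
interface Order { id: number; total: number; }

// Without the "because we use process[Entity]Data" rationale, an AI
// might name this handleOrderInfo or convertOrderData. With the rule
// stated, the expected name falls out of the pattern: processOrderData.
function processOrderData(raw: RawOrder): Order {
  return { id: raw.id, total: Number(raw.total) };
}
```

The point of stating the rationale is that the rule generalizes: the next entity you add gets a predictable name without another correction.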

Source: Dev.to