Lately I’ve been spending time intentionally pushing AI into parts of a system that I know are not clean.
Not to see if it works.
To see where it breaks.
Because that’s where the truth is.
When everything is well structured, AI looks incredible. It moves fast. It produces clean output. It feels like you’re multiplying your effort.
But that’s not the real test.
The real test is what happens when the system isn’t perfect.
When boundaries are unclear.
When responsibilities overlap.
When things have grown over time instead of being designed end to end.
That’s where AI gets interesting.
Not because it fixes it.
Because it exposes it.
You start to see hesitation.
You start to see guesses.
You start to see it follow paths that almost make sense but don’t quite hold together.
And if you’re paying attention, that tells you something important.
It’s not struggling with code.
It’s struggling with the shape of the system.
That’s a useful signal.
It’s like bringing in a really capable contractor and watching where they slow down. They’re not the problem. They’re showing you where the structure isn’t obvious.
I’ve been leaning into that.
Using AI less like a tool to “fix things” and more like a way to surface where understanding breaks down.
Because once you can see that clearly, you can actually do something about it.
AI is great at accelerating clean systems.
But what I’m finding is that it’s even better at revealing where things aren’t as clean as you thought.
And that’s where the real work starts. ☕