I had been building a lot of tools with AI. The kind of weekend projects that used to take me a few days of focused work. The AI gave me working prototypes in an hour. I felt triumphant. Then I tried to add a feature. The code fought back. Every change I made broke something else, and the AI's fixes introduced new entanglements. A dozen prompts in, I was staring at a codebase I no longer understood.
In a well-engineered project, change should become easier over time. Why is the opposite happening with AI-assisted coding?
When we talk about AI coding, we usually talk about velocity. But we rarely talk about the topology of the effort. How does the shape of our struggle change over time?
## The Paradox
After months of observing my own AI-assisted projects and watching others navigate theirs, I saw a pattern: the easier a project feels at the start, the less likely it is to survive to the finish line.
The industry is currently seduced by transactional coding. "Build me an app." "Fix this bug." "Add this feature." It feels like magic because the effort curve starts flat. You get working code almost immediately, and for a while, the changes flow easily.
But this approach is a trap. Years ago, I diagnosed this end state as Kessler Syndrome. Just as space junk begets more space junk until the entire orbital shell becomes unusable, AI-generated shortcuts beget more shortcuts until the entire codebase becomes unmaintainable.
## Effort Over Time
There are two ways to use AI, and they produce opposite effort curves.
| | Top-down approach | Bottom-up approach |
|---|---|---|
| Start | Easy | Hard |
| End | Hard | Easy |
| Effort trend | Exponential growth | Flattens |
The first curve starts flat and ends vertical. You give the AI a high-level goal, and it gives you a solution. It's fast. Intoxicating even. But as you iterate, each change requires more context, more explanation, more correction. The effort climbs until progress halts entirely. I think of this as the mortgage curve. You're making deals with your future self, borrowing time you'll eventually have to repay with interest.
The second curve starts steep and ends flat. You fight the AI to build isolated primitives. Small, single-purpose pieces that don't know about each other. It feels slow and pedantic at first. You're not building an app; you're building the building blocks for an app. But once those primitives exist, complexity flatlines. New features become simple compositions of existing pieces. This is the bottom-up curve, and it requires a kind of discipline that runs counter to everything AI makes easy.
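To make the two curves concrete, here is a minimal TypeScript sketch. The endpoint, field names, and function names are all hypothetical, invented for illustration; the point is the shape, not the specifics.

```typescript
// Top-down: one entangled unit. Fetching, validation, date formatting,
// and rendering are interleaved, so changing any one concern risks
// breaking the others.
async function showUserCardTopDown(userId: string): Promise<string> {
  const res = await fetch(`/api/users/${userId}`); // hypothetical endpoint
  const user = await res.json();
  if (!user.name || !user.joinedAt) throw new Error("malformed user");
  const joined = new Date(user.joinedAt).toLocaleDateString();
  return `<div><h1>${user.name}</h1><p>Joined ${joined}</p></div>`;
}

// Bottom-up: isolated primitives that don't know about each other.
const fetchUser = async (userId: string) =>
  (await fetch(`/api/users/${userId}`)).json();

const formatDate = (iso: string): string =>
  new Date(iso).toLocaleDateString();

const renderCard = (title: string, body: string): string =>
  `<div><h1>${title}</h1><p>${body}</p></div>`;

// The feature is now a composition of existing pieces.
const showUserCard = async (userId: string): Promise<string> => {
  const user = await fetchUser(userId);
  return renderCard(user.name, `Joined ${formatDate(user.joinedAt)}`);
};
```

The bottom-up version costs more lines today, but the next feature gets `fetchUser`, `formatDate`, and `renderCard` for free.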
## Why Easy Fails
One of my favorite talks is Simple Made Easy, by Rich Hickey.
He argues that chasing "easy" often produces "complex", while "simple" often starts out "hard". Consider the definitions:
| Term | Definition | Characteristic |
|---|---|---|
| Simple | One fold, one braid, or one twist; not interleaved. | Lack of entanglement/interleaving |
| Complex | Braided or folded together; interleaved. | Entanglement/interleaving |
| Easy | To lie near; familiar, at hand, or near one's capabilities. | Familiarity, accessibility, capability |
| Hard | Not near; unfamiliar, not at hand, or outside one's capabilities. | Unfamiliarity, inaccessibility, lack of capability |
Part of the problem is how we frame software development itself. We treat it as a one-shot effort rather than a process that unfolds over time. AI training reinforces this bias. Models learn from static snapshots of code, not from the messy reality of evolving business requirements, shifting priorities, and the slow accretion of edge cases that define real-world software.
When you ask an AI to solve a problem top-down, it starts "easy": the AI generates code that "just works". As the code evolves, it becomes interleaved with the process by which you arrived at the final solution. It patches logic to satisfy immediate constraints. It carries the scar tissue of every "no, not like that" instruction you gave it. The code eventually represents the history of your struggle to articulate what you wanted.
As the project grows, these accumulated compromises interact in ways neither you nor the AI anticipated. When you try to change one thing, the tolerance stack-up causes the system to collapse under its own complexity. The debris of previous shortcuts creates a minefield where any new movement triggers a cascade.
The conceptual model that treats coding as a transformation of idea into logic is best described as a hylomorphic process. In his paper, The textility of making, Tim Ingold argues that a better conceptual model for making should account for the process of change and the interaction between maker and material. Top-down AI coding automates the process and disintermediates the interaction to the point where the maker is no longer in touch with the material.
## Future-driven development
The alternative is to treat code not as a sum of decisions, but as a collection of not-yet-decided possibilities, each existing independently of how you arrived at it, yet pointing toward how it might be used in the future.
I call this future-driven development.
In this model, you force the AI to build single-purpose components from the ground up by realizing a specific subset of possibilities. You strip away the "how" and focus entirely on the "what." The isolated concern of each computation. A function that parses dates. A module that handles authentication. Pieces that don't know about each other, that can be understood in isolation, that can be recombined without fear.
This requires a "hard" initial climb, similar to mise en place: you prepare the ingredients before you assemble the dish. But unlike cooking, the coding process never ends. You have to resist the AI's eager offers to wire everything together into a fossilized solution. Exercising such discipline rewards you with locality of behavior. You can open any file and understand it without reading the whole codebase. When requirements change, you don't unbraid the system like tangled yarn. Instead, you metabolize the old and grow the new. The creative process becomes "simple" over time because every change is free of the baggage of the past. Every change becomes a greenfield project. Every component wants to settle into a new home.
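As a sketch of what "metabolize the old and grow the new" can look like, continuing the hypothetical primitives from the earlier sketch (`fetchUser` and `renderCard`): when the requirement shifts from showing a join date to showing account age, nothing existing is edited; a new primitive and a new composition grow beside the old ones.

```typescript
// A new primitive for the new requirement. fetchUser and renderCard
// are the hypothetical primitives from the earlier sketch; neither
// they nor the old composition are touched.
const yearsSince = (iso: string): number =>
  Math.floor(
    (Date.now() - new Date(iso).getTime()) / (365.25 * 24 * 60 * 60 * 1000)
  );

// The old composition keeps working; this one grows alongside it.
const showUserTenure = async (userId: string): Promise<string> => {
  const user = await fetchUser(userId);
  return renderCard(user.name, `Member for ${yearsSince(user.joinedAt)} years`);
};
```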
## Misalignment
I think the root of this problem is a misalignment between what LLMs naturally want to do and what sustainable software requires.
AI is teleological. Goal-obsessed. It wants to bridge the gap between your intent and working output as fast as possible. It naturally gravitates toward integration. Coupling everything together to make the immediate request work. This is what makes it feel magical.
But software is ontological. Structure-obsessed. To be maintainable, it needs isolation. Things must be decoupled so they don't break each other. The AI doesn't care about this because it doesn't have to maintain your code next week. You do.
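One way to picture the isolation that software needs, sketched here with invented names: let pieces depend on a narrow contract instead of on each other, so either side can change without breaking the other.

```typescript
// A narrow contract. The consumer knows what a user source does,
// not where users actually come from.
interface UserSource {
  getUser(id: string): Promise<{ name: string; joinedAt: string }>;
}

// This function only sees the contract, so swapping the data source
// (HTTP client, cache, test stub) cannot break it.
async function greet(source: UserSource, id: string): Promise<string> {
  const user = await source.getUser(id);
  return `Hello, ${user.name}`;
}

// A stub satisfies the same contract: no network, no coupling.
const stubSource: UserSource = {
  getUser: async () => ({ name: "Ada", joinedAt: "1843-01-01" }),
};
```

Left to itself, the AI would happily inline the fetch into `greet`. The contract is the discipline you impose.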
> What I cannot create, I do not understand.
>
> — Richard Feynman
Feynman's words haunt me here. When AI creates code I cannot fully trace, when the solution arrives faster than my comprehension, I lose the capacity to care for what I've made.
Making without caring erodes our purpose. What would a chef, teacher, or parent be if their identity weren't grounded in the responsibility of care? A programmer who doesn't care for their creation is a soulless operator of symbols. A job that AI will gladly take over.
## The Escape
In the famous marshmallow experiment, children were offered a choice: one marshmallow now, or two marshmallows if they could wait fifteen minutes. The ability to delay gratification predicted all sorts of life outcomes. The finding has been debated over the years, but the idea is relevant here and now. We're facing a similar test in how we use AI.
The AI offers us the instant gratification of a working feature. If we take it, we get the sugar rush of progress, followed by the crash of Kessler Syndrome. To succeed, we have to delay gratification. We have to reject the "fully integrated solution" that works for our specific case. We have to insist on the boring, difficult work of building primitives, of keeping their possibilities open for future use, and of keeping our own understanding of those possibilities alive. Only then will we be rewarded with the lasting satisfaction of a simple system that endures.