Are We Entering the Era of Fragile Software?

AI allows us to ship faster than we can understand. We’re building more experiments, more tools, and more "good enough" solutions—but are we also building a house of cards? A look at the rising economics of fragility in the age of AI-generated code.

In the previous article, I explored how AI is changing the economics of building software.

It’s becoming cheaper. Faster. Easier to experiment. That leads to more software. More ideas. More “good enough” solutions.

But it also raises a question:

If we optimize for speed and quantity… are we also accepting more fragility?

When “Good Enough” Becomes Permanent

A lot of software doesn’t start as “production-ready.”

It starts as:

  • A quick prototype
  • A proof of concept
  • A small internal tool

Something to test an idea.

With AI, getting to that first version is easier than ever. What used to take days now takes hours. So we build more of these “first versions.”

And sometimes… they stay.

That quick API becomes part of a workflow. That internal tool gets shared across teams. That prototype quietly becomes production.

Not because it was designed to last. But because it worked.

“Good enough” has a way of becoming permanent...

Fragility Isn’t Always Visible

Fragile software doesn’t always look broken.

In fact, most of the time, it works. The UI looks clean. The API responds. The tests pass.

Until something changes.

A new edge case.
More load.
A slightly different use case.

And then things start to crack.

With AI in the loop, this becomes harder to see.

You generate code that looks correct. It integrates. It runs. It passes basic checks.

But you might not fully understand:

  • Why it works
  • Where it might fail
  • What assumptions are baked in

It’s like conducting a piece where parts of the orchestra are playing from sheet music you’ve never seen.

Everything sounds fine—until it doesn’t...

Why This Is Happening

This shift isn’t just technical; it’s economic and organizational.

If AI allows teams to move faster, expectations change:

  • Teams ship more with the same number of developers
  • Or teams get smaller, and are expected to maintain the same velocity

In both cases, something has to give.

Often, that “something” is depth.

Less time spent thinking through edge cases.
Less time building solid foundations.
More reliance on generated code and quick iterations.

Fragility isn’t an accident. It’s a side effect of the system we’re optimizing for.

In a way, this isn’t entirely new. We’ve seen similar trade-offs with outsourcing—moving faster or cheaper, but sometimes losing deep ownership of the system.

And while we reduce the cost of human effort, we’re often shifting that cost elsewhere—into compute, infrastructure, and energy that we don’t directly see.

Not All Fragility Is Bad

It’s important to be precise here. Fragility isn’t always a problem. Some software is meant to be temporary:

  • Internal tools
  • One-off scripts
  • Early-stage experiments

In those cases, speed matters more than durability. You don’t need a full symphony for a quick jam session.

The problem isn’t fragile software. The problem is fragile software in places where it shouldn’t be fragile.

Where Fragility Becomes Dangerous

There are systems where “good enough” isn’t good enough. In these contexts, fragility has real consequences.

  • The Silent Data Leak: An AI generates a database query that works perfectly in testing. But because it doesn’t account for specific multi-tenancy scoping, a user in Account A can suddenly see a summary of data from Account B. It isn't a "bug"—the code runs perfectly—it's a lack of architectural context.
  • The Infinite Loop (ReDoS): You ask for a regex to validate emails. The AI gives you one that passes every test case you have. But when a bot submits a string of 1,000 characters, the regex engine hits catastrophic backtracking, spiking your CPU to 100% and taking the service offline.
  • The "Hallucinated" Edge Case: An AI suggests a clever optimization for a payment retry flow. It looks elegant, but it fails to account for idempotency keys. A network hiccup happens, the retry triggers, and suddenly a customer is charged twice because the AI optimized for "clean code" over "distributed systems safety."
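The first scenario is worth making concrete. Here's a minimal sketch of the missing-scoping problem, using a hypothetical `invoices` table and account names invented for illustration. Both queries "work," and a single-tenant test would pass either one:

```python
import sqlite3

# Hypothetical multi-tenant schema: every row belongs to an account.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, account_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "account_a", 100.0), (2, "account_b", 250.0)],
)

def invoice_total_unscoped() -> float:
    # Looks correct, runs without error, passes a single-tenant test --
    # but silently sums every tenant's data.
    return conn.execute("SELECT SUM(amount) FROM invoices").fetchone()[0]

def invoice_total_scoped(account_id: str) -> float:
    # The missing architectural context: in a multi-tenant system,
    # every query must be filtered by the requesting tenant.
    return conn.execute(
        "SELECT SUM(amount) FROM invoices WHERE account_id = ?",
        (account_id,),
    ).fetchone()[0]
```

Nothing in the unscoped version is a syntax error or a crash. The fragility only shows up when Account A's dashboard quietly includes Account B's numbers.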
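The retry scenario has a well-known fix: make the charge idempotent. This is a toy in-memory sketch (the store, function names, and key format are all invented for illustration; a real system would persist keys in a database), showing why a retry with the same key must return the original charge instead of creating a new one:

```python
import uuid

# Hypothetical in-memory payment processor, for illustration only.
_processed: dict = {}  # idempotency_key -> charge_id
charges: list = []     # (charge_id, amount_cents)

def charge(amount_cents: int, idempotency_key: str) -> str:
    """Charge at most once per idempotency key.

    If a retry arrives with a key we've already seen (say, after a
    network hiccup), return the original charge rather than charging
    the customer a second time.
    """
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    charge_id = uuid.uuid4().hex
    charges.append((charge_id, amount_cents))
    _processed[idempotency_key] = charge_id
    return charge_id
```

A retry that omits the key check is the "elegant" version the AI might suggest: shorter, cleaner, and wrong in exactly the way that only surfaces in production.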

This is where AI can create a false sense of confidence.

It can generate solutions that look complete.
That pass initial tests.
That feel production-ready.

But without deep understanding, it’s easy to miss edge cases, failure modes, and long-term behavior.

The risk isn’t just bugs. It’s systems that fail in ways we didn’t anticipate—and don’t immediately understand.

The Role of the Developer

AI changes how we build.

It doesn’t remove responsibility for what we build. If anything, it increases it. Because now:

  • You can generate more than you can fully review
  • You can ship faster than you can fully understand
  • You can build systems without ever holding the full picture in your head

So the role shifts.

Less time writing every line. More time asking:

  • Do I understand this system?
  • Where can this break?
  • What happens under pressure?

The conductor doesn’t play every instrument.

But they are still responsible for how it sounds...

So… Are We Entering a Fragile Era?

Probably, yes.

We’re building faster.
We’re building more.
And not all of it is built to last.

But that’s only part of the story.

We’re also building:

  • More experiments
  • More useful tools
  • More opportunities to learn

Fragility is the trade-off. The question isn’t whether it exists. It’s whether we manage it intentionally.

Where do we accept it?
Where do we fight it?
Where do we redesign before it becomes a problem?

Understanding Matters More Than Perfection

Software doesn’t need to be perfect. But it does need to be understood.

And as we rely more on AI to generate what we build, that understanding becomes the difference between something that works today… and something that keeps working tomorrow.

That’s been the thread running through all of this.

Not whether AI is good or bad.
Not whether it makes us faster.

But what it changes about the way we build—and what that means for the systems we leave behind.

We’re not just writing code anymore.
We’re shaping how software is created.

And that responsibility doesn’t go away. If anything, it becomes more important.

Clicky