Microservices Are Making Engineers Dumber

Microservices architecture diagram

A pragmatic look at why the industry's favorite architecture pattern might be solving problems you don't have, while creating ones you definitely didn't need.


The Hype Cycle

In the 2010s, Netflix and Amazon told the world they used microservices. The industry went wild. Microservices became synonymous with "serious engineering." Monoliths became "legacy." Almost an insult.

Then the backlash came. Prominent engineers started admitting they'd moved back to monoliths, and that things got better when they did. Amazon Prime Video published a case study about consolidating one of its microservices-based workloads back into a monolith and cutting infrastructure costs by 90%.

So what's actually going on?


What Problem Do Microservices Actually Solve?

Here's the honest answer: they mostly solve an organizational problem, not a technical one.

At Netflix/Amazon/Google scale, you have hundreds of engineers across dozens of teams who can't all coordinate on one codebase without stepping on each other. Different teams need different release schedules. Too much code for any one person to understand.

Microservices let Team A deploy on Tuesday without waiting for Team B. Clear ownership boundaries. Less merge conflict hell.

That's the real win. Everything else is retrofitted justification.


Debunking the Technical Arguments

"We Need Independent Scaling"

The pitch: Your search function gets 100x more traffic than your settings page. With microservices, you scale just search.

The reality: You can scale infrastructure selectively without breaking up your codebase.

And here's the math that gets ignored: when one hot feature forces you to scale the whole monolith, the capacity you "waste" on the quiet features is trivial compared to the engineering cost of maintaining distributed infrastructure.
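
To see the scale of that tradeoff, here's a back-of-envelope sketch. Every number in it is an illustrative assumption (fleet size, instance price, engineer cost), not a measurement; plug in your own:

```python
# Back-of-envelope: the compute "wasted" by scaling a whole monolith vs. the
# people cost of running distributed infrastructure. All figures are assumed.

instances = 20                  # monolith fleet size (assumed)
instance_cost_per_month = 150   # USD per instance per month (assumed)
overprovision = 0.30            # fraction of capacity idle because everything scales together

wasted_compute = instances * instance_cost_per_month * overprovision
print(f"over-provisioning cost: ~${wasted_compute:,.0f}/month")              # ~$900/month

engineer_cost_per_month = 15_000  # fully loaded engineer cost (assumed)
ops_overhead = 1.0                # extra headcount spent on service plumbing (assumed)

distributed_overhead = engineer_cost_per_month * ops_overhead
print(f"distributed-systems overhead: ~${distributed_overhead:,.0f}/month")  # ~$15,000/month
```

Under these assumptions, the over-provisioning bill is a rounding error next to the headcount it would take to avoid it.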

Modern infrastructure supports smarter vertical scaling too. Instead of spinning up 1000 tiny 512MB containers, run 100 fat 4GB instances. Fewer network hops, fewer orchestration headaches, fewer points of failure.

Compute is cheap. Engineering time is expensive. Distributed systems debugging is really expensive.

"We Need Blast Radius Isolation"

The pitch: If one service crashes, the others keep running.

The reality: A well-structured monolith with proper error handling, circuit breakers, and multiple instances behind a load balancer has pretty good fault isolation already.
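
To make that concrete, here's a minimal sketch of an in-process circuit breaker; the payment dependency it wraps (`charge_card`) and the thresholds are hypothetical:

```python
import time

class CircuitBreaker:
    """Tiny in-process circuit breaker: after too many consecutive failures,
    stop calling the dependency for a cooldown period and fail fast instead."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # cooldown over, let a trial call through
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Usage inside the monolith: wrap a flaky downstream call.
payments_breaker = CircuitBreaker()
# payments_breaker.call(charge_card, order_id)  # charge_card is a hypothetical dependency
```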

Meanwhile, microservices introduce new blast radius problems. One service gets slow, downstream services queue up, timeouts cascade, and suddenly everything is drowning together anyway.

"We Need Faster Builds and Deploys"

The pitch: Our monolith takes an hour to build. Microservices let us ship faster.

The reality: Modern build tools (Gradle, Bazel) do incremental compilation. Distributed build caching shares artifacts across CI runs. A modular monolith can build only what changed.

But what about testing? Don't you need full end-to-end tests for every change?

No. You don't. And here's the uncomfortable truth: microservices don't solve this problem, they hide it.

With microservices, you deploy your one service fast, but you still need E2E tests across services to catch integration breaks. Except now those tests are harder to write and flakier. So teams often just skip them. Ship and pray.

The actual solution is better test architecture:

- Fast unit tests scoped to the module that actually changed
- Contract tests at module boundaries to catch integration breaks (sketched below)
- A small end-to-end suite reserved for the critical user paths
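
By "contract tests" I mean tests that pin down the interface one module exposes to its consumers, so a breaking change fails in seconds without booting the whole system. A minimal sketch, with a hypothetical `billing` module:

```python
# Contract test: the orders code depends on exactly this shape of the billing
# module's public interface. Module and function names are hypothetical.
from billing.api import create_invoice  # the only billing entry point orders may use

def test_create_invoice_contract():
    invoice = create_invoice(customer_id="c-123", amount_cents=4999, currency="USD")

    # Pin only the fields downstream code actually relies on.
    assert "id" in invoice
    assert invoice["amount_cents"] == 4999
    assert invoice["status"] in {"pending", "paid"}
```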


The Modular Monolith: The Boring Right Answer

What if you could have:

- Clear ownership boundaries and independent workstreams per team
- Fast builds and focused tests
- One deployable artifact, one process to trace and debug

That's the modular monolith. One deployable thing, internally structured as separate, well-bounded concerns.

On deploy: run unit tests for the changed module, run contract tests to verify interfaces, ship it. Full E2E saved for critical paths.
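
As a sketch of what "run unit tests for the changed module" can look like in CI, assuming a src/<module>/ layout with a mirrored tests/<module>/ tree (both assumptions):

```python
# Sketch: find which top-level modules changed on this branch and run only
# their unit tests. Contract tests and the critical-path E2E suite run after.
import subprocess
import sys

changed_files = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

changed_modules = {
    path.split("/")[1]
    for path in changed_files
    if path.startswith("src/") and path.count("/") >= 2
}

if not changed_modules:
    print("no module changes; skipping unit tests")
    sys.exit(0)

for module in sorted(changed_modules):
    print(f"running unit tests for {module}")
    subprocess.run(["pytest", f"tests/{module}"], check=True)  # any failure stops the deploy
```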

The catch? Microservices enforce boundaries through network walls. Monoliths require discipline. But if your team respects module boundaries, you get 80% of the organizational benefit with 20% of the complexity.
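
That discipline can also be checked mechanically rather than relying on code review alone. Here's a crude sketch of a CI check that flags imports reaching past another module's public surface; the src/ layout, the `api` convention, and the module names are all assumptions, and a real project would more likely use a dedicated import-linting tool:

```python
# Crude module-boundary check: each top-level package under src/ may be imported
# by its siblings only through its public `api` submodule. Layout is hypothetical:
#   src/billing/api.py, src/billing/internal/...
#   src/orders/api.py,  src/orders/internal/...
import ast
import pathlib
import sys

SRC = pathlib.Path("src")
MODULES = {p.name for p in SRC.iterdir() if p.is_dir() and not p.name.startswith("_")}

def violations(path, home):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            parts = name.split(".")
            if parts[0] in MODULES and parts[0] != home and parts[1:2] != ["api"]:
                yield f"{path}: imports {name} (go through {parts[0]}.api instead)"

problems = [problem
            for module in MODULES
            for source in (SRC / module).rglob("*.py")
            for problem in violations(source, module)]

print("\n".join(problems) or "module boundaries respected")
sys.exit(1 if problems else 0)
```

Run something like this in CI and the "network wall" becomes a failing build instead.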


The Real Cost: Engineers Who Can't Think in Systems

Here's the thing that bothers me most.

In a monolith, you can grep the codebase. You can trace a request from entry to database and back. You might not understand all of it, but you can if you want to. The knowledge is accessible.

In microservices, half the system is behind network calls to services you don't own, can't see, and might not even have repo access to. You learn to treat everything as black boxes.

What this produces:

- Engineers who treat everything outside their own service as an unknowable black box
- Debugging that stops at the network boundary: "not my service, not my problem"
- Teams where nobody can reason about the system end to end

The counterargument is that at true scale, no single person can understand everything anyway. Specialization is inevitable.

Sure. But microservices accelerate and enforce that fragmentation prematurely. A 50-person company doesn't need Netflix's isolation; adopting it anyway just imports Netflix's problems without Netflix's traffic.

The deeper loss: engineers who grew up only touching one service never learn to think in systems. That's a skill atrophying across the industry.


The Bottom Line

Microservices solve one real problem: "We have 500 engineers and they can't all work on one codebase without chaos."

If that's not your problem, you're probably importing complexity for no reason.

The technical justifications (scaling, blast radius, deploy speed) are solvable without fragmenting your system. The tooling exists. The patterns exist. They're just less fashionable.

Meanwhile, the industry keeps producing engineers who've never traced a request through a complete system, never debugged without distributed tracing tools, never understood how the pieces fit together.

Maybe that's fine. Or maybe we're trading away something important for architectural purity we didn't need.


This post emerged from a conversation where I tried to steelman microservices and kept finding the arguments hollow. Sometimes the boring answer is the right one.