The Case for Monoliths: Why We Stopped Defaulting to Microservices
Confession: We Over-Engineered
A few years ago, every new project at Infinitiv started with the same architecture: a Kubernetes cluster running 8-12 microservices, an API gateway, a service mesh, distributed tracing, and a message queue. For a SaaS product with zero users.
We were solving imaginary problems while creating real ones. It took a few painful projects to learn the lesson, but we've fundamentally changed our approach.
The Hidden Costs of Microservices
Operational Complexity
Each microservice needs its own CI/CD pipeline, health checks, logging configuration, error handling, and monitoring. Multiply that by 10 services and you've spent weeks on infrastructure before writing a single line of business logic.
Distributed System Problems
The moment you split a monolith into services, you inherit an entirely new class of bugs: network partitions, eventual consistency, distributed transactions, service discovery failures, and cascading failures. These problems don't exist in a monolith.
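To make that concrete, here's a small sketch contrasting the same lookup as an in-process call versus a call across a service boundary. The service URL, timeout, and retry policy are hypothetical, chosen only to illustrate the extra failure modes the caller suddenly owns.

```typescript
// In a monolith: a function call either returns or throws, synchronously.
function getUserNameLocal(users: Map<string, string>, id: string): string {
  const name = users.get(id);
  if (name === undefined) throw new Error("not found");
  return name;
}

// Across a service boundary: the caller now owns timeouts, retries, and
// the ambiguous case where the request may or may not have been processed.
async function getUserNameRemote(id: string, retries = 3): Promise<string> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetch(`http://users-service/users/${id}`, {
        signal: AbortSignal.timeout(500), // the network can hang indefinitely
      });
      if (!res.ok) throw new Error(`status ${res.status}`);
      return (await res.json()).name;
    } catch (err) {
      if (attempt === retries) throw err; // give up; caller must degrade
    }
  }
  throw new Error("unreachable");
}

console.log(getUserNameLocal(new Map([["1", "Ada"]]), "1")); // Ada
```

The local version has exactly one failure mode; the remote version has several, and every one of them needs a policy decision.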
Development Velocity
In a monolith, a feature that touches three domain areas is a single pull request. In a microservices architecture, it's three PRs across three repos, coordinated deployments, and integration testing across service boundaries. For a team of 3-8 engineers, this overhead is a real drag on velocity.
Debugging Difficulty
"The request worked in staging but fails in production" is annoying in a monolith. In a microservices architecture, that same bug might involve tracing a request through 5 services, checking network policies, and correlating logs from 3 different systems. The cognitive overhead is enormous.
When Monoliths Shine
A well-structured monolith is not a ball of mud. Modern frameworks like Next.js, Rails, and Django encourage good separation of concerns within a single deployable unit. You can have:
- Clear module boundaries with defined interfaces
- Independent domain logic organized by feature
- Shared infrastructure (auth, logging, database) without duplication
- Simple deployment: one artifact, one process, one set of logs
For projects with fewer than 50 engineers, a monolith is almost always the right starting point.
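A minimal sketch of what those module boundaries can look like in a single deployable unit. All names here (`BillingModule`, `createInvoice`, the in-memory store) are illustrative, not taken from any framework; the point is that other modules depend only on the exported interface, never on billing's internals.

```typescript
// The billing module's public surface: the only thing other modules may import.
interface Invoice {
  id: string;
  customerId: string;
  amountCents: number;
}

interface BillingModule {
  createInvoice(customerId: string, amountCents: number): Invoice;
}

// Internal implementation; an in-memory array stands in for billing's own
// database schema. Nothing outside the module touches this state directly.
function makeBillingModule(): BillingModule {
  const invoices: Invoice[] = [];
  return {
    createInvoice(customerId, amountCents) {
      const invoice = { id: `inv_${invoices.length + 1}`, customerId, amountCents };
      invoices.push(invoice);
      return invoice;
    },
  };
}

// The orders module calls billing through its interface, not its tables.
function checkout(billing: BillingModule, customerId: string, totalCents: number): Invoice {
  return billing.createInvoice(customerId, totalCents);
}

const billing = makeBillingModule();
const invoice = checkout(billing, "cust_42", 1999);
console.log(invoice.id, invoice.amountCents); // inv_1 1999
```

Everything above runs in one process and ships as one artifact, yet the dependency direction is as explicit as it would be between two services.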
Our Current Approach
We now follow a simple heuristic:
- Start with a monolith, always
- Structure it well from day one with clear module boundaries
- Extract a service only when there's a concrete, measurable reason — not a hypothetical future need
- Valid reasons to extract: independent scaling requirements, different technology needs, separate team ownership, regulatory isolation
In practice, the first services we extract are background job processing, real-time communication (WebSockets), and heavy computation tasks (image/video processing, ML inference).
The Modular Monolith
The sweet spot we've found is the "modular monolith" — a single deployable application where each domain module has strict boundaries, its own database schema (or schema namespace), and communicates with other modules through defined interfaces rather than direct database queries.
This gives you 90% of the benefits of microservices (loose coupling, clear ownership, independent evolution) with 10% of the operational complexity.
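One way to keep modules communicating through defined interfaces rather than direct database queries is an in-process event bus. The sketch below is an assumption-laden illustration (the `EventBus` class and the `order.placed` topic are invented for this example), but it shows the loose coupling: the orders module publishes an event and knows nothing about billing, so extracting billing later mostly means swapping the bus for a real message queue.

```typescript
// Minimal in-process event bus; hypothetical, not a library API.
type Handler<T> = (payload: T) => void;

class EventBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(topic, list);
  }

  publish<T>(topic: string, payload: T): void {
    for (const h of this.handlers.get(topic) ?? []) h(payload);
  }
}

// Orders module: publishes an event; it never queries billing's tables.
function placeOrder(bus: EventBus, orderId: string, amountCents: number): void {
  // ...persist the order in the orders schema here...
  bus.publish("order.placed", { orderId, amountCents });
}

// Billing module: subscribes to the topic it cares about. Extracting it
// later means replacing the bus with a queue, not rewriting orders.
const bus = new EventBus();
const invoicedOrders: string[] = [];
bus.subscribe<{ orderId: string; amountCents: number }>("order.placed", (e) => {
  invoicedOrders.push(e.orderId);
});

placeOrder(bus, "ord_1", 2500);
console.log(invoicedOrders.includes("ord_1")); // true
```

The same topic names and payload shapes can survive the eventual extraction, which is what makes the modular monolith a stepping stone rather than a dead end.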
The Real Lesson
The lesson isn't "monoliths good, microservices bad." It's "start simple, add complexity only when justified." Every architectural decision should be driven by current, measured needs — not anticipated future requirements that may never materialize.
Build the simplest thing that works, measure where it breaks, and evolve from there. Your future self will thank you.