Are Monolithic Software Applications Doomed for Extinction?

Over the last few years, Dev9 has advocated for large, legacy software platforms (monoliths) to be broken up into a series of smaller applications (microservices). Some with monolith experience agree with our philosophy, while others believe microservices are just hype and keep their monoliths chugging along. With a well-functioning monolith, it is easy to believe that breaking up the legacy application to create microservices presents more obstacles than benefits. As Martin Fowler points out in his microservices article, the jury is still out on whether microservices represent the future. However, it is clear to Dev9 that monoliths do not fit nicely into the current world of flexible infrastructure.

The Advantages of Monoliths

Monolithic applications are a natural way for an application to evolve. Most applications start out with a single objective, or a small number of related objectives. Over time, features are added to the application to support business needs.

The most obvious place to put new functionality is in the existing application. There are several reasons for this:

  1. Communication costs. The cost of communication between components is near zero when the code lives in the same application stack. This means developers do not need to think about, or code around, things like networks and availability.
  2. Reusability. If the problem is similar, code from existing applications can be reused with little effort. Code reuse at this scale is a natural thing to do when developing, and can decrease time-to-market for application features.
  3. Effort. Starting a new application has always involved significant effort. You must create a new artifact with all the configuration, all the build scripting and tooling, and an entirely new set of hardware and network configuration (sometimes a herculean effort). To make matters more complex, code reuse across projects is now more complicated.
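
The communication-cost point above can be sketched in code: inside a monolith, a cross-component call is a plain function call, while across a network boundary the caller must handle timeouts and availability itself. The function names, prices, and `/price` endpoint below are hypothetical, purely for illustration.

```python
import urllib.request
import urllib.error

# In a monolith, calling another component is a plain function call:
def get_price_in_process(item_id: str) -> float:
    # Hypothetical pricing logic living in the same application stack.
    prices = {"widget": 9.99, "gadget": 24.50}
    return prices[item_id]

# Across a network boundary, the caller must code around availability:
def get_price_remote(item_id: str, base_url: str, retries: int = 3) -> float:
    # base_url and the /price endpoint are assumptions for illustration.
    last_error = None
    for _ in range(retries):
        try:
            with urllib.request.urlopen(f"{base_url}/price/{item_id}", timeout=2) as resp:
                return float(resp.read())
        except urllib.error.URLError as err:
            last_error = err  # retry on transient network failure
    raise RuntimeError(f"pricing service unavailable: {last_error}")

print(get_price_in_process("widget"))
```

The in-process version needs no retry loop, no timeout, and no failure path, which is exactly why the monolith is the path of least resistance for new features.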

Monolithic applications provide teams with a single source code tree to work from. All changes are accessible by any part of the application, allowing a single team to work closely on the whole application. In server-side applications, monoliths have been, and continue to be, the standard for the last 20 years. So, why is a change necessary?

Problems with Monoliths

Unfortunately, monoliths are imperfect in many ways. Monolithic applications have been a standard for long enough to recognize common issues that keep arising in several areas: scaling, managing change, tight coupling, and architectural divergence.

Lack of Scalability

Scaling is impacted by monolithic architectures in several ways:

Vertical Scaling:
Vertical scaling refers to the practice of adding bigger hardware to support larger loads on a system. Because databases historically could not scale in other ways, vertical scaling became a common approach to scaling them, and it is usually the first brute-force attempt at scaling anything. Unfortunately, scaling a monolithic application vertically adds complexity because the application must manage the competing priorities of its many pieces.

Breaking a monolith down into microservices does not change the nature of vertical scaling; it is still throwing hardware resources at the problem of scale. However, by their nature, microservices present smaller portions of the overall system, and each can be sized appropriately to the task it performs: smaller compute for lower-demand services, and larger compute for higher-demand ones.

Horizontal Scaling:
Horizontal scaling refers to the practice of adding multiple instances of the same application to handle increased load. In the age of the Cloud, this is by far the most common form of scaling. While many monoliths can scale this way (assuming they are built for it), it is very inefficient. The problem is that increased load is usually tied to only a portion of the features that comprise an application, yet horizontal scaling brings the entire application with it. In practice, this means you must provision more computing power than the constrained portion actually needs, because every instance must run the entire application stack.
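
A back-of-the-envelope calculation makes the waste concrete. The component names and per-instance CPU figures below are invented for illustration; only one component ("checkout") is assumed to be under increased load.

```python
# Illustrative per-instance CPU needs (cores) for three components of a system.
components = {"checkout": 2.0, "email": 0.5, "reporting": 1.5}

# Every copy of the monolith carries every component:
monolith_instance_cost = sum(components.values())

# Scaling the monolith 4x to handle checkout load scales everything 4x:
monolith_total = 4 * monolith_instance_cost

# Scaling only a checkout microservice 4x leaves the others at one instance:
micro_total = 4 * components["checkout"] + components["email"] + components["reporting"]

print(monolith_total, micro_total)  # 16.0 vs 10.0 cores for the same checkout capacity
```

Even in this tiny sketch, the monolith burns roughly 60% more compute to serve the same hot path.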

Microservices are built to scale horizontally naturally. They don't carry additional features that must be scaled with them. Instead, each portion of a system that might need to scale can be scaled independently by adding more instances of the service. In addition, microservices create network boundaries that are perfect spots for architectural solutions to scaling, such as queues and circuit breakers.
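
As a sketch of one such boundary pattern, here is a minimal circuit breaker: after a run of consecutive failures, calls fail fast for a cooldown period instead of piling load onto a struggling service. This is a simplified illustration, not a production implementation (real ones add half-open probing policies, metrics, and thread safety).

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive errors,
    calls fail fast for reset_after seconds instead of hitting the service."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # time the circuit opened, or None if closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # cooldown elapsed: allow a trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Because the breaker sits at the network boundary between two services, one slow dependency degrades gracefully instead of cascading through the whole system.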

Team Scalability:
There is another aspect of scaling that can impact monolithic applications: scaling teams up to increase the speed at which features are added. Monoliths require a lot of communication when multiple teams of developers work on them, and this overhead slows down the pace of development. While there must still be cross-application communication in more distributed systems, each team has an application or service that it maintains, and additional teams can be scaled up on each independent service.

Microservices present smaller, interconnected parts that are easier to work with for an individual team, with less collaboration required on the code. A microservices environment also offers a much easier ramp for new developers on smaller applications. There is simply less to learn.

Change is Difficult

A variation of the scaling problem occurs with application change. Any change to any part of a monolithic application requires that the entire application be redeployed. For example, if an application contains both an order-processing component and an email-communication component, then even simple changes to the email communication force the whole application to be rolled out again. In the worst case, this can mean downtime for every deployment. In every case, it means that changes must be managed as a group, delaying fixes or features to ensure they have a minimal impact.

This is not the case with microservices, as they are built around single features. While there is still a risk of downtime when components are updated, the architecture can be built to minimize impacts using things like queues. Deployments should also be done by replacing one service instance at a time, allowing services that depend on it to continue functioning.
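
The one-instance-at-a-time idea can be sketched as a rolling deployment loop. The fleet structure and `health_check` callback below are illustrative assumptions, not a real orchestrator API; tools like Kubernetes implement the same pattern for you.

```python
# Hypothetical sketch of a rolling deployment: replace one instance at a
# time so the instances that remain keep serving traffic.
def rolling_deploy(instances, new_version, health_check):
    updated = []
    for instance in instances:
        candidate = {"host": instance["host"], "version": new_version}
        if not health_check(candidate):
            # Halt the rollout: instances updated so far stay in service,
            # and the rest keep running the old version.
            return updated, instance["host"]
        updated.append(candidate)
    return updated, None

fleet = [{"host": "svc-1", "version": "1.0"}, {"host": "svc-2", "version": "1.0"}]
updated, failed_at = rolling_deploy(fleet, "1.1", health_check=lambda inst: True)
print([inst["version"] for inst in updated], failed_at)
```

The key property is that a failed health check stops the rollout early, so a bad build never takes down the whole fleet at once.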


Tight Coupling

Coupling refers to how dependent one piece of code is on another. In an ideal application, everything is done through clear interfaces that hide the implementation from the systems that depend on them. While coupling is largely a design consideration, monoliths make it easy to access all of the code used to build the application. In the worst cases, the code becomes spaghetti that is nearly impossible to maintain. Even in the best cases, tight coupling means that systems can interact in unexpected ways because the dependencies between components become blurry.

Coupling can still be a problem in microservices, but they have two advantages. First, code is shared only explicitly, through libraries, which means changes always arrive as library updates. Second, each service can only be consumed through an explicit external interface that visibly changes when updates are made. With clean interfaces, unexpected coupling issues are easy to track down.
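
An explicit contract can be sketched with an abstract interface. The `EmailClient` interface and stub implementation below are hypothetical; the point is that callers can only depend on the declared contract, never on the service's internals.

```python
from abc import ABC, abstractmethod

class EmailClient(ABC):
    """Explicit external interface for a hypothetical email service.
    Callers depend only on this contract, never on the service's internals."""

    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> bool:
        ...

class StubEmailClient(EmailClient):
    # A real client would call the email service over the network; this stub
    # records requests so the contract can be exercised without one.
    def __init__(self):
        self.sent = []

    def send(self, to, subject, body):
        self.sent.append((to, subject, body))
        return True

client: EmailClient = StubEmailClient()
print(client.send("a@example.com", "Order shipped", "On its way!"))
```

Any change to `send` is a visible interface change, which is exactly the property that makes coupling in a microservices system explicit rather than accidental.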

Architectural Divergence

Most applications will continue to accrue features over their lifetime. This is a good thing: the business changes, and the needs the application fulfills often change with it. Unfortunately, over time, most monolithic applications pass through different maintainers and architects. In addition, state-of-the-art software development practices frequently change. For example, I have seen older applications with three distinct architectural patterns in the same codebase. Needless to say, working in this environment requires a lot of context switching and tribal knowledge. It can get bad enough that, like Ravenholm, there are areas of the code that programmers no longer know.

Architectural divergence is an expected aspect of microservices, which are built so that each service can have its own simple design. In larger organizations, this can include everything from the framework to the language. It doesn't protect against bad code in a single service, but it does limit the impact to just that service, which can be rewritten more easily.

A New Age

The desire to break down monolithic applications has been driven by two primary trends. First, dynamic infrastructure (computing resources that can be created and moved with little effort) makes it much easier to spin up new applications. This includes traditional virtual machines, such as VMware, and every piece of Cloud computing. Additionally, small, easily-moved virtualization components called containers have made it easy for teams to build very small components that fit together across the network. This eliminates much of the cost of setting up infrastructure for new applications.

The second trend is “microframeworks.” Microframeworks are simple application stacks that are easy to create and don’t have the traditional weight of a new application. They can easily be sized from a single service to a set of related services and don’t require the same overhead to spin up.

Another emerging trend is pushing this concept of microframeworks to a logical extreme with serverless computing. In this model, each service endpoint has its own code that is managed by a Cloud controller.
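
The per-endpoint model can be sketched in the AWS Lambda handler style, where each service endpoint is a single function the Cloud controller invokes per request. The event shape and field names below are illustrative assumptions.

```python
import json

# Each endpoint is just a function; the Cloud controller handles routing,
# scaling, and instance lifecycle around it.
def handler(event, context):
    # 'event' carries the request; the pathParameters shape is an assumption.
    item_id = event.get("pathParameters", {}).get("item_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"item": item_id, "status": "ok"}),
    }

resp = handler({"pathParameters": {"item_id": "widget"}}, None)
print(resp["statusCode"], resp["body"])
```

There is no application stack at all here; the "service" is reduced to the code for one endpoint, which is the monolith trend taken to its endpoint-per-function extreme.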


Conclusion

The drive to break down monoliths can come from scaling problems, code maintenance issues, or the desire to take advantage of dynamic and managed infrastructure such as that provided by the Cloud.

Whatever the driver, the smaller components offered by microservices are easier to develop and maintain for programmers, and the tools to help operations manage this growing number of applications are rapidly maturing. In order to be prepared for the future, it is critical to replace or reduce the monolithic applications that are at the heart of many companies.