This is a repost from the Dell Entrepreneur Blog.
In the 10 or so startups I have helped build since 1995, one of the biggest challenges we always faced was the “scale problem”. Scale is a problem of success, a great problem you get to solve when you succeed at creating some initial value for your customers. If the measure of a software company’s ability to innovate is the velocity of software creation, then the measure of a web software company’s ability to innovate includes getting that software tested, integrated, and successfully deployed into production. The scale problem can impact each of these functional areas differently and sometimes in surprising (and interdependent) ways. Often, the resolution of one scale problem simply reveals another previously unknown scale problem, leading to a seemingly unending list of issues and remediation activities.
Over the last 10 years, creating value in web software applications, and scaling them up, has become a lot easier. The growth of agile software development practices has greatly accelerated software development, testing, and delivery. At the same time, the advent of cloud services from Amazon, Google, and Microsoft has reduced the time to acquire and deploy web infrastructure to near zero. Continuous integration and deployment have taken hold as an operations and architecture style, automating test and integration processes and reducing the overall time between development and customer benefit. Today, software startups can build world-class web applications, scale them to millions of users, and ensure their availability with less money, less time, and fewer employees than ever before.
These advances in scale have not been without tradeoffs, however, and a new set of problems has emerged around development and operations complexity. While the horizontal scaling pattern increases capacity along with the number of application server instances, the difficulty becomes managing the sheer number of application server deployments, along with their configuration and dependencies on other infrastructure and services. This also has severe implications for continuous integration and deployment processes, where functionality often has to be “stubbed out” owing to differences between the production and development environments. While further infrastructure abstraction in the form of platform-as-a-service can improve developer workflows and further reduce operations complexity, these issues are not simply negated by PaaS. As it turns out, many of the startups that initially adopted PaaS have eventually moved on to more traditional infrastructure models as they have scaled, owing to better economics and increased flexibility to manage problems and make changes. These problems and complexities are at the root of why so many people are excited about containerization, and more specifically, Docker (the de facto containerization standard for Linux).
One of the major benefits of Docker and containerization is that they enable an architectural style known as immutable infrastructure. In an immutable infrastructure, components of the infrastructure are never modified once deployed, but rather they are replaced by new components generated from a pristine state. For example, in the “old world” of 2008, an upgrade to the Java version across an infrastructure of 1000 servers would require:
- logging in to each server
- downloading the Java release
- installing Java
- restarting the application servers
To deal with the scale challenge, these steps would be automated through scripting (an ssh for loop), configuration management tools (Puppet, Chef), and other external systems. The problem, of course, is that this type of process works only until it doesn’t, at which point teams must discover the differences between systems and environments and recover on the fly.
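The “ssh for loop” approach above can be sketched as a small shell routine. The inventory file, package name, and service name here are illustrative assumptions, not details from any real deployment.

```shell
# upgrade_java HOST — run the 2008-era upgrade steps on one server over ssh.
# The package name (openjdk-6-jre) and service name (app-server) are
# hypothetical placeholders.
upgrade_java() {
  ssh "$1" 'sudo apt-get update &&
            sudo apt-get install -y openjdk-6-jre &&
            sudo service app-server restart'
}

# The loop itself, over a (hypothetical) 1000-line inventory file:
# while read -r host; do upgrade_java "$host"; done < hosts.txt
```

Any host that is unreachable, mid-reboot, or subtly different from its peers silently drifts out of sync — exactly the fragility the post describes.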
In the new world of immutable application containers, applications are built in place with their dependencies and configuration. Wherever those containers are deployed, they run with the same dependencies and configuration. Containers carry no infrastructure-specific dependencies, such as hostnames and IP addresses; those are injected at runtime when the container is deployed on a target host. This delineation provides a very clean separation of concerns between the application and the host infrastructure, and ultimately reduces the number of configuration endpoints by orders of magnitude (to just one).
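As a minimal sketch of that separation: the same pre-built image runs unchanged in every environment, and only the injected values differ. The image name, environment variable, and ports below are hypothetical.

```shell
# run_app DB_HOST — start the immutable application image, injecting the
# environment-specific database endpoint at deploy time. The image itself
# (example/app:1.4.2, a made-up name) is identical in every environment.
run_app() {
  docker run -d -e DB_HOST="$1" -p 8080:8080 example/app:1.4.2
}

# run_app db.staging.internal    # staging
# run_app db.prod.internal       # production: same image, different injection
```

The `-e` flags are the only per-environment configuration endpoint; everything else ships inside the image.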
The benefits for software development are vast. Already, Docker (and others) are working on various new forms of service discovery to solve the infrastructure dependency injection problem, and with it the “awareness” of dependencies between application components on different servers and infrastructure. This means that in the future, development teams can focus on creating and maintaining microservices, another important scaling pattern for the software development process. Microservices are discrete pieces of functionality packaged as network-available services, often behind REST interfaces, that are completely independent of other services. By leveraging the immutable architecture style, integration testing becomes a matter of deploying the necessary microservices from their pristine state in the repository and executing the test harnesses against the complete system.
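That style of integration testing can be pictured as a short routine: deploy each service fresh from its pristine image, exercise the assembled system, then tear everything down. The image names and the `run_test_harness` step are illustrative placeholders.

```shell
# integration_test — deploy each microservice from its pristine image in the
# registry, run the test harness against the complete system, then tear down.
# Image names and run_test_harness are hypothetical.
integration_test() {
  docker run -d --name users-svc  example/users:2.1.0 &&
  docker run -d --name orders-svc example/orders:3.0.1 &&
  run_test_harness                                # exercise the REST interfaces
  status=$?
  docker rm -f users-svc orders-svc >/dev/null    # back to a clean slate
  return $status
}
```

Because every run starts from the same pristine images, a failing test points at the code under test rather than at environment drift.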
For operations, the immutable style provides a means to deploy major releases fractionally, testing them with small portions of the user base before rolling them out across the entire infrastructure. Containers themselves can be automatically introspected to derive dependencies on other containers, networks, storage, and other systems. Rolling an upgrade back is merely a matter of deploying the previous version of the container(s) in question and terminating the more recent version.
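Under this model a rollback is just another deployment, sketched here with hypothetical image tags and container names:

```shell
# rollback — redeploy the previous immutable release and terminate the newer
# one; no in-place patching is involved. Names and tags are hypothetical.
rollback() {
  docker run -d --name app-v1 example/app:1.4.1 &&   # previous pristine version
  docker rm -f app-v2 >/dev/null                     # terminate the newer release
}
```

The previous release starts from the same pristine state it was originally built from, so the rollback is as predictable as the original deploy.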
In the last few months there has been a relentless outpouring of new orchestration systems for Docker containers, including Kubernetes (Google) and Mesos (Mesosphere), as well as PaaS capabilities from Flynn.io, Deis, and OpenShift (Red Hat). As these new tools continue to emerge and mature, all of these developments will ultimately translate into a lot less software startup equity being spent on overhead, and a lot more being spent on creating value through working software that scales.
That is a huge win for start-ups, their founders, their employees, their investors – and their customers.
Here at Dell Cloud Marketplace, we’re creating a new generation of tools to help IT and developer teams compare, consume, and control cloud services. Emerging technologies like Docker and containerization are a big part of what we’re building and we’re excited to showcase some of our progress in the forthcoming public beta of Dell Cloud Marketplace.