Last week at the Red Hat Summit in San Francisco, CEO Jim Whitehurst called out containerization during his keynote as a stunning new development in open source, declaring that last year he “didn’t even know what Docker was”. Mr. Whitehurst was no doubt in good company, since nothing like Docker’s meteoric rise has been seen in open source in quite some time.
Building on its previously announced support for Docker in RHEL and OpenShift, Red Hat reaffirmed its commitment to Docker, announcing Project Atomic, an innovative new deployment management solution leveraging Docker containers (and competing with the Andreessen Horowitz-backed startup CoreOS). Red Hat also announced geard, a command-line client and agent for linking Docker containers with systemd for OpenShift Origin.
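The pattern this tooling automates can be illustrated with a hand-written systemd unit that supervises a Docker container. This is a simplified sketch, not geard's actual output; the container name, image, and port are hypothetical:

```ini
[Unit]
Description=Example web service running in a Docker container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then start the
# image in the foreground so systemd can supervise the process.
ExecStartPre=-/usr/bin/docker rm -f example-web
ExecStart=/usr/bin/docker run --name example-web -p 8080:8080 example/web
ExecStop=/usr/bin/docker stop example-web
Restart=always

[Install]
WantedBy=multi-user.target
```

With a unit like this installed, the container starts at boot, restarts on failure, and is managed with the same `systemctl` commands as any other service.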
Technology purists will be quick to point out that containerization is nothing new when it comes to virtualization. Indeed, OpenVZ, the predecessor to Linux Containers (LXC), has been in wide use since 2005, and Solaris, along with BSD derivatives, has supported its own containerization implementation for many years. But Docker’s popularity has less to do with technological advancement and more to do with a fundamental shift in the operations model for IT. Docker’s container management tools are arguably far more developer-friendly than previous containerization solutions, and those benefits could hardly come at a better time.
Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) are both infrastructure paradigms that promised to reduce the operational complexity of IT infrastructure. They did the opposite. Ask your neighborhood CIO or CTO, and you will find that software developers are spending increasing amounts of their time managing software deployments instead of writing software. Add to that the increasing frequency of change driven by Agile development processes, and the new technology complexity introduced by IaaS and PaaS, and you have identified the pain that will (finally) drive the adoption of containers as a de facto means of deploying software.
The adoption of a de facto containerization standard will have widespread implications, and many positive impacts. Containerization has always had some obvious technical advantages, including better resource utilization and control of guest operating systems. One of the more nuanced benefits, however, is the abstraction of the operating system and its resource dependencies from the application and its dependencies. This latter benefit helps to enable an idealized workflow called immutable infrastructure, where the state of the application configuration and dependencies is preserved from development, through testing, and on into production. Conceptually, immutable infrastructure is accomplished through deployment automation using introspection of the container itself, or through more traditional monolithic orchestration templates. Either way, the difference is that containers provide a clean separation of concerns between development and operations dependencies. In summary, changes are no longer made to production; changes are made to containers, and containers have a finite lifecycle that is optimized for developer productivity and operational simplicity.
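The immutable-infrastructure workflow can be made concrete with a minimal Dockerfile sketch (the application and file names here are hypothetical). The application and its dependencies are baked into an image once; that same immutable artifact is then promoted from development through testing into production, and a change means building a new image rather than patching a running container:

```dockerfile
# Build once; the resulting image is the deployable artifact.
FROM ubuntu:14.04

# Dependencies are pinned inside the image,
# not installed ad hoc on the production host.
RUN apt-get update && apt-get install -y python python-pip
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# The application itself ships inside the same image.
COPY . /app
WORKDIR /app

# One process per container, with a finite lifecycle; to change
# anything, build and deploy a new image version.
CMD ["python", "app.py"]
```

Deployment then amounts to running a specific image tag in each environment (for example, `docker run example/app:1.2.0`), rather than mutating servers in place.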
A whole new generation of platforms is already emerging. Flynn.io is a new PaaS based on Docker that promises better productivity for developers, hackability, and optimized DevOps. Mitchell Hashimoto, the creator of Vagrant, has been promoting the immutable infrastructure concept for some time and is now developing Serf, a service discovery and orchestration solution that can be used with Docker. Finally, the aforementioned CoreOS has developed a lightweight Linux distribution that is Just Enough Operating System to manage Docker deployments. These solutions each tackle a tough unsolved problem, container orchestration and the creation of container-based services, albeit with very different approaches.