
The year 2014 was a very exciting period in cloud computing, as several emerging technologies and trends began to shape the future of IT. As 2014 draws to a close, the overarching trend disrupting technology and industries everywhere is still the growth of software. Software is now nearly ubiquitous thanks to the proliferation of general-purpose processing power, low-cost personal computing devices, and universal connectivity through the Internet. The biggest challenge facing developers and IT operations teams today is building and scaling complex distributed systems, and continuously evolving those systems in response to rapidly changing customer and market needs.

Three major trends that are taking shape to meet this challenge are containerization, hybrid cloud, and converged infrastructure:

2015 is the Year of the Whale

Remember way back when the world tried to pretend that VMware was no big deal? Yes, so do I. It is a rare and enjoyable circumstance in IT when we get to see a single technology create a nearly universal upheaval. Containerization is rapidly taking over as the de facto mechanism for packaging and deploying distributed applications, creating widespread disruption to virtually every part of IT. The excitement around Docker has to do with how the technology provides a consistent mechanism to build, deploy, and scale application components with isolated system resources. Docker neatly contains the “dependency hell” matrix of application dependencies and configurations at container build time, enabling an immutable infrastructure pattern for continuous integration and deployment.
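
To make that concrete, here is a minimal sketch of what a container build definition can look like; the base image, packages, and file names are purely illustrative, not a recommendation. Because every dependency is declared here and baked into the image at build time, every container started from the image is identical wherever it runs:

    # Illustrative Dockerfile: dependencies are pinned at build time,
    # so every container started from the resulting image is identical.
    FROM ubuntu:14.04

    # Install the runtime dependencies inside the image, not on the host.
    RUN apt-get update && apt-get install -y python python-pip

    # Bake the application and its pinned dependency list into the image.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    COPY . /app

    WORKDIR /app
    CMD ["python", "app.py"]

Building the image (docker build -t myapp .) and running it (docker run -d myapp) then become one-line operations, which is precisely what makes the continuous integration and deployment story so compelling.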

Docker disrupts Operating Systems 

The big news in late 2014 was the announcement by CoreOS, the minimalist operating system distribution designed to simplify container deployment at scale, that it will pursue its own containerization technology called Rocket. It has been clear for some time that Docker and CoreOS were on a collision course, but this announcement coinciding with DockerCon sent a clear message about the current state of affairs: Docker and CoreOS are not friends anymore. The two open source companies are competing for the same customers because, without an operations platform, Docker containers are ultimately just a developer tool.

Competition is always healthy for emerging technologies, and we now have a two-horse race. The unquestionable leader in this race is Docker, which has by far the larger share of the open source development community, mindshare, and partnership traction. This is a particularly difficult situation for CoreOS: unlike Docker, which is effectively just software that can be packaged for any Linux distribution, CoreOS is a standalone operating system. This means that, in addition to Docker, CoreOS also competes with industry giants like Microsoft, VMware, Red Hat, and Ubuntu. Those four incumbents also just happen to be Docker’s key partners.

Meanwhile, Ubuntu and Red Hat have formulated their first responses by introducing CoreOS-like capabilities in two new projects, Ubuntu Core and Project Atomic, and Microsoft has announced that future versions of Windows Server will support Docker containers natively.

Docker disrupts Platform as a Service

The bigger news in late 2014 was Docker’s announcement at DockerCon of support for orchestrating multi-container distributed applications, with the introduction of Docker Swarm and Docker Compose. Docker Swarm is a clustering system for Docker-enabled hosts, providing basic host discovery and scheduling of containers onto those hosts. Docker Compose provides a simple YAML-based configuration language to describe and assemble multi-container distributed applications, and is clearly what became of Docker’s acquisition of Fig. This means the future is also quite uncertain for Platform as a Service.
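
As a rough illustration of the Compose model (inherited from Fig), the hypothetical configuration below describes a two-container application; the service names and port numbers are assumptions made for the sake of the example:

    # Illustrative Compose file: a web front end linked to a Redis container,
    # described declaratively and started together with a single command.
    web:
      build: .            # build the web image from the local Dockerfile
      ports:
        - "8000:8000"     # expose the application port on the host
      links:
        - redis           # make the redis container reachable as "redis"
    redis:
      image: redis        # use the stock Redis image

A developer can bring the whole application up with one command (docker-compose up), and a scheduler like Swarm can then place those containers across a cluster of hosts.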

Cloud Foundry, OpenShift, and ActiveState have all moved quickly to incorporate basic support for Docker containers. PaaS platforms also offer security and multi-tenancy capabilities that are valuable to large businesses. But with its own native container orchestration, Docker can provide much of the developer-centric “git push” user experience for deploying distributed applications, with less complexity than deploying a full-blown PaaS solution. That user experience is a prime motivator for the companies exploring the PaaS option in the first place.

For now, that simplicity comes with tradeoffs in functionality, since Docker Swarm and Docker Compose are still alpha-quality software.

Docker disrupts Virtualization, Private Cloud, Configuration Management

For years now, virtual machine images have often been abused as the de facto mechanism for packaging and deploying distributed applications. From an IT operations perspective, virtual machines offer superb resource isolation, security, and stability. But virtual machines are also full-blown operating system instances that require full management, each with its own storage and network settings, dependencies, and environment-specific configuration. Virtual machine images are large and unwieldy live filesystems (or, these days, collections of files). Once deployed, virtual machines tend to “drift” from their original pristine state as they are modified over time by software, scripts, and human beings to suit the requirements of the moment. Indeed, the need to manage an explosion in the number of virtual servers, the so-called “VM sprawl” problem, has helped configuration management systems like Chef, Puppet, Ansible, and Salt become fundamental tools of the system administration trade. It is also unclear where developer responsibility ends and system administrator responsibility begins with a virtual machine, frequently making troubleshooting an “all hands on deck” experience.

You have probably heard that containers and virtualization are perfectly complementary – that is certainly true. However, containers have less performance overhead, use resources more efficiently, and are faster to deploy than full-blown virtual machines. Containers have another very important advantage over virtual machines in that they are software-defined: they are created from metadata that specifies their composition. Together, these characteristics enable a powerful pattern for managing distributed applications – immutable infrastructure. In the immutable infrastructure paradigm, minimalist bare-metal operating systems are automatically discovered, configured, and assigned container images through automation. Containers are created and provisioned when needed, and destroyed when altered or no longer needed. The delineation of responsibility between IT and development teams is crystal clear: IT runs the ships, and developers run the containers. In 2015 we will see many early adopters begin to standardize on fully automated container-to-metal architectures for their private cloud offerings.
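
In practice, the immutable pattern amounts to never patching a running container. A rough sketch of the workflow, with illustrative image and container names, looks like this:

    # Sketch of the immutable workflow: rebuild and replace, never modify in place.
    docker build -t myapp:v2 .              # bake the change into a new image
    docker stop myapp && docker rm myapp    # retire the old container entirely
    docker run -d --name myapp myapp:v2     # start a fresh, known-good replacement

Because the container is disposable and its composition lives in the build definition, there is simply nothing left to drift.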

Yet for all those advantages, IT will continue to struggle with the management of virtual machines for the foreseeable future. There are a number of very good reasons, many of which have to do with legacy applications and multi-tenancy. For a long time to come there will be (mostly legacy) applications that simply do not work well in containers, so it will likely take years before containers can claim to be the dominant model.

Docker disrupts Infrastructure as a Service

Any cloud provider with an appropriate base system image can host Docker containers, but cloud providers are moving quickly to make them easier to manage. Amazon introduced Docker support in Elastic Beanstalk, Google offers Container Engine in the Google Cloud Platform, CenturyLink introduced Panamax, and DigitalOcean now offers CoreOS as a base image. Expect that in 2015 we will see a whole new set of container-centric APIs and cloud services emerge from these providers.

Hybrid Clouds 


Hardware refresh cycles, coupled with the demand for more agility in IT services, are finally causing large businesses to take a serious look at hybrid cloud architectures. Many businesses continue to operate infrastructure that has long outlived its planned obsolescence, and virtually every IT leader I have spoken with in the last few months is looking to leverage public cloud as part of their overall IT strategy. The bad news is that IT teams remain constrained by legacy infrastructure, regulatory and compliance issues, and a long list of security concerns. This is more or less the same set of barriers that has been stalling public cloud adoption in the enterprise for several years.

Add to that list the complexity of managing multiple clouds and vendor relationships, along with the difficulty of migrating workloads and data between clouds, and it’s easy to understand why businesses have taken their time getting to hybrid. For hybrid cloud in particular, this complexity may ultimately motivate businesses to acquire cloud services through an intermediary like a cloud service brokerage or marketplace, which provides tools to manage multiple clouds in a single user experience. Right now, development and test use cases still predominate for hybrid clouds, and that is likely to remain the case throughout 2015.




Automation is probably the mantra I have heard repeated most among IT professionals and leadership in the last half of 2014, and it goes hand in hand with the goal of enabling self-service for users. Businesses everywhere are still struggling to scale IT operations under constant budget pressure. The only way to get more work done with fewer people is to automate. Continuous integration and deployment is also a very common goal among the engineering and operations teams I have spoken with recently. Along with Docker itself, there are some very nice tools like Shippable emerging to take advantage of containerization.

In 2015, I expect that we will see some of the existing automation frameworks like Puppet, Chef, Salt, and Ansible develop features to handle difficult tasks like bare-metal hardware orchestration (several of these already have some capabilities in this respect).  We call this getting the infrastructure to “ready-state”, which is the first moment the infrastructure is available to accept a workload after first-time hardware provisioning.
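
As a simple illustration of where these frameworks already help, the hypothetical Ansible playbook below brings freshly provisioned Ubuntu hosts to a container-ready state; the inventory group and package names are assumptions, and the harder bare-metal work (BIOS, firmware, RAID, PXE) has to happen before a playbook like this can even run:

    # Illustrative playbook: prepare newly provisioned hosts to run containers.
    - hosts: new_nodes            # hypothetical inventory group of fresh machines
      sudo: yes
      tasks:
        - name: Install Docker
          apt: name=docker.io state=present update_cache=yes
        - name: Ensure the Docker daemon is running
          service: name=docker state=started enabled=yes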

The problem with third-party automation tools is that they have great difficulty keeping pace with hardware vendors, who are constantly refactoring and shipping new products, hardware revisions, firmware updates, and management software. Ultimately, it is most likely the hardware vendors themselves who will have to deliver superior automation and orchestration.

This brings us to our last major disruptive trend, converged infrastructure:

Converged Infrastructure


For virtually as long as IT has been supporting businesses, one of the primary functions of the IT organization has been to evaluate, acquire, and integrate different hardware and software to create business systems. And since the dawn of the PC era, hardware vendor lock-in has been a primary design consideration. As web-scale architectures have matured and become widely known, and with the advent of virtualization (containers included), the days of IT acting as its own hardware integrator may be coming to an end. Virtualization of any kind means that IT teams can be less concerned about hardware vendor lock-in, since they can quickly re-deploy these systems without downtime. As discussed above, modern IT teams are becoming primarily concerned with creating and maintaining software. The imperative to respond quickly to customer and market trends means there is less lead time than ever. What if deploying infrastructure on-premise were no more complicated than assembling Lego blocks?

Converged infrastructure, or what Gartner calls “rack based computing”, is the inevitable culmination of many of the trends discussed here. Converged systems help ordinary businesses create web-scale infrastructures by delivering pre-integrated compute, network, storage, and virtualization. For on-premise IT, as well as for service providers, converged infrastructure is probably the most exciting development in a dozen years. Back in 2007, I joined a hot young start-up called 3Leaf Systems that was creating a converged network, storage, and compute fabric. It is a great example of exactly how long these technology trends can take to come to fruition.

Today, every major hardware vendor has a converged infrastructure line of business, and there are a number of start-ups doing very well in this space (like Nutanix). In 2015, we can expect to see a lot of vendor activity in this area as the next generation of these systems comes to market.

Moving Forward

No doubt about it, early adopters of these technologies are in for something of a wild ride in 2015 and beyond. Even so, early investments are likely to deliver significant ROI for companies seeking better agility and lower costs. Are you an IT leader with these trends or others on your radar? I would love to learn about it, so please drop me a note on LinkedIn. I hope you had an excellent holiday start to your New Year, and I wish you a very successful 2015.