2015 Year of the Whale and Other Disruptive Trends in IT

The year 2014 was a very exciting period in cloud computing, as several emerging technologies and trends began to shape the future of IT.  In late 2014, the overarching trend disrupting technology and industries everywhere was still the growth of software.  Software is now nearly ubiquitous thanks to the proliferation of general-purpose processing power, low-cost personal computing devices, and universal connectivity through the Internet.  The biggest challenge still facing developers and IT operations teams today is building and scaling complex distributed systems, and continuously evolving those systems in response to rapidly changing customer and market needs.

Four major trends are taking shape to meet this challenge: containerization, hybrid cloud, automation, and converged infrastructure.

2015 is the Year of the Whale

Remember way back when the world tried to pretend that VMware was no big deal?  Yes, so do I.  It is a rare and enjoyable circumstance in IT when we get to see a single technology create a nearly universal upheaval.  Containerization is rapidly taking over as the de facto mechanism for packaging and deploying distributed applications, creating widespread disruption to virtually every part of IT.  The excitement and popularity surrounding Docker have to do with how the technology provides a consistent mechanism to build, deploy, and scale application components with isolated system resources.  Docker neatly controls the “dependency hell matrix” of application dependencies and configurations at container build time, enabling an immutable infrastructure pattern for continuous integration and deployment.
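
To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the Dockerfile contents, image tag, and resource limits are illustrative assumptions rather than anything prescribed by Docker itself.

    import io
    import docker  # Docker SDK for Python: pip install docker

    # Hypothetical Dockerfile: dependencies are pinned once, at build time,
    # so every environment runs the same artifact.
    dockerfile = b"""
    FROM python:2.7
    RUN pip install flask==0.10.1
    CMD ["python", "-m", "SimpleHTTPServer", "8000"]
    """

    client = docker.from_env()
    client.images.build(fileobj=io.BytesIO(dockerfile), tag="myapp:1.0")

    # Run the image as a container with isolated, capped resources.
    container = client.containers.run(
        "myapp:1.0", detach=True, name="myapp",
        mem_limit="256m", ports={"8000/tcp": 8000},
    )
    print(container.short_id)

Scaling out is then just a matter of running more containers from the same immutable image.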

Docker disrupts Operating Systems 

The big news in late 2014 was the announcement by CoreOS, the minimalist operating system distribution designed to simplify container deployment at scale, that it will pursue its own containerization technology called Rocket.  It has been very clear for some time that Docker and CoreOS were on a collision course, but this announcement, coinciding with DockerCon, sent a clear message about the current state of affairs: Docker and CoreOS are not friends anymore.  The two open source companies are competing for the same customers because, without an operations platform, Docker containers are ultimately just a developer tool.

Competition is always healthy for emerging technologies, and we now have a two-horse race.  The unquestionable leader in this race is Docker, which has by far the larger share of the open source development community, mindshare, and partnership traction.  This is a particularly difficult situation for CoreOS because, unlike Docker, which is effectively just software that can be packaged for any Linux distribution, CoreOS is a standalone operating system.  This means that, in addition to Docker, CoreOS also competes with industry giants like Microsoft, VMware, Red Hat, and Ubuntu.  Those four giant incumbents also just happen to be Docker’s key partners.

Meanwhile, Ubuntu and Red Hat have formulated their first response by introducing some CoreOS-like capabilities in two new projects, Ubuntu Core and Project Atomic, and Microsoft announced that future versions of Windows Server will support Docker containers natively.

Docker disrupts Platform as a Service

The bigger news in late 2014 was Docker’s announcement at DockerCon of support for orchestrating multi-container distributed applications, with the introduction of Docker Swarm and Docker Compose.  Docker Swarm is a clustering system for Docker-enabled hosts, providing basic discovery of hosts and scheduling of Docker containers onto them.  Docker Compose provides a simple YAML-based configuration language to describe and assemble multi-container distributed applications, and is clearly what became of Docker’s acquisition of Fig.  This means the future is also quite uncertain for Platform as a Service.
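
To give a flavor of that YAML-based description, here is a minimal, hypothetical two-service application, loaded with PyYAML so the structure is easy to see; the service names, images, and ports are my own illustrative choices rather than anything taken from Docker’s documentation.

    import yaml  # PyYAML: pip install pyyaml

    # Illustrative multi-container description in the spirit of Compose (née Fig):
    # a web service built from a local Dockerfile, linked to a Postgres container.
    compose_yml = """
    web:
      build: .
      links:
        - db
      ports:
        - "8000:8000"
    db:
      image: postgres
    """

    services = yaml.safe_load(compose_yml)
    for name, spec in services.items():
        print(name, spec)

A single command against a description like this can then build, link, and start every container in the application.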

Cloud Foundry, OpenShift, and ActiveState have all moved quickly to incorporate basic support for Docker containers.  PaaS platforms also offer security and multi-tenancy capabilities that are valuable to large businesses.  But with its own native container orchestration, Docker can provide much of the developer-centric “git push” user experience for deploying distributed applications with less complexity than a full-blown PaaS solution.  That user experience is a prime motivator for companies exploring the PaaS option in the first place.

For now, that simplicity still comes with tradeoffs in functionality, since Docker Swarm and Docker Compose are still alpha-quality software.

Docker disrupts Virtualization, Private Cloud, and Configuration Management

For years now, virtual machine images have often been abused as the de facto mechanism for packaging and deploying distributed applications.  From an IT operations perspective, virtual machines offer superb resource isolation, security, and stability.  But virtual machines are also full-blown operating system instances that require full management, each with its own storage and network settings, dependencies, and environment-specific configuration.  Virtual machine images are large and unwieldy live filesystems (or, these days, collections of files).  Once deployed, virtual machines tend to “drift” from their original pristine state, as they are modified from time to time by software, scripts, and human beings to suit the requirements of the moment.  Indeed, the need to manage an explosion in the number of virtual servers under management, the so-called “VM sprawl” problem, has helped configuration management systems like Chef, Puppet, Ansible, and Salt become fundamental tools of the system administration trade.  It is also unclear where developer responsibility ends and system administrator responsibility begins with a virtual machine, frequently making troubleshooting an “all hands on deck” experience.

You have probably heard that containers and virtualization are perfectly complementary – that is certainly true.  However, containers have less performance overhead, use resources more efficiently, and are faster to deploy than full-blown virtual machines.  Containers have another very important advantage over virtual machines: they are software-defined, created from metadata that specifies their composition.  Together, these characteristics enable a powerful pattern for managing distributed applications – immutable infrastructure.  In the immutable infrastructure paradigm, minimalist bare-metal operating systems are automatically discovered, configured, and assigned container images through automation.  Containers are created and provisioned when needed, and destroyed when altered or no longer needed.  The delineation of responsibility between IT and development teams is crystal clear: IT runs the ships, and developers run the containers.  In 2015 we will begin to see many early adopters standardize on fully automated container-to-metal architectures for their private cloud offerings.
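
As a rough sketch of what “destroyed when altered or no longer needed” means in practice, the following uses the Docker SDK for Python; the container name and image tag are hypothetical, and a real rollout would add health checks and load-balancer coordination.

    import docker  # Docker SDK for Python: pip install docker
    from docker.errors import NotFound

    client = docker.from_env()

    def redeploy(name, image_tag):
        """Immutable-infrastructure rollout: never patch a running container
        in place; destroy it and recreate it from a freshly built image."""
        try:
            old = client.containers.get(name)
            old.stop()
            old.remove()
        except NotFound:
            pass  # first deployment: nothing to replace
        return client.containers.run(image_tag, detach=True, name=name)

    # e.g. redeploy("web", "myapp:1.1")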

Yet for all those advantages, IT will continue to struggle with the management of virtual machines for the foreseeable future.  There are a number of very good reasons, many of which have to do with legacy applications and multi-tenancy.  For a long time to come there will be (mostly legacy) applications that simply do not work well in containers, so it will still take years before containers can claim to be the dominant model.

Docker disrupts Infrastructure as a Service

Any cloud provider with an appropriate base system image can host Docker containers, but cloud providers are moving quickly to make them easier to manage.  Amazon introduced Docker support in Elastic Beanstalk, Google has Container Engine in the Google Cloud Platform, CenturyLink introduced Panamax, and DigitalOcean now offers CoreOS as a base image.  Expect that in 2015 we will see a whole new set of container-centric APIs and cloud services emerge from these providers.

Hybrid Clouds 

Hardware refresh cycles, coupled with the demand for more agility in IT services, are finally causing large businesses to take a serious look at hybrid cloud architectures.  Many businesses continue to operate infrastructure that has long outlived its planned lifespan, and virtually every IT leader I have spoken with in the last few months is looking to leverage public cloud as part of their overall IT strategy.  The bad news is that IT teams remain constrained by legacy infrastructure, regulatory and compliance issues, and a long list of security concerns.  This is more or less the same set of barriers that has been stalling public cloud adoption in the enterprise for several years.

Add to that list the complexity of managing multiple clouds and vendor relationships, along with the difficulty of migrating workloads and data between clouds, and it’s easy to understand why businesses have taken their time getting to hybrid.  For hybrid cloud in particular, this complexity may ultimately motivate businesses to acquire cloud services through an intermediary like a cloud service brokerage or marketplace, which provides tools to manage multiple clouds in a single user experience.  Right now, development and test use cases still predominate for hybrid clouds, and that is likely to remain the case throughout 2015.

Automation

Automation is probably the mantra I have heard repeated most among IT professionals and leadership in the last half of 2014, and it goes hand in hand with the goal of enabling self-service for users.  Businesses everywhere are still struggling to scale IT operations under constant budget pressure.  The only way to get more work done with fewer people is to automate.  Continuous integration and deployment is also a very common goal among the engineering and operations teams I have spoken with recently.  Along with Docker itself, some very nice tools like Shippable are emerging to take advantage of containerization.

In 2015, I expect that we will see some of the existing automation frameworks like Puppet, Chef, Salt, and Ansible develop features to handle difficult tasks like bare-metal hardware orchestration (several of these already have some capabilities in this respect).  We call this getting the infrastructure to “ready-state”, which is the first moment the infrastructure is available to accept a workload after first-time hardware provisioning.

The problem with third-party automation tools is that they have great difficulty keeping pace with hardware vendors, who are constantly refactoring and shipping new products, hardware revisions, firmware updates, and management software.  Ultimately, it is most likely the hardware vendors themselves who will have to deliver superior automation and orchestration.

This brings us to our last major disruptive trend, converged infrastructure:

Converged Infrastructure

For virtually as long as IT has been supporting businesses, the primary function of the IT organization has been to evaluate, acquire, and integrate different hardware and software to create business systems.  And, since the dawn of the PC era, avoiding hardware vendor lock-in has been a primary design consideration.  As web-scale architectures have matured and become widely known, and with the advent of virtualization (containers included), the days of IT acting as its own hardware integrator may be coming to an end.  Virtualization of any kind means that IT teams can be less concerned about hardware vendor lock-in, since they can quickly re-deploy these systems without downtime.  As discussed before, modern IT teams are becoming primarily concerned with creating and maintaining software.  The imperative to move quickly in response to customer and market trends means there is less lead time than ever.  What if deploying infrastructure on-premise were no more complicated than assembling Lego blocks?

Converged infrastructure, or what Gartner calls “rack based computing”, is the inevitable culmination of many of the trends discussed herein.  Converged systems help ordinary businesses create web-scale infrastructure by delivering pre-integrated compute, network, storage, and virtualization.  For on-premise IT, as well as service providers, converged infrastructure is probably the most exciting development in a dozen years.  Back in 2007, I joined a hot young start-up called 3Leaf Systems that was creating a converged network, storage, and compute fabric.  It is a great example of exactly how long these technology trends can take to come to fruition.

Today, every major hardware vendor has a converged infrastructure line of business, and there are a number of start-ups doing very well in this space (like Nutanix).  In 2015, we can expect to see a lot of vendor activity in this area as the next generation of these systems comes to market.

Moving Forward

No doubt about it, early adopters of these technologies will have something of a wild ride in 2015 and beyond.  Even so, early investments are likely to yield significant ROI for companies seeking better agility and lower costs.  Are you an IT leader, and are these trends and others on your radar?  I would love to learn about it, so please drop me a note on LinkedIn.  I hope that you had an excellent holiday to start your New Year, and I wish you a very successful 2015.

A PaaSing Trend: Why enterprises will finally adopt PaaS in 2015

If there is one universal truth in IT, it is that businesses are always slower to adopt new technology than expected.  Way back in March of 2011, Gartner declared that it would be the year of Platform as a Service (PaaS), predicting that by 2015 most enterprises would have part of their critical software leveraging PaaS directly or indirectly.  More recently, in January of 2014, 451 Research published a report suggesting that lackluster PaaS adoption may be an indication that standalone PaaS services will simply be consolidated into existing Infrastructure as a Service (IaaS) offerings.

For most businesses I meet, accelerating the application development life-cycle and simplifying operations processes remain key challenges.  In fact, the majority of customers I have spoken with in the six months since 451’s report either have active projects to develop their own internal PaaS offering, or intend to start such projects within the year.  It may be small-sample bias, but talk to enough businesses and users and patterns begin to emerge.  In summary, I believe we are seeing what may be the very beginning of a larger shift (finally) toward enterprise IT developing PaaS-like capabilities of its own.

Yet in the end, 451’s prediction may not be too far from the truth, as new technologies enable existing IaaS solutions to become more PaaS-like.  I have written previously on the adoption of containerization and the resulting consequences for PaaS and IaaS.  Since then, activity in the Docker ecosystem has continued to accelerate.  Google announced Kubernetes for running containers on the Google Cloud Platform, Amazon announced support for Docker containers in AWS Elastic Beanstalk, and Red Hat and Ubuntu both announced support for Docker.  The takeaway, then, is that vendors are moving very quickly to enhance their products and services with native support for containers, and as a result PaaS-like features will start to emerge as a standard part of IaaS offerings.

Then there is the issue of operations.  For years now, people like me have been drawing charts like the one below, which identify the trend toward platform and utility computing.  The utopian theory goes that application developers want to focus strictly on creating their core business logic, and leave the messy work of hosting and scaling the application to… no one at all.  In this future, ever-increasing layers of automation and abstraction eliminate the need for operations (and operations teams) completely, ultimately leading to the unicorn-infested utopia of NoOps.

(Chart: the evolution of IT infrastructure toward platform and utility computing)

The problem with this chart, and with this trend, has always been the NoOps aspect.  What we saw recently with the Great Code Spaces Debacle of 2014, as Trevor Pott described so succinctly, is what most IT operations teams and sysadmins understand intuitively: cloud services are not a magic pill that eliminates operations issues.  Instead, they create a number of new challenges for operations, such as data replication and migration, disaster recovery, security, and governance, that can all have dire consequences when ignored.  These challenges are significant barriers to wholesale PaaS adoption, and some of the issues Trevor Pott identifies (like lock-in) are reasons PaaS adoption has stalled relative to the earlier rose-colored forecasts.

Mitigating these risks requires more traditional systems management techniques, yet those techniques are precisely what has been driving developers toward PaaS in the first place.   Containerization, and its emerging ecosystem, will result in a new set of tools that lead to “open” PaaS architectures.  Just like the autopilot in a commercial aircraft, in the “Open PaaS” world, operations teams will be able to disengage the automation to analyze, troubleshoot, and correct infrastructure issues in real time.   This means that PaaS solutions like OpenShift and Cloud Foundry, or their successors, will have to do a lot more to create tools that offer administrators command, control, and logging.  OpenShift is a step in the right direction, and SaltStack already provides a lot of this tooling right out of the box.  When PaaS solutions start to look a lot more like open architectures, we will see enterprises moving quickly to enable their development teams, creating PaaS-like features for developers without PaaS-like lock-in.

Why Docker and Containerization is a key enabling technology for PaaS

Last week at the Red Hat Summit in San Francisco, CEO Jim Whitehurst called out containerization during his keynote as a stunning new development in open source, declaring that last year he “didn’t even know what Docker was”.  Mr. Whitehurst was in good company, no doubt, since nothing like Docker’s meteoric rise has been seen in open source in quite some time.

Building on its previously announced support for Docker in RHEL and OpenShift, Red Hat reaffirmed its commitment to Docker, announcing Project Atomic, an innovative new deployment management solution leveraging Docker containers (and competing with Andreessen Horowitz-backed startup CoreOS).  Red Hat also announced geard, a command-line client and agent for linking Docker containers with systemd for OpenShift Origin.

Technology purists will be quick to point out that containerization is nothing new when it comes to virtualization.  Indeed, OpenVZ, the predecessor to Linux Containers (LXC), has been in wide use since 2005, and Solaris and the BSD derivatives have supported their own containerization implementations for many years.  But Docker’s popularity has less to do with technological advancement, and more to do with a fundamental shift in the operations model for IT.  Docker’s tools for container management are arguably much more developer-friendly than previous containerization solutions, and those benefits could hardly come at a better time.

Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) are infrastructure paradigms that each promised to reduce the operational complexity of IT infrastructure.  They did the opposite.  Ask your neighborhood CIO or CTO, and you will find that software developers are spending increasing amounts of their time managing software deployments instead of writing software.  Add to that the increasing frequency of change driven by Agile development processes, and the new technology complexity introduced by IaaS and PaaS, and you have identified the pain that will (finally) drive the adoption of containers as a de facto means of deploying software.

The adoption of a de facto containerization standard will have widespread implications, and many positive impacts.  Containerization has always had some obvious technical advantages, including better resource utilization and control of guest operating systems.  One of the more nuanced benefits, however, has to do with the abstraction of the operating system and its resource dependencies from the application and its dependencies.  This latter benefit helps enable an idealized workflow called immutable infrastructure, where the state of the application configuration and dependencies is preserved from development, through testing, and on into production.  Conceptually, immutable infrastructure is accomplished through deployment automation using introspection of the container itself, or through more traditional monolithic orchestration templates.  Either way, the difference is that containers provide a clean separation of concerns between development and operations dependencies.  In summary, changes are no longer made to production; changes are made to containers, and containers have a finite life-cycle that is optimized for developer productivity and operational simplicity.
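
To illustrate the introspection-driven flavor of that automation, here is a small sketch using the Docker SDK for Python; the image tag is hypothetical, and a real deployment pipeline would do far more than publish ports.

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    # Introspect the image metadata (the SDK's equivalent of `docker inspect`)
    # and let the container describe its own deployment requirements.
    image = client.images.get("myapp:1.0")   # hypothetical tag
    config = image.attrs["Config"]
    exposed = list((config.get("ExposedPorts") or {}).keys())  # e.g. ["8000/tcp"]

    # Publish whatever ports the image declares, rather than hard-coding them
    # in an environment-specific template; None lets Docker choose host ports.
    container = client.containers.run(
        image.id, detach=True, ports={port: None for port in exposed},
    )
    print(container.name, exposed)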

A whole new generation of platforms is already emerging.  Flynn.io is a new PaaS based on Docker that promises better productivity for developers, hackability, and optimized DevOps.  Mitchell Hashimoto, the creator of Vagrant, has been promoting the immutable infrastructure concept for some time and is now developing Serf, a service discovery and orchestration solution that can be used with Docker.  Finally, the aforementioned CoreOS has developed a lightweight Linux distribution that is Just Enough Operating System to manage Docker deployments.  These solutions each tackle a tough unsolved problem – container orchestration and the creation of container-based services – albeit with very different approaches.