Automating application deployments across clouds with Salt and Docker

If you have not had the chance to work with Salt yet, it is an exciting new configuration management system: easy to get up and running, powerful enough to support distributed command execution and complex configuration management, and scalable enough to support thousands of servers simultaneously.

Recently, I wrote about how a de facto containerization standard will enable a whole new generation of management tools.  Back in January, SaltStack announced several awesome new features in the Salt 2014.1.0 (Hydrogen) release, including support for the life-cycle management of Docker containers.  SaltStack’s Docker support is still very early, and Docker itself does not yet consider itself production-ready (tell that to Yelp and Spotify), but together these tools offer an out-of-the-box solution for getting started with immutable infrastructure.

The deployment and management of an application across multiple virtual machines, using multiple public clouds, is a use case that would have been considered categorically “hard” just a year ago.  Companies like Google, Facebook, and Ning spent many years developing this kind of orchestration technology internally in order to deal with their scale challenges. Today, using Docker containers together with Salt-Cloud and a few Salt states, we can do this from scratch with a few tens of minutes of effort.  And, because we are using Salt’s declarative configuration management, we can scale this pattern to actually operate our production environment.

Use Case


The core use case is one or more application containers which we want to deploy on one or more virtual machines, using one or more public cloud providers.

For the sake of simplicity, we will restrict this use case to:

  • Assume some familiarity with Salt
  • Assume some familiarity with Docker
  • Assume that you have a Salt master already installed
  • Assume that you want to do this on a single public cloud, using Digital Ocean (since adding new clouds to Salt-Cloud is dead simple)
  • Simulate a real-world application with a dummy apache service

Demo Use Case


In order to simulate a real-world application, we will create a Docker container running the Apache web server.  Conceptually, this container could be a front-end proxy, a middle-tier web service, a database, or virtually any other type of service we might need to deploy in our production application. To do that, we simply create a Dockerfile in a directory in the normal way, build the container image, and push it to the Docker registry.

Step 1: Create a Dockerfile

root@salt-master:/some/dir/apache# cat Dockerfile
# A basic apache server. To use either add or bind mount content under /var/www
FROM ubuntu:12.04

MAINTAINER Kimbro Staken version: 0.1

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*

# apache2 reads these from the environment when launched directly (not via apachectl)
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Step 2: Build the container

root@salt-master:/some/dir/apache# docker build -t jthomason/apache .
Uploading context  2.56 kB
Uploading context
Step 0 : FROM ubuntu:12.04
 ---> 1edb91fcb5b5
Step 1 : MAINTAINER Kimbro Staken version: 0.1
 ---> Using cache
 ---> 534b8974c22c
Step 2 : RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 7d24f67a5573
Successfully built 527ad6962e09

Step 3: Push the container

root@salt-master:/tmp/apache# docker push jthomason/apache
The push refers to a repository [jthomason/apache] (len: 1)
Pushing tag for rev [527ad6962e09] on {}

With our demo application successfully pushed to the Docker registry, we are now ready to orchestrate its deployment.  As mentioned previously, we assume you have a Salt master installed somewhere; if not, you’ll need to follow the documentation to get one installed.  The next step is to configure Salt-Cloud for your choice of public cloud provider.  Configuring Salt-Cloud is simple: we need to create an SSH key pair that Salt-Cloud will use to install the Salt Minion on newly created VMs, add that key pair to our public cloud account, and create a Salt-Cloud configuration file with the API credentials for our cloud.

Step 4: Create an SSH Key Pair

root@salt-master:/etc/salt/keys# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa): digital-ocean-salt-cloud
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in digital-ocean-salt-cloud.
Your public key has been saved in digital-ocean-salt-cloud.pub.
The key fingerprint is:
06:8f:6f:e1:97:5a:5a:48:ce:09:f3:b6:33:42:48:9a root@salt-master
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|      .          |
|    .  +         |
|   + .+ S        |
|  E . o.@ + .    |
|     .  @ =      |
|      .ooB       |
|       .+o       |
+-----------------+
root@salt-master:/etc/salt/keys#

Step 5: Upload SSH Key Pair
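The public key just generated must be added to your Digital Ocean account so that new droplets are created with it and salt-cloud can log in to bootstrap the minion. The control panel is the simplest route; as a rough sketch, the same thing can be done against the v1 Digital Ocean API of the era, reusing the client_key and api_key credentials configured in Step 6 below:

    # Hypothetical sketch: add the public key to the account via the v1 API
    curl -G "https://api.digitalocean.com/ssh_keys/new/" \
         --data-urlencode "name=digital-ocean-salt-cloud" \
         --data-urlencode "ssh_pub_key=$(cat /etc/salt/keys/digital-ocean-salt-cloud.pub)" \
         -d "client_id=<YOUR KEY HERE>" -d "api_key=<YOUR API KEY HERE>"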


With Digital Ocean now holding our public key, the next steps before provisioning are to configure salt-cloud with the Digital Ocean API credentials for our account, and to define profiles for the virtual machine sizes, geographic locations, and images we want to use.  The salt-cloud credentials configuration is kept on the salt-master in /etc/salt/cloud.providers.d/, while the profiles for each public cloud are kept in /etc/salt/cloud.profiles.d/.  See the salt-cloud documentation for more details on configuration options.

Step 6: Configure Salt-Cloud

# /etc/salt/cloud.providers.d/digital_ocean.conf
do:
  provider: digital_ocean
  # Digital Ocean account keys
  client_key: <YOUR KEY HERE>
  api_key: <YOUR API KEY HERE>
  # Directory & file name on your Salt master
  ssh_key_file: /etc/salt/keys/digital-ocean-salt-cloud

# /etc/salt/cloud.profiles.d/digital_ocean.conf
# Official distro images available for Arch, CentOS, Debian, Fedora, Ubuntu
ubuntu_512MB_sf1:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 512MB
#  script: Optional Deploy Script Argument
  location: San Francisco 1
  script: curl-bootstrap-git
  private_networking: True

ubuntu_1GB_ny2:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 1GB
#  script: Optional Deploy Script Argument
  location: New York 2
  script: curl-bootstrap-git
  private_networking: True

ubuntu_2GB_ny2:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 2GB
#  script: Optional Deploy Script Argument
  location: New York 2
  script: curl-bootstrap-git
  private_networking: True
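Before provisioning anything, it is worth confirming the credentials work; salt-cloud can query Digital Ocean directly for the images, sizes, and locations referenced in the profiles above (do is the provider name defined in cloud.providers.d):

    salt-cloud --list-images do
    salt-cloud --list-sizes do
    salt-cloud --list-locations do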

It is now time to configure Salt to provision our application container.  To do that, we need to create two Salt states: one to provision Docker on a newly created VM, and another to provision the application container.  Salt states are Salt’s declarative configuration definitions, executed on target hosts by the salt-minion.  States are an incredibly rich feature of Salt, one that can hardly be covered in sufficient detail in a tutorial like this.  This example is not a particularly smart or optimal use of states, but it is simple; you’ll want to read up on Salt states to develop the best practices for your environment.

Step 7: Create a Salt state for Docker

docker-python-apt:
  pkg.installed:
    - name: python-apt

docker-python-pip:
  pkg.installed:
    - name: python-pip

docker-py:
  pip.installed:
    - name: docker-py
    - repo: git+https://github.com/dotcloud/docker-py.git
    - require:
      - pkg: docker-python-pip

docker-dependencies:
  pkg.installed:
    - pkgs:
      - iptables
      - ca-certificates
      - lxc

docker-repo:
  pkgrepo.managed:
    - repo: 'deb https://get.docker.io/ubuntu docker main'
    - file: '/etc/apt/sources.list.d/docker.list'
    - key_url: salt://docker/docker.pgp
    - require_in:
      - pkg: lxc-docker
    - require:
      - pkg: docker-python-apt
      - pkg: docker-python-pip

lxc-docker:
  pkg.latest:
    - require:
      - pkg: docker-dependencies


This first Salt state defines the dependencies and configuration for installing Docker on a newly created VM.
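For the salt-minion to pick these states up automatically, they must be mapped to hosts in the top file. A minimal sketch, assuming this state is saved as docker.sls and the container state from Step 8 below as apache.sls under /srv/salt (the '*' target applies them to every minion; narrow it for a real environment):

    # /srv/salt/top.sls
    base:
      '*':
        - docker
        - apache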

Step 8: Create Salt state for the application container

jthomason/apache:
  docker.pulled:
    - name: jthomason/apache
    - require_in:
      - docker: apache-container

apache-container:
  docker.installed:
    - name: apache
    - hostname: apache
    - image: jthomason/apache
    - require_in:
      - docker: apache

apache:
  docker.running:
    - container: apache
    - port_bindings:
        "80/tcp":
            HostIp: ""
            HostPort: "80"
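Assuming the two states are mapped in the top file as sketched earlier, a VM bootstrapped by salt-cloud will pull the image and start the container on its first highstate run; the same can be triggered by hand from the master for any existing minion:

    salt '<minion-id>' state.highstate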

Now that configuration is complete, we are ready to provision 1..n virtual machines, each with a running instance of our application container.  Before we do that, let us first verify that the Salt master is actually working. We know there is at least one minion that should be talking to the salt-master at this point: the one running on the salt-master itself.

Step 9: Verify that Salt is working

root@salt-master:~# salt '*' test.ping
salt-master:
    True
root@salt-master:~#

Satisfied that everything is in working order with the Salt installation, we can now provision our first virtual machine with an instance of our container using salt-cloud, passing the profile we defined earlier along with a name for the new VM (here, apache1).

Step 10: Provision a VM with an instance of the container

root@salt-master:~# salt-cloud --profile ubuntu_512MB_sf1 apache1
[INFO    ] salt-cloud starting
[INFO    ] Creating Cloud VM apache1
[INFO    ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/deploy/

After the salt-cloud run completes, it emits a YAML blob containing information about the newly created VM instance.  Let’s use the IP address of the instance to see if our application is running.

Step 11: Verify application is running 
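A request to the instance’s public IP (taken from that YAML blob) should come back with Apache’s default response; a quick check with curl, substituting your instance’s address, would look something like this:

    root@salt-master:~# curl -I http://<instance-ip>/
    HTTP/1.1 200 OK
    Server: Apache/2.2.22 (Ubuntu)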


Great success!

We have established the basic setup and management pattern for our infrastructure.  Adding additional public clouds is easy, thanks to salt-cloud, which provides a single control interface for our entire application infrastructure.  But where to go from here?  A starting point is to consider how Salt states can be used to manage VM and container life-cycles in the context of the overall continuous integration and deployment process; I plan to share some thoughts on that in a future post.  Obviously, a lot of thought should be given to your specific objectives, since those will ultimately determine the deployment and operations architecture for your application.  However, Salt is an incredibly powerful tool that, when combined with Docker, provides a declarative framework for managing the application life-cycle in the immutable-infrastructure paradigm right out of the box.  That versatility puts a whole lot of miles behind you, allowing you to focus on the other core challenges of application deployment and operations.
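One concrete next step: salt-cloud map files describe an entire fleet declaratively, so the single-VM pattern above extends to many VMs, profiles, and clouds with one command. A minimal sketch, reusing the profile names from Step 6 with hypothetical VM names:

    # /etc/salt/cloud.map
    ubuntu_512MB_sf1:
      - web1
      - web2
    ubuntu_1GB_ny2:
      - web3

Provisioning the whole map in parallel is then a single command: salt-cloud -m /etc/salt/cloud.map -P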


Why Docker and Containerization is a key enabling technology for PaaS

Last week at the Red Hat Summit in San Francisco, CEO Jim Whitehurst called out containerization during his keynote as a stunning new development in open source, declaring that last year he “didn’t even know what Docker was”.  Mr. Whitehurst was in good company, no doubt, since nothing like Docker’s meteoric rise has been seen in open source in quite some time.

Building on its previously announced support for Docker in RHEL and OpenShift, Red Hat reaffirmed its commitment to Docker by announcing Project Atomic, an innovative new deployment management solution leveraging Docker containers (and competing with the Andreessen Horowitz-backed startup CoreOS).  Red Hat also announced geard, a command-line client and agent for linking Docker containers with systemd for OpenShift Origin.

Technology purists will be quick to point out that containerization is nothing new in the world of virtualization.  Indeed, OpenVZ, the predecessor to Linux Containers (LXC), has been in wide use since 2005, and Solaris and the BSD derivatives have supported their own containerization implementations for many years. But Docker’s popularity has less to do with technological advancement, and more to do with a fundamental shift in the operations model for IT. Docker’s tools for container management are arguably much more developer-friendly than previous containerization solutions, and those benefits could hardly come at a better time.

Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) are infrastructure paradigms that each promised to reduce the operational complexity of IT infrastructure.  They did the opposite.  Ask your neighborhood CIO or CTO, and you will find that software developers are spending increasing amounts of their time managing software deployments instead of writing software.  Add to that the increasing frequency of change driven by Agile development processes, plus the new technology complexity introduced by IaaS and PaaS, and you have identified the pain that will (finally) drive the adoption of containers as a de facto means of deploying software.


The adoption of a de facto containerization standard will have widespread implications, and many positive impacts. Containerization has always had some obvious technical advantages, including better resource utilization and control of guest operating systems.  One of the more nuanced benefits, however, has to do with abstracting the operating system and its resource dependencies from the application and its dependencies. This latter benefit helps enable an idealized workflow called immutable infrastructure, where the state of the application configuration and dependencies is preserved from development, through testing, and on into production. Conceptually, immutable infrastructure is accomplished through deployment automation, using either introspection of the container itself or more traditional monolithic orchestration templates. Either way, the difference is that containers provide a clean separation of concerns between development and operations dependencies. In summary: changes are no longer made to production; changes are made to containers, and containers have a finite life-cycle that is optimized for developer productivity and operational simplicity.


A whole new generation of platforms is already emerging. New PaaS offerings built on Docker promise better productivity for developers, hackability, and optimized DevOps.  Mitchell Hashimoto, the creator of Vagrant, has been promoting the immutable infrastructure concept for some time and is now developing Serf, a service discovery and orchestration solution that can be used with Docker.  Finally, the aforementioned CoreOS has developed a lightweight Linux distribution that is Just Enough Operating System to manage Docker deployments.  These solutions each tackle a tough unsolved problem, container orchestration and the creation of container-based services, albeit with very different approaches.