
Continuous Deployment of a Dockerized Rails Application

I believe continuous deployment is a worthy goal for every development team. The practice requires discipline and minimizes the risks associated with releasing large swaths of new code into the wild. I agree with the mantra that deployments shouldn’t be a big deal; that they should be easy to roll back from, and require little ceremony.


This tutorial is the last in a series of three that demonstrates how to use Docker in development, continuous integration, and continuous deployment. Follow all three and you’ll have a system where you can push code to master, watch your tests run in CI, and then have your app deployed to production. All within a matter of minutes, all within the same containerized environment.


This tutorial demands a lot. First, you need a Rails application that can be built with Docker Compose. If you don’t have one then please see the first article in the series, “Docker for an Existing Rails Application”. Second, your Rails app should be set up for continuous integration (preferably with CircleCI). That is the topic of the second article in the series, “Continuous Integration of a Dockerized Rails Application”. This article builds on the other two, so it assumes you are working with systems similar to what they outline. If you need a webapp to work with, or just want to get your hands on the code from all three tutorials, then check out the sample application on GitHub.

Since we’re doing continuous deployment you’ll also need a server to deploy to. For me that’s a VPS on Digital Ocean built from a “one-click app”. Lastly, you need a place to push and pull Docker images (a Docker registry). I use Docker Hub. You can use whatever provider you like for these services; in my experience the ones mentioned here work well.

Continuous deployment with Docker on CircleCI

To get started, overwrite your circle.yml with the one below. Sections that differ from the ones in the continuous integration article are commented. Explanations of those sections are offered after the file.
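Here is a condensed sketch of such a circle.yml. Treat the details as assumptions to adapt: the repository and image names (reponame/dockerexample_app, reponame/dockerexample_nginx) follow the sample application, and the test command is a placeholder for whatever your suite runs.

```yaml
# Sketch of circle.yml (CircleCI 1.0 format); adapt names and commands
machine:
  services:
    - docker
  environment:
    # The deploy tag: CircleCI build number + short Git SHA (explained below)
    DEPLOY_TAG: ${CIRCLE_BUILD_NUM}_$(echo $CIRCLE_SHA1 | cut -c1-7)

dependencies:
  override:
    # Compile assets on the CI host so both images can copy public/assets
    - bundle install
    - bundle exec rake assets:precompile RAILS_ENV=production
    - docker-compose build

test:
  override:
    # Placeholder: run your suite however the CI article set it up
    - docker-compose run app bundle exec rspec

deployment:
  production:
    branch: master
    commands:
      # Tag the images that were built for testing
      - docker tag $(docker images | grep dockerexample_app | awk '{ print $3 }') reponame/dockerexample_app:$DEPLOY_TAG
      - docker tag $(docker images | grep dockerexample_nginx | awk '{ print $3 }') reponame/dockerexample_nginx:$DEPLOY_TAG
      # Push them to the registry
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push reponame/dockerexample_app:$DEPLOY_TAG
      - docker push reponame/dockerexample_nginx:$DEPLOY_TAG
      # Pull the fresh images to the server and restart the containers
      - bundle exec rake docker:deploy
```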

The deploy tag

In order to uniquely identify the build we define DEPLOY_TAG as an environment variable.

An example deploy tag is 75_db4b44b. The number to the left of the underscore is the CircleCI-assigned build number. The characters to the right of the underscore are the first seven of the Git SHA1 commit under test. The CIRCLE_* environment variables used to create the deploy tag are provided by CircleCI during each build.
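The tag construction can be reproduced in plain shell; the CIRCLE_* values below are stand-ins for what CircleCI injects during a build:

```shell
# Stand-in values; CircleCI sets these for you during a build
CIRCLE_BUILD_NUM=75
CIRCLE_SHA1=db4b44b8f3c2a1d9e0b7c6a5d4e3f2a1b0c9d8e7

# Build number, an underscore, then the first seven characters of the SHA
DEPLOY_TAG="${CIRCLE_BUILD_NUM}_$(echo "$CIRCLE_SHA1" | cut -c1-7)"
echo "$DEPLOY_TAG"   # 75_db4b44b
```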

The deploy tag is important. It’s what we use to tag our Docker images and it’s what the deploy script uses to pull the correct images to the server. Most important, it’s what Docker Compose uses to properly start containers.

Compiling assets

In production we want the web server (Nginx) to serve our compiled assets, not the application server (Unicorn). Both servers, however, need to be aware of the same compiled asset set. That’s because the Rails application (run by Unicorn) needs the asset pipeline-generated manifest (e.g. public/assets/.sprockets-manifest-*.json) to correctly output asset URLs, and Nginx needs the actual compiled assets to serve. When Unicorn and Nginx run in the same container this is a non-issue because they can share the same public/assets directory. Sharing becomes difficult, however, when the servers run in different containers (our case). After experimenting with data volumes, I found that the easiest way to share a set of assets between containers is to simply compile them once at image build time and copy them into both images. That is what is happening in the dependencies section of circle.yml.

First we install our application’s gems on the CI server so we can run rake. Then we compile our assets so they are output to the public/assets directory. Finally we build our Docker images. As seen previously, our Dockerfile and Dockerfile-nginx both copy the public directory into the built image. With each image having its own copy of the same compiled assets everything works as expected; Nginx can properly respond to requests for the fingerprinted-asset URLs output by Unicorn.
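For reference, the image-side half of that sharing is a single instruction in each file; the destination path here is an assumption from the sample app:

```dockerfile
# In both Dockerfile and Dockerfile-nginx: bake the precompiled assets
# (already present in public/ on the CI host) into the image
COPY public /var/www/dockerexample/public
```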

The deployment section

The deployment: section of circle.yml will not execute unless all tests pass. There can be multiple named sections under deployment:. The one run for the branch under test depends on the branch: directive. This flexibility allows you to have different deploy configurations for different environments (staging, production, etc.). In our case we’re only concerned with the master branch.

The commands: section is where we define our deploy sequence. First we tag the images that were built for testing with the DEPLOY_TAG. Next we push the tagged images to our Docker registry. Finally we run the deploy script, a rake task that gets the freshly pushed images up and running on our server. Most of this should be obvious, but there are a few things worth mentioning:

  • $(docker images | grep dockerexample_app | awk '{ print $3 }') is a shell expression that parses the output of docker images to get the image ID.
  • reponame/dockerexample_app:$DEPLOY_TAG is the full name of your tagged image. “reponame” should be changed to the name of the repository in your registry that you are pushing to.
  • docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS logs you into the registry so you can push. The DOCKER_* environment variables should be defined under your CircleCI project settings.

Deploying Docker images with Rake and SSHKit

The last step under commands: is to run rake docker:deploy. The docker:deploy task is defined in a file named lib/tasks/deploy.rake. Create that file now and paste in the following:
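What follows is a sketch of that file rather than a verbatim copy; the image names (reponame/…) and the exact task breakdown are assumptions, but the shape, SSHKit’s on/within/execute driven by the environment variables listed below, matches the description:

```ruby
# lib/tasks/deploy.rake -- a sketch; adapt image names and paths
require "sshkit"
require "sshkit/dsl"
include SSHKit::DSL

server = SSHKit::Host.new(
  hostname: ENV["SERVER_HOST"],
  port:     ENV.fetch("SERVER_PORT", "22").to_i,
  user:     ENV["SERVER_USER"]
)
deploy_env  = ENV.fetch("DEPLOY_ENV", "production")
deploy_tag  = ENV.fetch("DEPLOY_TAG")
deploy_path = ENV.fetch("DEPLOY_PATH", "/home/#{ENV['SERVER_USER']}")

namespace :deploy do
  desc "Copy the production compose file to the server"
  task :configs do
    on server do
      upload! "config/containers/docker-compose.production.yml",
              "#{deploy_path}/docker-compose.production.yml"
    end
  end
end

namespace :docker do
  desc "Pull the tagged images and restart the containers"
  task deploy: ["deploy:configs"] do
    on server do
      within deploy_path do
        with rails_env: deploy_env, deploy_tag: deploy_tag do
          execute :docker, :pull, "reponame/dockerexample_app:#{deploy_tag}"
          execute :docker, :pull, "reponame/dockerexample_nginx:#{deploy_tag}"
          execute "docker-compose", "-f", "docker-compose.production.yml", "up", "-d"
        end
      end
    end
  end
end
```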

These are plain ol’ Rake tasks written with the DSL provided by the SSHKit gem. If you’ve ever worked with Capistrano then it probably looks familiar. That’s because SSHKit is part of the Capistrano project; it’s the gem that gives Capistrano the ability to execute commands on remote servers.

Required gems

In order to run the script you’ll need to add the following to your application’s Gemfile:
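At minimum that means SSHKit itself; the version constraint here is an assumption:

```ruby
# Gemfile
gem "sshkit", "~> 1.9"
```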

Environment variables

Below is a table of all the environment variables used by the deploy script. The source column shows where I set the value for each. You could easily use a different source such as a .env file or arguments to the rake docker:deploy command.

variable | purpose | source
DEPLOY_TAG | controls which images are managed | circle.yml
DEPLOY_ENV | sets the RAILS_ENV for the deployment; defaults to “production” | default
SERVER_HOST | the domain name or IP of the remote server the script should deploy to | CircleCI settings
SERVER_PORT | the SSH port number on SERVER_HOST | CircleCI settings
SERVER_USER | the user to SSH in as on SERVER_HOST | CircleCI settings
DEPLOY_PATH | where the deploy script commands should be run; defaults to /home/$SERVER_USER | default
DOCKER_USER | username at the Docker registry | CircleCI settings
DOCKER_EMAIL | email of DOCKER_USER at the Docker registry | CircleCI settings
DOCKER_PASS | password of DOCKER_USER | CircleCI settings

Why not use Capistrano?

In essence Capistrano is SSHKit plus a workflow for managing deploys. It is possible to customize the workflow to deploy Docker images but in my experience you end up redefining most of the useful workflow tasks and disabling many others. Overall I felt like I was fighting the workflow instead of leveraging it to meet my needs. I think that’s because Capistrano’s workflow is meant for traditional deployments where a copy of your application is cloned to a remote server and the server provides the application’s runtime environment. It’s not built for dockerized apps whose only server requirement is a running Docker Engine. Docker image deployments are much simpler, and using SSHKit + Rake instead of full-blown Capistrano reflects this fact.

Docker Compose for production

The deploy:configs task is relied upon by most of the other tasks in deploy.rake. It copies one file to the server, config/containers/docker-compose.production.yml. Create that file now with the following:
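A sketch of what that file might contain; the service names, log paths, and the reponame/ prefix are assumptions carried over from earlier in the series:

```yaml
# config/containers/docker-compose.production.yml (sketch)
app:
  image: reponame/dockerexample_app:${DEPLOY_TAG}   # pulled, never built
  env_file: .env
  volumes:
    - ./log:/var/www/dockerexample/log              # persist logs across deploys
  links:
    - db
web:
  image: reponame/dockerexample_nginx:${DEPLOY_TAG}
  ports:
    - "80:80"
  volumes:
    - ./log/nginx:/var/log/nginx
  links:
    - app
db:
  image: postgres:9.4
```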

Every task in deploy.rake that runs docker-compose uses this configuration file. It’s just like the docker-compose.yml we created previously, but with two key differences:

  1. We don’t specify a build: for any of the containers. Instead we specify the tagged image: that we built in CI, pushed to the registry, and pulled to the server.
  2. We persist everything under the log directory in a volume so that logs aren’t lost between deploys.

The first difference is what makes our continuous deployment pipeline work. It’s everything we’ve been working towards. The second difference will help preserve files that can be useful for troubleshooting problems in production. Finally, remember the use of the env_file: directive under app:. As explained before, .env is used to store configuration information for your application. The file was first created in your development environment, where it was .gitignore’d. In CI it is generated by the dependencies/override section of circle.yml. For production you will need to create this file by hand in the $DEPLOY_PATH of your server. For security, be sure to set permissions on the file with chmod 600 .env. If you are not using a .env file you can remove the env_file: directive from docker-compose.production.yml.
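Creating that file on the server might look like the following; the keys inside .env are placeholders, and a temp directory stands in for $DEPLOY_PATH:

```shell
# A temp directory stands in for the real $DEPLOY_PATH on the server
DEPLOY_PATH="${DEPLOY_PATH:-$(mktemp -d)}"
cd "$DEPLOY_PATH"

# Placeholder keys; use whatever your application actually reads
cat > .env <<'EOF'
RAILS_ENV=production
SECRET_KEY_BASE=replace-with-a-real-secret
EOF

# Restrict the file to its owner: it holds credentials
chmod 600 .env
```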

Rolling back a deploy

At this point in the tutorial you should have continuous deployment from CI up and running successfully. That’s awesome, but what happens when something goes wrong? In that case you’ll likely want to revert to a previously deployed image. Luckily that’s easy to do with these steps:

  1. SSH into your server
  2. Export useful variables, such as DC_FILE=docker-compose.production.yml and the DEPLOY_TAG of the release you are replacing
  3. Get a list of all known images, minus the ones last deployed, with docker images | grep -v $DEPLOY_TAG
  4. Copy the tag you want to roll back to (most likely the top TAG; you can confirm by checking the age of the image in the CREATED column)
  5. Stop the running application with docker-compose -f $DC_FILE stop
  6. [OPTIONAL] Roll back migrations with docker-compose -f $DC_FILE run app rake db:rollback STEP=X, where “X” is the number of migrations you need to roll back
  7. Run the old version of your application with DEPLOY_TAG=79_e44de36 docker-compose -f $DC_FILE up -d. Make sure DEPLOY_TAG is set to whatever you copied in step 4.
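Put together, a rollback session on the server might look like this; both tags are examples, so substitute your own:

```shell
# Steps 2 through 7 in one place; tags are examples
export DC_FILE=docker-compose.production.yml
export DEPLOY_TAG=80_1a2b3c4                 # the tag currently running
docker images | grep -v $DEPLOY_TAG          # find an older tag to return to
docker-compose -f $DC_FILE stop
docker-compose -f $DC_FILE run app rake db:rollback STEP=1   # optional
DEPLOY_TAG=79_e44de36 docker-compose -f $DC_FILE up -d
```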


Software applications are a reflection of the teams that create them. Like any team, company, or person, applications are constantly breaking down, being repaired, having new things added, and old things removed. For the humans involved this is a continual process, not something that occurs on a set schedule every few weeks. Software applications should work the same way, and that vision becomes reality when continuous integration is followed by continuous deployment.

Got questions or feedback? I want it. Drop your thoughts in the comments below or hit me @ccstump. You can also follow me on Twitter to be aware of future articles.

Thanks for reading!


4/15/16 Updated the “Docker Compose for production” section to discuss the .env file.

4/11/16 I edited the post to remove the pushing and pulling of the DB image built by CI. We don’t customize this image so we can use the one directly from Docker Hub. This greatly speeds up the deploy.

Published in devops


  1. Ben

    Have to say, your three articles were nearly perfect!

    One thing I’d like to understand: I’ve created an image on Docker Hub. I’d like to be able to pull it and restart from scratch somewhere else. The thing is, when you pull, the environment variables aren’t sent (Unicorn then fails). I was wondering how this could be achieved correctly.

    • Great to hear you had success with the article series! Check out the “Docker Compose for production” section of this article again. I revised it to discuss the .env file. Thanks for letting me know about your trouble; hope you’re up and running soon.

      • ben

        got it
        it is easy to get lost in the docker docs; you made it all very easy to understand
        thx a lot again!

        • Happy to help. Stay tuned; I’m working on another article about securing a dockerized app with SSL. It plays nicely with this article series.

      • ben

        sure will!

  2. Diego Dillenburg Bueno

    First of all, awesome guides! They’re really helping starters on the docker world.
    But, while trying to follow it, I ran across a problem in the deploy section: my build keeps failing at the db migration step. Could you give me some help with it?
    It says it can’t connect to the db:
    PG::ConnectionBad: could not connect to server: Connection refused
    Is the server running on host “db” ( and accepting
    TCP/IP connections on port 5432?

    Thanks in advance!

    • Diego Dillenburg Bueno

      I have cleaned my VPS of any existing images and it turned out to work afterwards. The only thing that isn’t clear to me is why I’m getting a bad gateway error in my local development environment. Should I change the server_name to localhost? If so, I’d have to change it again for deploy, right? Or tweak it so my deploy machine has a copy of the right nginx file? Any tips for that?

      Thanks in advance.

      • Thanks for the kind words! Whenever I run into a “bad gateway” error it’s usually because Nginx is up but Unicorn is down. Unicorn goes down without grace when Docker doesn’t stop properly. When that happens Unicorn usually leaves behind a stale .pid file in tmp/pids/ that prevents it from starting the next time you docker-compose up. Delete that .pid file and Unicorn will start. Then Nginx can connect to Unicorn and the “bad gateway” error will go away. Hope that helps!

  3. Danny

    Skimmed through this post but it’s the best one I’ve found on the internets on CD for Rails apps running on Docker. Thank you!!!!!!

    • That’s great to hear, thank you for the kind words!

  4. Adam

    hey Chris, nice read! Can you tell me how long it takes to deploy this example? I expect the time should be pretty much the same as with Capistrano, but there should be a benefit to creating a successful image on staging and then just pulling it on production; it could save a lot of time.

    • It really depends on the app, Adam. In this example we are building the image from scratch on CI and then deploying. I’d say that takes longer than a regular Capistrano deploy, but the server setup is much less complicated, Docker gives you a predictable environment, and the deployment is much more hands-off. Using pre-built images, say ones used on staging that you promote to production, should be faster and simpler.

  5. Augusto

    Really helpful articles Chris!

    I’d really like to read an article about continuous deployment with zero downtime.

    Thank you very much!

    • Thanks Augusto! Yes, I’d love to know more about using Docker with zero-downtime deploys too. The only way I can think of doing it is to deploy to a “cold” server (i.e. one that is not routed to by the primary domain), and, after the deploy completes, switch DNS to that cold server (i.e. make it “hot”). That would work for the simple case.

  6. Chris

    Hi Chris,

    Great articles, thanks. The SSHKit piece was new to me.

    How would you suggest going about scaling apps that have followed your guides? At the moment I’m deploying to a single DO droplet.

    I would like, through CI and CircleCI, to deploy to a horizontally scaling environment.

    • Thanks for the kind words! I’m not sure how I’d use this CD setup to deploy to a “horizontal scaling environment”. Let me know if you figure it out!

    • That’s a great follow-up to my post. All the stuff I forgot to mention! Thanks for sharing, Philippe.

  7. Horris

    Hi Chris,

    Wondering if you could shed some light on how I would get Sidekiq running in the production environment.

    I added this to my docker-compose.production.yml (which I upgraded to V2):

    # service configuration for sidekiq
    sidekiq:
      build:
        context: .
      volumes:
        - sidekiq-data:/var/www/xxx/
      links:
        - db
        - redis
      command: bundle exec sidekiq


    But then I get the following when SSHKit is doing a ‘docker-compose up’:

    INFO [d4efb377] Running /usr/bin/env docker-compose -f docker-compose.production.yml up -d as
    rake aborted!
    SSHKit::Runner::ExecuteError: Exception while executing as docker-compose exit status: 1
    docker-compose stdout: Nothing written
    docker-compose stderr: Building sidekiq

    I suspect it’s to do with the:

    context: .

    There is no Dockerfile, etc., in the deploy_path where the compose command is being executed?

    • You’re right, it’s most likely the build context. Unless you want to roll your own you shouldn’t need to build the Redis image. Instead, use the official image by replacing build: with image: redis:alpine. Also, I’ve had better luck with getting Docker (Compose) to cleanly stop my containers by writing commands like command: [ bundle, exec, sidekiq ].

  8. MItch

    It appears that with SSHKit version 1.11.5 (and perhaps back to 1.9.0) SSHKit::DSL must be included in deploy.rake:

    require 'sshkit'
    require 'sshkit/dsl'
    include SSHKit::DSL

    to avoid this error:

    NoMethodError: undefined method `on’ for main:Object

    • Dan

      Thanks! I was wondering why I was getting that.
