I believe continuous deployment is a worthy goal for every development team. The practice requires discipline and minimizes the risks associated with releasing large swaths of new code into the wild. I agree with the mantra that deployments shouldn’t be a big deal; that they should be easy to roll back from, and require little ceremony.
This tutorial is the last in a series of three that demonstrates how to use Docker in development, continuous integration, and continuous deployment. Follow all three and you’ll have a system where you can push code to master, watch your tests run in CI, and then have your app deployed to production. All within a matter of minutes, all within the same containerized environment.
Prerequisites
This tutorial demands a lot. First, you need a Rails application that can be built with Docker Compose. If you don't have one then please see the first article in the series, "Docker for an Existing Rails Application". Second, your Rails app should be set up for continuous integration (preferably with CircleCI). That is the topic of the second article in the series, "Continuous Integration of a Dockerized Rails Application". This article builds off the other two, so it assumes you are working with systems similar to what they outline. If you need a webapp to work with, or just want to get your hands on the code from all three tutorials, then check out the sample application on GitHub.
Since we're doing continuous deployment you'll also need a server to deploy to. For me that's a VPS on Digital Ocean that is built from a "one-click app". Lastly, you need a place to push and pull Docker images (a Docker registry). I use Docker Hub. You can use whatever provider you like for these services; in my experience the ones mentioned here work well.
Continuous deployment with Docker on CircleCI
To get started overwrite your circle.yml with the one below. Sections that differ from the ones in the continuous integration article are commented. Explanations of those sections are offered after the file.
```yaml
machine:
  timezone: America/Chicago
  services:
    - docker
  environment:
    RAILS_ENV: test
    # unique identifier used to tag the docker images; fundamental to the deploy script
    DEPLOY_TAG: ${CIRCLE_BUILD_NUM}_${CIRCLE_SHA1:0:7}

dependencies:
  override:
    - docker info
    - |
      cat << VARS > .env
      SECRET_KEY_TEST=$SECRET_KEY_TEST
      POSTGRES_DB=$POSTGRES_DB
      POSTGRES_USER=$POSTGRES_USER
      VARS
    # compile assets; assets will be copied into both the web server image and the application server image
    - bundle install
    - bundle exec rake assets:precompile
    - docker-compose build

database:
  pre:
    - docker-compose up -d
    - sleep 1
  override:
    - docker-compose run app rake db:create db:schema:load

test:
  override:
    - docker-compose run app rspec spec

# configure image deployment
# https://circleci.com/docs/configuration#deployment
deployment:
  # unique name of this deployment configuration
  hub:
    # name of the branch(es) this deployment section applies to
    branch: master
    commands:
      # tag the images we built with a repository name and deploy identifier
      - docker tag $(docker images | grep dockerexample_app | awk '{ print $3 }') reponame/dockerexample_app:$DEPLOY_TAG
      - docker tag $(docker images | grep dockerexample_web | awk '{ print $3 }') reponame/dockerexample_web:$DEPLOY_TAG
      # log into Docker Hub so we can push our tagged images
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      # push tagged images
      - docker push reponame/dockerexample_app:$DEPLOY_TAG
      - docker push reponame/dockerexample_web:$DEPLOY_TAG
      # run deploy script
      - bundle exec rake docker:deploy
```
The deploy tag
In order to uniquely identify the build we define DEPLOY_TAG as an environment variable.
```yaml
DEPLOY_TAG: ${CIRCLE_BUILD_NUM}_${CIRCLE_SHA1:0:7}
```
An example deploy tag is 75_db4b44b. The number to the left of the underscore is the CircleCI-assigned build number. The characters to the right of the underscore are the first seven of the Git SHA1 commit under test. The CIRCLE_* environment variables used to create the deploy tag are provided by CircleCI during each build.
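You can see the same bash parameter expansion at work in a plain shell session; the values below are made up stand-ins for what CircleCI injects during a real build:

```shell
# Stand-in values for the variables CircleCI provides (made up here)
CIRCLE_BUILD_NUM=75
CIRCLE_SHA1=db4b44b2a9c8e7f6d5c4b3a2918075645342f1e0

# Same expansion as in circle.yml: build number, underscore, first 7 SHA characters
DEPLOY_TAG="${CIRCLE_BUILD_NUM}_${CIRCLE_SHA1:0:7}"
echo "$DEPLOY_TAG"   # prints 75_db4b44b
```

Note that the `${VAR:0:7}` substring syntax is a bash feature, which is fine here because CircleCI runs its commands under bash.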
The deploy tag is important. It’s what we use to tag our Docker images and it’s what the deploy script uses to pull the correct images to the server. Most important, it’s what Docker Compose uses to properly start containers.
Compiling assets
In production we want the web server (Nginx) to serve our compiled assets, not the application server (Unicorn). Both servers, however, need to be aware of the same compiled asset set. That’s because the Rails application (run by Unicorn) needs the asset pipeline-generated manifest (e.g. public/assets/.sprockets-manifest-*.json) to correctly output asset URLs, and Nginx needs the actual compiled assets to serve. When Unicorn and Nginx run in the same container this is a non-issue because they can share the same public/assets directory. Sharing becomes difficult, however, when the servers run in different containers (our case). After experimenting with data volumes, I found that the easiest way to share a set of assets between containers is to simply compile them once at image build time and copy them into both images. That is what is happening in the dependencies section of circle.yml.
```yaml
- bundle install
- rake assets:precompile
- docker-compose build
```
First we install our application’s gems on the CI server so we can run rake. Then we compile our assets so they are output to the public/assets directory. Finally we build our Docker images. As seen previously, our Dockerfile and Dockerfile-nginx both copy the public directory into the built image. With each image having its own copy of the same compiled assets everything works as expected; Nginx can properly respond to requests for the fingerprinted-asset URLs output by Unicorn.
The deployment section
The deployment: section of circle.yml will not execute unless all tests pass. There can be multiple named sections under deployment:. The one run for the branch under test depends on the branch: directive. This flexibility allows you to have different deploy configurations for different environments (staging, production, etc.). In our case we're only concerned with the master branch.
The commands: section is where we define our deploy sequence. First we tag the images that were built for testing with the DEPLOY_TAG. Next we push the tagged images to our Docker registry. Finally we run the deploy script, a rake task that gets the freshly pushed images up and running on our server. Most of this should be obvious, but there are a few things worth mentioning:
- $(docker images | grep dockerexample_app | awk '{ print $3 }') is a shell expression that parses the output of docker images to get the image ID.
- reponame/dockerexample_app:$DEPLOY_TAG is the full name of your tagged image. “reponame” should be changed to the name of the repository in your registry that you are pushing to.
- docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS logs you into the registry so you can push. The DOCKER_* environment variables should be defined under your CircleCI project settings.
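To make the first bullet concrete, here is a sketch that runs the same grep/awk parse over canned `docker images` output (the repository names, tags, and IDs below are made up for illustration):

```shell
# Canned `docker images` output; in circle.yml the real command queries the Docker daemon
sample='REPOSITORY          TAG     IMAGE ID      CREATED      SIZE
dockerexample_app   latest  86d1fb04a0fa  2 hours ago  844 MB
dockerexample_web   latest  233dd82fe12d  2 hours ago  211 MB'

# Match the row for the app image and print its third column, the image ID
app_id=$(echo "$sample" | grep dockerexample_app | awk '{ print $3 }')
echo "$app_id"   # prints 86d1fb04a0fa
```

That image ID is what `docker tag` then associates with the full `reponame/dockerexample_app:$DEPLOY_TAG` name.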
Deploying Docker images with Rake and SSHKit
The last step under commands: is to run rake docker:deploy. The docker:deploy task is defined in a file named lib/tasks/deploy.rake. Create that file now and paste in the following:
```ruby
# use SSHKit directly instead of Capistrano
require 'sshkit'
require 'sshkit/dsl'

# set the identifier used to tag our Docker images
deploy_tag = ENV['DEPLOY_TAG']

# set the name of the environment we are deploying to (e.g. staging, production, etc.)
deploy_env = ENV['DEPLOY_ENV'] || :production

# set the location on the server where we want files copied to and commands executed from
deploy_path = ENV['DEPLOY_PATH'] || "/home/#{ENV['SERVER_USER']}"

# connect to server
server = SSHKit::Host.new hostname: ENV['SERVER_HOST'],
                          port: ENV['SERVER_PORT'],
                          user: ENV['SERVER_USER']

namespace :deploy do
  desc 'copy to server files needed to run and manage Docker containers'
  task :configs do
    on server do
      upload! File.expand_path('../../config/containers/docker-compose.production.yml', __dir__),
              deploy_path
    end
  end
end

namespace :docker do
  desc 'logs into Docker Hub for pushing and pulling'
  task :login do
    on server do
      within deploy_path do
        execute 'docker', 'login',
                '-e', ENV['DOCKER_EMAIL'],
                '-u', ENV['DOCKER_USER'],
                '-p', "'#{ENV['DOCKER_PASS']}'"
      end
    end
  end

  desc 'stops all Docker containers via Docker Compose'
  task stop: 'deploy:configs' do
    on server do
      within deploy_path do
        with rails_env: deploy_env, deploy_tag: deploy_tag do
          execute 'docker-compose', '-f', 'docker-compose.production.yml', 'stop'
        end
      end
    end
  end

  desc 'starts all Docker containers via Docker Compose'
  task start: 'deploy:configs' do
    on server do
      within deploy_path do
        with rails_env: deploy_env, deploy_tag: deploy_tag do
          execute 'docker-compose', '-f', 'docker-compose.production.yml', 'up', '-d'
          # write the deploy tag to file so we can easily identify the running build
          execute 'echo', deploy_tag, '>', 'deploy.tag'
        end
      end
    end
  end

  desc 'pulls images from Docker Hub'
  task pull: 'docker:login' do
    on server do
      within deploy_path do
        %w{dockerexample_web dockerexample_app}.each do |image_name|
          execute 'docker', 'pull', "#{ENV['DOCKER_USER']}/#{image_name}:#{deploy_tag}"
        end
        execute 'docker', 'pull', 'postgres:9.4.5'
      end
    end
  end

  desc 'runs database migrations in application container via Docker Compose'
  task migrate: 'deploy:configs' do
    on server do
      within deploy_path do
        with rails_env: deploy_env, deploy_tag: deploy_tag do
          execute 'docker-compose', '-f', 'docker-compose.production.yml',
                  'run', 'app', 'bundle', 'exec', 'rake', 'db:migrate'
        end
      end
    end
  end

  desc 'pulls images, stops old containers, updates the database, and starts new containers'
  task deploy: %w{docker:pull docker:stop docker:migrate docker:start}
  # pull images manually to reduce down time
end
```
These are plain ol’ Rake tasks written with the DSL provided by the SSHKit gem. If you’ve ever worked with Capistrano then it probably looks familiar. That’s because SSHKit is part of the Capistrano project; it’s the gem that gives the ability to execute commands on remote servers.
Required gems
In order to run the script you’ll need to add the following to your application’s Gemfile:
```ruby
group :development, :test do
  gem 'sshkit'
end
```
Environment variables
Below is a table of all the environment variables used by the deploy script. The source column shows where I set the value for each. You could easily use a different source such as a .env file or arguments to the rake docker:deploy command.
| variable | purpose | source |
| --- | --- | --- |
| DEPLOY_TAG | controls which images are managed | circle.yml |
| DEPLOY_ENV | sets the RAILS_ENV for the deployment; defaults to "production" | default |
| SERVER_HOST | the domain name or IP of the remote server the script should deploy to | CircleCI settings |
| SERVER_PORT | the SSH port number on SERVER_HOST | CircleCI settings |
| SERVER_USER | the user to SSH in as on SERVER_HOST | CircleCI settings |
| DEPLOY_PATH | where the deploy script commands should be run; defaults to /home/$SERVER_USER | default |
| DOCKER_USER | username at the Docker registry | CircleCI settings |
| DOCKER_EMAIL | email of DOCKER_USER at the Docker registry | CircleCI settings |
| DOCKER_PASS | password of DOCKER_USER | CircleCI settings |
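The two "default" rows mirror the fallback logic at the top of deploy.rake. The same idea in shell parameter-expansion form, with a placeholder SERVER_USER value:

```shell
# Clear any inherited values so the fallbacks fire
unset DEPLOY_ENV DEPLOY_PATH
SERVER_USER=deploy   # placeholder username

# Same defaults deploy.rake applies: production, and /home/$SERVER_USER
deploy_env="${DEPLOY_ENV:-production}"
deploy_path="${DEPLOY_PATH:-/home/$SERVER_USER}"
echo "$deploy_env $deploy_path"   # prints: production /home/deploy
```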
Why not use Capistrano?
In essence Capistrano is SSHKit plus a workflow for managing deploys. It is possible to customize the workflow to deploy Docker images but in my experience you end up redefining most of the useful workflow tasks and disabling many others. Overall I felt like I was fighting the workflow instead of leveraging it to meet my needs. I think that’s because Capistrano’s workflow is meant for traditional deployments where a copy of your application is cloned to a remote server and the server provides the application’s runtime environment. It’s not built for dockerized apps whose only server requirement is a running Docker Engine. Docker image deployments are much simpler, and using SSHKit + Rake instead of full-blown Capistrano reflects this fact.
Docker Compose for production
The deploy:configs task is relied upon by most of the other tasks in deploy.rake. It copies one file to the server, config/containers/docker-compose.production.yml. Create that file now with the following:
```yaml
# service configuration for production database (Postgres)
db:
  # use stock postgres image
  image: postgres:9.4.5
  # persist the database between containers by storing it in a volume
  volumes:
    - docker-example-postgres:/var/lib/postgresql/data

# service configuration for production application server (Unicorn)
app:
  # use the application server image pulled from Docker Hub
  image: reponame/dockerexample_app:$DEPLOY_TAG
  # sources environment variable configuration for our app
  env_file: .env
  # rely on the RAILS_ENV value of the host machine
  environment:
    RAILS_ENV: $RAILS_ENV
  # makes the app container aware of the db container
  links:
    - db
  # persist logs between containers by storing them in a volume
  volumes:
    - docker-example-logs:/var/www/docker_example/log
  # open webapp port for containers only
  expose:
    - '3000'

# service configuration for production web server (Nginx)
web:
  # use the web server image pulled from Docker Hub
  image: reponame/dockerexample_web:$DEPLOY_TAG
  # makes the web container aware of the app container
  links:
    - app
  # makes the web container aware of the log volume set up by the app container
  volumes_from:
    - app
  # expose on the host the port we configured Nginx to bind to
  ports:
    - "80:80"
```
Every task in deploy.rake that runs docker-compose uses this configuration file. It’s just like the docker-compose.yml we created previously, but with two key differences:
- We don’t specify a build: for any of the containers. Instead we specify the tagged image: that we built in CI, pushed to the registry, and pulled to the server.
- We persist everything under the log directory in a volume so that logs aren’t lost between deploys.
The first difference is what makes our continuous deployment pipeline work. It's everything we've been working towards. The second difference will help preserve files that can be useful for troubleshooting problems in production. Finally, note the use of the env_file: directive under app:. As explained before, .env is used to store configuration information for your application. The file was first created in your development environment, where it was .gitignore'd. In CI it is generated by the dependencies/override section of circle.yml. For production you will need to create this file by hand in the $DEPLOY_PATH of your server. For security, be sure to set permissions on the file with chmod 600 .env. If you are not using a .env file you can remove the env_file: directive from docker-compose.production.yml.
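A sketch of creating that file by hand on the server; the keys and values below are placeholders (use your app's real configuration), and a temp directory stands in for the real $DEPLOY_PATH:

```shell
# Temp directory standing in for the real $DEPLOY_PATH on the server
deploy_path=$(mktemp -d)

# Write placeholder configuration; substitute your app's real keys and secrets
cat > "$deploy_path/.env" << 'VARS'
SECRET_KEY_BASE=replace-me
POSTGRES_DB=dockerexample_production
POSTGRES_USER=dockerexample
VARS

# Lock the file down since it holds credentials: owner read/write only
chmod 600 "$deploy_path/.env"
ls -l "$deploy_path/.env"   # permissions column shows -rw-------
```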
Rolling back a deploy
At this point in the tutorial you should have continuous deployment from CI up and running successfully. That’s awesome, but what happens when something goes wrong? In that case you’ll likely want to revert to a previously deployed image. Luckily that’s easy to do with these steps:
- SSH into your server
- Export useful variables:

```shell
# should be value set during deploy
export DEPLOY_PATH=$HOME
# deploy.tag is created by deploy:start during deploy
export DEPLOY_TAG=$(cat $DEPLOY_PATH/deploy.tag)
# makes for easy typing below
export DC_FILE=$DEPLOY_PATH/docker-compose.production.yml
# should be value of $DEPLOY_ENV during deploy
export RAILS_ENV=production
```

- Get a list of all known images, minus the last one deployed, with docker images | grep -v $DEPLOY_TAG. You'll see output like this:

```
REPOSITORY         TAG         IMAGE ID      CREATED       SIZE
ccstump/turks_web  79_e44de36  233dd82fe12d  18 hours ago  211.1 MB
ccstump/turks_app  79_e44de36  86d1fb04a0fa  18 hours ago  844.2 MB
ccstump/turks_db   79_e44de36  70c863fe8cf2  6 weeks ago   263.1 MB
```

- Copy the tag you want to roll back to (most likely the top TAG; you can confirm by looking at the age of the image in the CREATED column)
- Stop the running application with docker-compose -f $DC_FILE stop
- [OPTIONAL] Roll back migrations with docker-compose -f $DC_FILE run app rake db:rollback STEP=X, where "X" is the number of migrations you need to roll back
- Run the old version of your application with DEPLOY_TAG=79_e44de36 docker-compose -f $DC_FILE up -d. Make sure DEPLOY_TAG is set to whatever you copied in step 4.
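The tag-picking in steps 3 and 4 can be sketched against canned output. The current tag and image IDs below are made up; the 79_e44de36 rows come from the example listing above:

```shell
# Canned `docker images` output after a bad deploy tagged 80_a1b2c3d (hypothetical)
current_tag=80_a1b2c3d
images='ccstump/turks_web  80_a1b2c3d  aaa111bbb222  1 hour ago    211.1 MB
ccstump/turks_app  80_a1b2c3d  ccc333ddd444  1 hour ago    844.2 MB
ccstump/turks_web  79_e44de36  233dd82fe12d  18 hours ago  211.1 MB
ccstump/turks_app  79_e44de36  86d1fb04a0fa  18 hours ago  844.2 MB'

# Drop rows for the current (bad) tag, then take the newest remaining tag
rollback_tag=$(echo "$images" | grep -v "$current_tag" | awk '{ print $2 }' | head -n 1)
echo "$rollback_tag"   # prints 79_e44de36
```

You would then start the old build as in step 7: DEPLOY_TAG=$rollback_tag docker-compose -f $DC_FILE up -d.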
Conclusion
Software applications are a reflection of the teams that create them. Like any team, company, or person, applications are constantly breaking down, being repaired, having new things added, and old things removed. For the humans involved this is a continual process, not something that occurs on a set schedule every few weeks. Software applications should work the same way, and that vision becomes reality when continuous integration is followed by continuous deployment.
Got questions or feedback? I want it. Drop your thoughts in the comments below or hit me @ccstump. You can also follow me on Twitter to be aware of future articles.
Thanks for reading!
Addendum
4/15/16 Updated the “Docker Compose for production” section to discuss the .env file.
4/11/16 I edited the post to remove the pushing and pulling of the DB image built by CI. We don’t customize this image so we can use the one directly from Docker Hub. This greatly speeds up the deploy.
Have to say your three articles were nearly perfect!
One thing I'd like to understand: I've created an image on Docker Hub. I'd like to be able to pull it and restart from scratch somewhere else. The thing is, when you pull, the environment variables aren't sent (Unicorn then fails). I was wondering how this could be achieved correctly.
Great to hear you had success with the article series! Check out the “Docker Compose for production” section of this article again. I revised it to discuss the .env file. Thanks for letting me know about your trouble; hope you’re up and running soon.
got it
it is easy to get lost in the docker docs; you made it all very easy to understand
thx a lot again!
Happy to help. Stay tuned; I’m working on another article about securing a dockerized app with SSL. It plays nicely with this article series.
sure will!
First of all, awesome guides! They're really helping starters in the Docker world.
But, while trying to follow it I have run across a problem on the deploy section, my build keeps failing on the db migration part. Could you give me some help with it?
It says it can’t connect to the db:
```
PG::ConnectionBad: could not connect to server: Connection refused
  Is the server running on host "db" (172.17.0.2) and accepting
  TCP/IP connections on port 5432?
```
Thanks in advance!
I have cleaned my VPS from any existing images and it turned out to work afterwards. Only one thing that isn’t clear to me is why I’m getting bad gateway on my local development environment, should I change the server_name to localhost? If so, I’d have to change it again for deploy, right? Or tweak it so my deploy machine has a copy of the right nginx file? Any tips for that?
Thanks in advance.
Thanks for the kind words! Whenever I run into a "bad gateway" error it's usually because Nginx is up but Unicorn is down. Unicorn goes down without grace when Docker doesn't stop properly. When that happens Unicorn usually leaves behind a tmp/pids/unicorn.pid file that prevents it from starting the next time you docker-compose up. Delete that .pid file and Unicorn will start. Then Nginx can connect to Unicorn and the "bad gateway" error will go away. Hope that helps!

Skimmed through this post but it's the best one I've found on the internets on CD for Rails apps running on Docker. Thank you!!!!!!
That’s great to hear, thank you for the kind words!
hey Chris, nice read! can you tell me how long does it take to deploy this example? I expect the time should be pretty much the same as by using capistrano, but there should be a benefit of creating a successful image on staging and then just pulling it from production – it could save a lot of time.
It really depends on the app Adam. In this example we are building the image from scratch on CI and then deploying. I’d say that takes longer than a regular capistrano deploy, but the server setup is much less complicated, Docker gives you a predictable environment, and the deployment is much more hands-off. Using pre-built images — say ones used on staging that you promote to production — should be faster and simpler.
Really helpful articles Chris!
I really would like to read some article about continuous deployment with zero-downtime.
Thank you very much!
Thanks Augusto! Yes, I’d love to know more about using Docker with zero-downtime deploys too. The only way I can think of doing it is to deploy to a “cold” server (i.e. one that is not routed to by the primary domain), and, after the deploy completes, switch DNS to that cold server (i.e. make it “hot”). That would work for the simple case.
Hi Chris,
Great articles thanks, the SSH kit piece was new to me.
How would you suggest going about scaling apps that have followed your guides? At the moment I'm deploying to a single DO droplet.
I would like, through CI and CircleCI, to deploy to a horizontally scaling environment.
Thanks for the kind words! I’m not sure how I’d use this CD setup to deploy to a “horizontal scaling environment”. Let me know if you figure it out!
Thank you so much for your tutorials, they are great. After following it to deploy a rails app to digitalocean, I wrote a blog post detailing all the specific difficulties I encountered : http://philippe.bourgau.net/continuously-deliver-a-rails-app-to-your-digital-ocean-box-using-docker/.
That’s a great follow up to my post. All the stuff I forgot to mention! Thanks for sharing Philippe.
Hi Chris,
Wondering if you could shed some light on how I would get Sidekiq running in the production environment.
I added this to my docker-compose.production.yml (which I upgraded to V2):
```yaml
# service configuration for sidekiq
sidekiq:
  build:
    context: .
  volumes:
    - sidekiq-data:/var/www/xxx/
  links:
    - db
    - redis
  command: bundle exec sidekiq

volumes:
  sidekiq-data:
```
But then I get the following when SSHKit is doing a docker-compose up:

```
INFO [d4efb377] Running /usr/bin/env docker-compose -f docker-compose.production.yml up -d as root@xxx.xxx.xxx
rake aborted!
SSHKit::Runner::ExecuteError: Exception while executing as root@xxx.xxx.xxx: docker-compose exit status: 1
docker-compose stdout: Nothing written
docker-compose stderr: Building sidekiq
```
I suspect it's to do with the:

```yaml
build:
  context: .
```

There is no Dockerfile, etc. in the deploy_path where the compose command is being executed?
You're right, it's most likely the build context. Unless you want to roll your own you shouldn't need to build the Redis image. Instead, use the official image by replacing build: with image: redis:alpine. Also, I've had better luck getting Docker (Compose) to cleanly stop my containers by writing commands like command: [bundle, exec, sidekiq].

It appears that with SSHKit version 1.11.5 (and perhaps back to 1.9.0) SSHKit::DSL must be included in deploy.rake:
```ruby
require 'sshkit'
require 'sshkit/dsl'
include SSHKit::DSL
# ...
```

to avoid this error:

```
NoMethodError: undefined method `on' for main:Object
```
Thanks! I was wondering why I was getting that.