A few weeks ago I decided to take an existing Ruby on Rails application and configure it to work with Docker. I wanted a container-based development environment that fed into a continuous integration to continuous deployment pipeline. The hope was that this style of development would eliminate the differences in environment you typically find between work laptops running OS X and staging, testing, and production servers running Linux. I also wanted to simplify server configuration and maintenance, and make it super easy for another person to jump on the project. Docker delivers on all these hopes, but with so many different ways to use Docker it’s hard to craft a good setup. I wanted an optimal setup that lets me work with Rails the way I’m used to while delivering on the promise of containerization. This article will discuss how to make that happen, and is the first of three I plan to write on spinning up Docker-based development for a CI to CD pipeline.
Note to the reader
For your convenience, all of the code in this article is available online at GitHub. The article assumes you’re comfortable with Rails, Docker, Docker Compose, and the command line. It will not dive into Docker basics. There are tons of articles written on that subject. If you’ve never worked with Docker then I suggest you begin by following the excellent getting started guide and browsing the docs. Once you’ve gained an understanding of Docker terminology and tools then come back here to learn how to make your existing Rails application multi-container and production deployable. Finally, the application used in this article is named “docker_example”. You should change that to your application’s real name throughout the examples.
The stack and architecture
I run my Rails apps with a pretty standard stack: Nginx at the front for serving static assets and simple load balancing, Unicorn in the middle for application processing, and Postgres in the back for data storage. Docker, more specifically Docker Compose, is used to tie these three services together by orchestrating their communication and creating a multi-container, deployable application. By multi-container I mean that each service runs in its own container and communicates with other services via TCP/IP. This is more complex than if I created one container that runs all three services. A multi-container architecture, however, provides a better separation of concerns and adheres to the UNIX philosophy of “every tool (container) should do one thing, and do it well.” Ultimately, a multi-container architecture makes it easy to replace one service with another simply by swapping containers.
Step 1: dockerize your Rails app
First we need to tell Docker how to build the image that will run our Rails app. To do that, create a “Dockerfile” at the root of your Rails application with the following contents:
```dockerfile
# Base our image on an official, minimal image of our preferred Ruby
FROM ruby:2.2.3-slim

# Install essential Linux packages
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev postgresql-client

# Define where our application will live inside the image
ENV RAILS_ROOT /var/www/docker_example

# Create application home. App server will need the pids dir so just create everything in one shot
RUN mkdir -p $RAILS_ROOT/tmp/pids

# Set our working directory inside the image
WORKDIR $RAILS_ROOT

# Use the Gemfiles as Docker cache markers. Always bundle before copying app src.
# (the src likely changed and we don't want to invalidate Docker's cache too early)
# http://ilikestuffblog.com/2014/01/06/how-to-skip-bundle-install-when-deploying-a-rails-app-to-docker/
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock

# Prevent bundler warnings; ensure that the bundler version executed is >= that which created Gemfile.lock
RUN gem install bundler

# Finish establishing our Ruby environment
RUN bundle install

# Copy the Rails application into place
COPY . .

# Define the script we want run once the container boots
# Use the "exec" form of CMD so our script shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "config/containers/app_cmd.sh" ]
```
There are certain files and directories we don’t want copied over with the COPY . . command. We can ignore them with a .dockerignore at the root of the Rails app. Add to it the following:
```
.git
.env
.dockerignore
```
Step 2: configure the application server
The last line of our Dockerfile references a script that does not yet exist. We need to create it, so go ahead and create a directory under config/ named “containers” and paste the following into a script named “app_cmd.sh”:
```bash
#!/usr/bin/env bash
# Prefix `bundle` with `exec` so unicorn shuts down gracefully on SIGTERM (i.e. `docker stop`)
exec bundle exec unicorn -c config/containers/unicorn.rb -E $RAILS_ENV
```
Make sure it is executable; run chmod 775 on it for good measure. Also make sure you’ve included the unicorn gem in your Gemfile.
The script contains the command we want Docker to run when it initializes a container from our image. It will start the Unicorn server that will process our source code. We put the command in a script because we want the $RAILS_ENV environment variable honored at runtime (i.e. when the container starts). This will not work if we put the command directly in our Dockerfile. That’s because Docker strips out environment variables from the build host in order to keep builds consistent across all the different platforms it supports.
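As a minimal sketch (plain Ruby, not part of the app itself), this is the behavior we’re relying on: the environment is read when the process starts inside the container, so values supplied at run time via docker run -e or Compose’s environment:/env_file: decide which Rails environment Unicorn boots.

```ruby
# Illustrative sketch: ENV is consulted when the container's CMD runs,
# not when the image is built, so RAILS_ENV can vary per deployment.
rails_env = ENV.fetch('RAILS_ENV', 'development')

# The command our app_cmd.sh script effectively assembles at boot:
unicorn_cmd = %W[bundle exec unicorn -c config/containers/unicorn.rb -E #{rails_env}]
```

Baking the RAILS_ENV value into the image at build time would tie the image to one environment, which is exactly what we want to avoid.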
Our Unicorn server is configured with the -c option, which references a file we have to create. Open a file named config/containers/unicorn.rb and add the following:
```ruby
# Where our application lives. $RAILS_ROOT is defined in our Dockerfile.
app_path = ENV['RAILS_ROOT']

# Set the server's working directory
working_directory app_path

# Define where Unicorn should write its PID file
pid "#{app_path}/tmp/pids/unicorn.pid"

# Bind Unicorn to the container's default route, at port 3000
listen "0.0.0.0:3000"

# Define where Unicorn should write its log files
stderr_path "#{app_path}/log/unicorn.stderr.log"
stdout_path "#{app_path}/log/unicorn.stdout.log"

# Define the number of workers Unicorn should spin up.
# A new Rails app just needs one. You would scale this
# higher in the future once your app starts getting traffic.
# See https://unicorn.bogomips.org/TUNING.html
worker_processes 1

# Make sure we use the correct Gemfile on restarts
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{app_path}/Gemfile"
end

# Speeds up your workers.
# See https://unicorn.bogomips.org/TUNING.html
preload_app true

#
# Below we define how our workers should be spun up.
# See https://unicorn.bogomips.org/Unicorn/Configurator.html
#
before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
```
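The before_fork block is the heart of zero-downtime restarts: on USR2, Unicorn renames its pid file to unicorn.pid.oldbin, and the new master sends QUIT to the old one. A standalone sketch of the pid-file side of that handshake (the helper name is illustrative, not part of Unicorn’s API):

```ruby
require 'tmpdir'

# Illustrative helper: given the configured pid path, return the pid of a
# previous master (read from the .oldbin file Unicorn leaves behind), or nil.
def old_master_pid(pid_path)
  oldbin = "#{pid_path}.oldbin"
  File.exist?(oldbin) ? File.read(oldbin).to_i : nil
end

Dir.mktmpdir do |dir|
  pid_path = File.join(dir, 'unicorn.pid')
  File.write("#{pid_path}.oldbin", "1234\n")
  old_master_pid(pid_path) # => 1234
end
```

If a pid is found and it isn’t our own, the new master knows an old master is still running and can tell it to shut down gracefully.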
At this point you should be able to run docker build -t dockerexample_app . and successfully build the Rails application Docker image. It won’t run yet because we still have to set up the database, but you should see the image listed by docker images.
Step 3: introduce Docker Compose
Since our application will be running across multiple containers it would be nice to control them all as one. That is what Docker Compose does for us. To get our app started with Docker Compose create a file docker-compose.yml in the root of your Rails app with the following contents:
```yaml
# service configuration for our dockerized Rails app
app:
  # use the Dockerfile next to this file
  build: .
  # sources environment variable configuration for our app
  env_file: .env
  # rely on the RAILS_ENV value of the host machine
  environment:
    RAILS_ENV: $RAILS_ENV
  # makes the app container aware of the DB container
  links:
    - db
  # expose the port we configured Unicorn to bind to
  ports:
    - "3000:3000"
```
In order to build from this config you’ll need to define RAILS_ENV on whatever host you plan to build on. Without it Compose will default the value to a blank string and give you a warning. You’ll also need to create a .env file at the root of your Rails app. In .env you can define environment variables and use them to configure your application. For example:
```
SECRET_KEY_BASE=06c08ac1dca74d9b20eb7bf46ba2646a9ed058f607b32d0a6df3a7c5fa9048f0318e521a3a7cfd90a2872fbfdaf9502ad1217805b608a3ec9bebddad0d56a4ab
MONEY_API_KEY=ed3daa1f192f656d37c504675bbd7b20
MONEY_API_PASSWORD=topsecret
```
It is strongly recommended that you .gitignore your .env file so that it doesn’t end up in your source control. If you don’t plan to use .env then you can get rid of the env_file: line.
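If you’re curious what Compose expects in .env, the format is one KEY=VALUE pair per line with no spaces around the =. A toy parser (illustration only; Compose does its own parsing) makes the rule concrete:

```ruby
# Toy parser for the KEY=VALUE format used by .env files (illustrative):
# blank lines and #-comments are skipped, values are taken verbatim.
def parse_env(text)
  text.each_line.with_object({}) do |line, vars|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    key, value = line.split('=', 2)
    vars[key] = value
  end
end

parse_env("SECRET_KEY_BASE=abc123\n# a comment\nMONEY_API_PASSWORD=topsecret\n")
# => {"SECRET_KEY_BASE"=>"abc123", "MONEY_API_PASSWORD"=>"topsecret"}
```

Note that splitting on the first = only means values may themselves contain = signs, which matters for keys like SECRET_KEY_BASE.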
Step 4: containerize your database
Under the links: section of our docker-compose.yml we reference a container named “db”. This will be the container we use to run our Postgres database. Creating that container is dead simple. Just add this to your docker-compose.yml:
```yaml
# service configuration for our database
db:
  # use the preferred version of the official Postgres image
  # see https://hub.docker.com/_/postgres/
  image: postgres:9.4.5
  # persist the database between containers by storing it in a volume
  volumes:
    - docker-example-postgres:/var/lib/postgresql/data
```
Next, you’ll need to update your database.yml to be similar to this:
```yaml
default: &default
  adapter: postgresql
  encoding: unicode
  host: db
  port: 5432
  username: postgres
  password: <%= ENV['POSTGRES_PASSWORD'] %>

development:
  <<: *default
  database: your_dev_db_name # CHANGE ME

test:
  <<: *default
  database: your_test_db_name # CHANGE ME

production:
  <<: *default
  database: your_production_db_name # CHANGE ME
```
ENV['POSTGRES_PASSWORD'] is blank by default and will work out of the box. As mentioned in the Postgres image docs, you can assign a value to POSTGRES_PASSWORD and the Postgres container will honor it as the password of the postgres user. The environment: and env_file: Docker Compose directives are useful for setting this value from the host.
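The <%= ENV['POSTGRES_PASSWORD'] %> interpolation works because Rails runs database.yml through ERB before parsing it as YAML. A standalone sketch of that two-step evaluation (the template and password here are illustrative):

```ruby
require 'erb'
require 'yaml'

# Rails evaluates database.yml as ERB first, then parses the result as
# YAML; that's how environment values reach the database configuration.
template = "development:\n  host: db\n  password: <%= ENV['POSTGRES_PASSWORD'] %>\n"

ENV['POSTGRES_PASSWORD'] = 's3cret' # illustrative value
config = YAML.safe_load(ERB.new(template).result)
config['development']['password'] # => "s3cret"
```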
Now run docker-compose build from the root of your app to create your application and database images. Once built, you can initialize your development DB with docker-compose run app rake db:create and then populate it however you see fit: docker-compose run app rake db:schema:load db:seed, or docker-compose run app rake db:migrate db:seed. You can even import a Postgres dump, but the details of that are beyond the scope of this article. Now we can finally run the application with docker-compose up -d. To verify the containers are up, execute docker ps. You should see output similar to this:
```
> docker ps
CONTAINER ID  IMAGE              COMMAND                 CREATED         STATUS        PORTS                   NAMES
bd0c625513dc  dockerexample_app  "config/containers/ap"  38 minutes ago  Up 3 seconds  0.0.0.0:3000->3000/tcp  dockerexample_app_1
77c2c3743864  postgres:9.4.5     "/docker-entrypoint.s"  38 minutes ago  Up 3 seconds  5432/tcp                dockerexample_db_1
```
To test your app in the browser, navigate to $DOCKER_HOST:3000, where DOCKER_HOST is the IP your Docker daemon is bound to. I installed Docker with the OS X version of Docker Toolbox via Homebrew, and out of the box my Docker daemon is bound to IP 192.168.99.100.
Once you see your app I suggest you quit the containers with docker-compose stop before continuing.
Step 5: proxy your web requests
Remember, we want our development environment to be the same as the one we deploy to production. That means we need a reverse proxy, in our case the Nginx web server, to proxy requests to Unicorn. This is required for production because Unicorn is designed to be used with fast clients, like a UNIX socket or local port, not a slow client like a web browser. Also, since Unicorn is an application server, we want it doing what it does best, which is crunching and serving our application code. We don’t want it serving static assets (i.e. .js, .css, .png, etc. files that never change). Nginx is great at serving static assets, so we want it to do that job.
In order to get Nginx into the mix we need another Docker container. To get started, add this to your docker-compose.yml:
```yaml
# service configuration for our web server
web:
  # set the build context to the root of the Rails app
  build: .
  # build with a different Dockerfile
  dockerfile: config/containers/Dockerfile-nginx
  # makes the web container aware of the app container
  links:
    - app
  # expose the port we configured Nginx to bind to
  ports:
    - "80:80"
```
We’ll also want to tweak the app configuration. Change this line:
```yaml
ports:
  - "3000:3000"
```
to this:
```yaml
expose:
  - "3000"
```
That makes it so our Unicorn port is no longer open on the host machine. Instead it will only be available to other running Docker containers (e.g. the web container we are creating). This is more secure because it reduces the number of ports your application needs open in production. By the time we are finished the only port our application will expose to the outside world is port 80 for web requests.
Notice that our Compose web configuration references a build file that does not yet exist (config/containers/Dockerfile-nginx). Go ahead and create that file now with these contents:
```dockerfile
# build from the official Nginx image
FROM nginx

# install essential Linux packages
RUN apt-get update -qq && apt-get -y install apache2-utils

# establish where Nginx should look for files
ENV RAILS_ROOT /var/www/docker_example

# Set our working directory inside the image
WORKDIR $RAILS_ROOT

# create log directory
RUN mkdir log

# copy over static assets
COPY public public/

# copy our Nginx config template
COPY config/containers/nginx.conf /tmp/docker_example.nginx

# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker_example.nginx > /etc/nginx/conf.d/default.conf

# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD [ "nginx", "-g", "daemon off;" ]
```
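The envsubst step deserves a note: it replaces only the whitelisted $RAILS_ROOT references with the value from the build environment, leaving Nginx’s own runtime $variables (like $uri) untouched. Sketched in Ruby rather than shell for clarity (the template snippet is illustrative):

```ruby
# What `envsubst '$RAILS_ROOT'` does to our template, approximated in
# Ruby: substitute the one whitelisted variable, leave the rest alone.
ENV['RAILS_ROOT'] = '/var/www/docker_example' # set by the Dockerfile
template = "root $RAILS_ROOT/public;\ntry_files $uri @rails;\n"

rendered = template.gsub('$RAILS_ROOT') { ENV['RAILS_ROOT'] }
rendered # => "root /var/www/docker_example/public;\ntry_files $uri @rails;\n"
```

Whitelisting matters: running envsubst with no argument would also try to expand $uri and friends, corrupting the Nginx config.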
Our build file, and Nginx itself, relies on a configuration file that we need to create. Open config/containers/nginx.conf in your text editor and add the following:
```nginx
# This is a template. Referenced variables (e.g. $RAILS_ROOT) need
# to be rewritten with real values in order for this file to work.
# To learn about all the directives used here, and more, see
# http://nginx.org/en/docs/dirindex.html

# define our application server
upstream unicorn {
  server app:3000;
}

server {
  # define our domain; CHANGE ME
  server_name yourproductiondomain.com;

  # define the public application root
  root $RAILS_ROOT/public;
  index index.html;

  # define where Nginx should write its logs
  access_log $RAILS_ROOT/log/nginx.access.log;
  error_log $RAILS_ROOT/log/nginx.error.log;

  # deny requests for files that should never be accessed
  location ~ /\. {
    deny all;
  }

  location ~* ^.+\.(rb|log)$ {
    deny all;
  }

  # serve static (compiled) assets directly if they exist (for rails production)
  location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    try_files $uri @rails;
    access_log off;
    gzip_static on; # to serve pre-gzipped version
    expires max;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
  }

  # send non-static file requests to the app server
  location / {
    try_files $uri @rails;
  }

  location @rails {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
  }
}
```
At this point you should be able to build all containers with docker-compose build, and then run everything with docker-compose up -d. To verify that all three containers are up and running, execute docker ps. You should see output like this:
```
> docker ps
CONTAINER ID  IMAGE              COMMAND                 CREATED         STATUS        PORTS                        NAMES
d867759237a6  dockerexample_web  "nginx -g 'daemon off"  9 minutes ago   Up 2 seconds  0.0.0.0:80->80/tcp, 443/tcp  dockerexample_web_1
333df34dc609  dockerexample_app  "config/containers/ap"  9 minutes ago   Up 2 seconds  3000/tcp                     dockerexample_app_1
345516bac081  postgres:9.4.5     "/docker-entrypoint.s"  11 minutes ago  Up 2 seconds  5432/tcp                     dockerexample_db_1
```
Final test: browse to $DOCKER_HOST and you should see your app.
Working with Docker containers in development
If you made it this far I assume you’re running your Rails application in a multi-container Docker environment that is fit for development, test, and production. Congratulations! If you were to develop with this setup, however, I think you’d quickly find that it’s a pain in the ass. That’s because every change you make to your code would require you to rebuild your image and restart your containers to see the change. That’s not how we develop with Rails. We’re used to making a change and refreshing the page to see it in action. Fortunately we can achieve the same thing with Docker by adding a new file, docker-compose.override.yml, to the root of our application with the following:
```yaml
app:
  # map our application source code, in full, to the application root of our container
  volumes:
    - .:/var/www/docker_example

web:
  # use whatever volumes are configured for the app container
  volumes_from:
    - app
```
Docker Compose automatically looks for this file and applies it on top of our docker-compose.yml configuration. That is, configuration in docker-compose.override.yml will supplement or override configuration in docker-compose.yml. This makes it very convenient for making environment-specific modifications to our Docker builds and resulting containers. To try it out run docker-compose stop && docker-compose up -d to restart your containers, make a visual change to your code as usual, and refresh the containerized app in your browser to see the change. Lastly, I like to .gitignore docker-compose.override.yml to make sure that it isn’t deployed and used elsewhere. This helps ensure only fully containerized apps run outside my dev machine.
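The merge Compose performs is roughly “per-service keys from the override win; new keys are added.” A rough Ruby approximation of the idea (Compose has more nuanced rules for lists, so treat this as a sketch, not a spec):

```ruby
require 'yaml'

# Rough approximation of how docker-compose.override.yml layers on top of
# docker-compose.yml: service-level keys from the override are merged in.
base     = YAML.safe_load("app:\n  build: .\n  expose:\n    - '3000'\n")
override = YAML.safe_load("app:\n  volumes:\n    - .:/var/www/docker_example\n")

merged = base.merge(override) { |_service, base_cfg, override_cfg| base_cfg.merge(override_cfg) }
merged['app']
# => {"build"=>".", "expose"=>["3000"], "volumes"=>[".:/var/www/docker_example"]}
```

The app service keeps its build and expose settings from docker-compose.yml and gains the volumes mapping from the override file.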
Running tests, rake tasks, and consoles
I find it easiest to always have a shell open inside my container to work with Rails like I’m used to. To get that going run docker exec -it dockerexample_app_1 /bin/bash. That gives you a minimal shell to work in. From there you can rspec spec, rake some:task, rails c, rails db, etc. Since it’s a minimal shell you won’t have all the command line goodness you’re probably used to. You can tweak the shell, however, by building into the application image anything you need. In particular you can apt-get new packages and/or COPY over shell configurations in your Dockerfile. Just be aware that doing so will increase the size of your built image, and since these containers are meant to run in production you won’t want a full-blown dev playground.
Conclusion
With the right Docker setup a software development team can get new members up and running faster than ever before. They can also ensure a consistent environment no matter where the application is run. This goes a long way in reducing time spent on bugs that result from variations between development, test, and production environments. Lastly, Rails plays nicely with Docker. With the right Docker (Compose) configuration you can easily work with the same Ruby/Rails tools you always have, and hardly change the way you already write code.
Stay tuned to this blog, or follow me on Twitter, to be aware of follow-up articles in which I’ll discuss using the Docker setup outlined here in both continuous integration and continuous deployment environments.
Got questions or feedback? I want it. Drop your thoughts in the comments below or hit me @ccstump.
Thanks for reading!
Addendum
5/9/16 Be secure! Now that your Rails app is running in a Docker container lock it down with HTTPS for free using Let’s Encrypt.
3/17/16 I’ve published the last article of the series. Read it to learn how to use your dockerized Rails application with continuous deploy.
3/3/16 Ready for more? See the second article in this series to learn how to run your freshly dockerized Rails app in continuous integration using CircleCI.
3/2/16 My use of the .env file in this article has raised questions and some confusion. Thankfully Ryan Nickel wrote a post explaining all the ways to use environment variables with Docker, and why you should consider their use for your app. Definitely worth a read.
Great post Chris.
I’m not a ruby developer, but I am a docker enthusiast and I’m always interested in how people are using it for their various environments. I love how it’s so easy to set up systems and put various pieces together. I’ve been using vagrant for a long time now and it’s certainly suited my needs thus far, but it’s VERY heavy in comparison to docker.
I wasn’t aware that docker would default to removing environment based variables during the container creation process. That’s a great nugget right there! Where did you read that in their documentation? I’m curious to read more about that.
Anyway, great post and I look forward to reading more.
Thanks Ryan! I’ve used Vagrant on other Rails projects and always found it difficult to work with: heavy on resources, slow to show changes, and not friendly to the Rails asset pipeline. Thus far I haven’t had any of those issues with Docker which has made it a joy to work with.
I can’t point to a particular section of the Docker docs that discusses the removal of host environment variables from the Docker build, but I read enough posts, Stackoverflow Q&As, and GitHub issues to have the point come across. Lots of ways to work around the issue but I think they should all be used lightly. It seems the consensus is that “introducing env in the build command creates host dependent builds”, which, in the long run, will defeat the purpose of containerizing in the first place.
Nice article and very timely for me. I have been running an app on Heroku but want to Dockerize it and move it to DigitalOcean. Heroku is awesome, but it spoiled me as far as knowing how to setup a production environment.
Couple questions;
How much of a performance hit do you see running your dev environment with nginx and unicorn over something like webrick?
And would you mind sharing a sample .env file?
Thanks Steve. I don’t see much of a performance hit running Nginx + Unicorn in development, but it is one more server to deal with. One thing I do notice is that Nginx pops up faster than Unicorn when you boot the containers. That’s expected because Unicorn has to load Rails. While it’s waiting for Unicorn Nginx will give a “502 Bad Gateway” error. Just keep refreshing and it will go away once Unicorn is up.
A simple .env file looks like this:
S3_BUCKET=YOURS3BUCKET
SECRET_KEY=YOURSECRETKEYGOESHERE
It’s just ENV_VAR=VALUE. No spaces around the = sign.
BTW, I run the setup outlined here on Digital Ocean droplet built from their 1-click Docker app and it does awesome!
Great, I’ve got it mostly working except I can’t seem to get the environment variables right. Using your example added to the .env, then docker-compose up -d, should I be able to get the following on the host console?
> echo $S3_BUCKET
YOURS3BUCKET
Not on the host console, on the app container console (accessible via docker exec -it dockerexample_app_1 /bin/bash). We configure Docker Compose to have the app container read .env when it boots. That way we can use the environment variables defined in .env to configure the Rails application run by Unicorn within the app container.

Great post, I applied a similar model to my project, with a difference: we use 2 images.
The 1st is a base image, based on ruby, with all packages and main gems.
The app image builds on the base image, just updating code and then bundling.
That helps me a lot when building images.
Great post. Looking forward to the follow-up articles.
Thank you Chris!
my pleasure, thanks for reading
I want to add something it took me a while to get a grip on: use docker-compose stop instead of Ctrl-C when you want to stop containers, to get around the “server is already running” error. If the container does not shut down gracefully, it leaves a pid file in tmp/pids that keeps the container from starting next time. All you have to do is delete it, but it can be a pain in the a… It’s an area where Rails and Docker don’t play well together in a dev environment.
Thanks again Chris for this great tutorial!
Yup, that’s a good point. To be clear it is Unicorn, not Docker, that is leaving the pid file on an ungraceful shut down.
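One way to soften that pain is to clear a stale pid file before boot. A hedged sketch you could adapt into the app_cmd.sh logic (this helper is hypothetical, not part of the article’s setup):

```ruby
# Hypothetical guard: delete tmp/pids/unicorn.pid if the process it
# names is no longer running, so the container can boot cleanly.
def clear_stale_pid(pid_file)
  return unless File.exist?(pid_file)
  pid = File.read(pid_file).to_i
  Process.kill(0, pid) # signal 0 probes existence without signaling
rescue Errno::ESRCH
  File.delete(pid_file) # no such process: the pid file is stale
end
```

Signal 0 performs the existence/permission check without delivering a signal, a common idiom for “is this pid alive?”; a live Unicorn master is left untouched.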
Glad to hear you’re up and running Steve!
Just dropping a line to say thanks, this makes a lot of things click… Did you consider making the same sort of tutorial for docker-machine / docker-swarm ? That’d be really cool !
Thanks for reading Tom. I haven’t used docker-swarm yet, and my only experience with docker-machine is through Docker Toolbox. If I dig into either tool I’ll write about it. Thanks for the encouragement!
Thanks for the in-depth guide Chris!
I’m trying to set up a simple Heroku-like git deployment process from my dev machine onto my DigitalOcean instance using Docker Compose. Any tips on automatically rebuilding the image and deploying to prod?
Thanks for reading Jonathan! Timely question. I just posted an article on continuous deployment that I think will help you. Have a read; see if you can find what you need there.
Just have another quick question, have you run into the case where you want different nginx.conf files for dev and prod?
How would one handle this problem?
Nope, I haven’t had that need. I suggest you avoid it to minimize differences between dev and prod. The more identical the environments are the better off you’ll be.
hey man,
thanks a lot for this post.
A few things I stumbled upon.
Correct indentation of the docker-compose.yml file.
And it would have been awesome if you could have posted your .env file.
As intended, it is not checked in and visible in github.
Really look forward to your next posts. I bought “Using Docker” and it is a really good book but unfortunately, the example uses python. This was where your post came into play.
thanks again
Thanks for the feedback Jonny! Glad you found the article useful. You’re one of many who wants to see a .env file so I updated step 3 with an example. Hope you find it useful. Also, I finished this series with the follow up CI and CD posts. See the addendum for links or go to the home page.
This was great, thank you! I got a little hung up on the app name for some reason (I was trying it on one of my own projects). Anyway finally got it all working and it’s working perfect.
Thanks again.
Great to hear, thanks for reading!
I’m actually running into an issue that I can’t figure out. I got it all up and running on one app however when I try it on another everything works up until the end. Doesn’t seem like docker-compose is creating the /tmp/pids dir and so unicorn fails with “directory for pid=/var/www/{my app}/tmp/pids/unicorn.pid not writable (ArgumentError)”. I have tried creating the directory manually and even added a few statements to the Dockerfile which shows it created at the end of step 2. I’m at a bit of a loss, it was a fresh app and even deleted it and started all the way over. Any thoughts?
I assume you’re running into this problem during development. Does $RAILS_ROOT/tmp/pids exist in your local copy of the source code? If not, create it and set permissions to 755. Remember that, in development, /var/www/{your app} is “overwritten” by your local copy of the source code. So you may be making /var/www/{your app}/tmp/pids explicitly in the Dockerfile but that won’t matter once the container boots and docker-compose.override.yml app/volumes maps . to /var/www/.
That was the trick, thank you again. I had tried to add the pids dir and when looking at its permissions next to the previous app they looked the same so I didn’t even think about setting them manually.
Thanks again for your help and this article!
Awesome, glad to hear that worked. Stay tuned to the blog; I’m working on an article that follows this one that’s about securing your dockerized app with SSL for free.
Your explanation work extremely well for me until the docker-compose.override.yml part. When I use your github example project and run docker-compose up (after a db:create), I get:
db_1 | ERROR: database "docker_example_dev" already exists
db_1 | STATEMENT: CREATE DATABASE "docker_example_dev" ENCODING = 'unicode'
Weird.
I also applied your explanation on a project of mine that uses Puma. It works, but when I modify a file on my local machine and refresh the browser, the change does not get reflected. docker-compose.override.yml seems to be taken into account though, and when I bash into the app container, the modification is reflected. I guess for some reason the server does not live reload, even though I am in development mode? I need to investigate that further but any idea would be appreciated.
Sorry you ran into trouble. Not sure why your persisted DB would be giving Postgres trouble. Must be a config issue. Let me know if you get Puma working!
Ok I got Puma working! The problem was not Puma. I am using Rails 5 rc1. Rails 5 introduced some “improvements” to code reloading, but it doesn’t seem to work for me with Docker on OS X. So in config/environments/development.rb, I replaced
config.file_watcher = ActiveSupport::EventedFileUpdateChecker
with
config.file_watcher = ActiveSupport::FileUpdateChecker
and it works. Serves me well for trying “cutting edge” projects. 😛
PS: it seems that for the ActiveSupport::EventedFileUpdateChecker file watcher, the change event does not occur for docker-machine shared files. So it seems to be more of a docker-machine problem.
Nice work, thanks for sharing!
Just want to say thank you for the post. I have studied various rails docker strategies and this one is so far the best I have seen.
Just one question: do you also find that the logs for the app container are empty? A symbolic link from app_cmd.sh to stdout is not working. Any hint? It makes it hard to debug the container.
Thank you! Glad you found the article useful.
So $RAILS_ENV.log is empty? I don’t have that issue. In development you should be able to access development.log from the app container and the host. In production it should be accessible from, and persisted by, the app container. I don’t use symbolic links in my setup.
Yes, I can see the logs, but “docker logs container_name” is empty. I use Kitematic and my container log is also empty. That makes it very difficult to debug. I can’t get the log working so that it outputs the app_cmd.sh execution (e.g. exec 1>rails.log 2>&1). I guess that goes beyond your article, but maybe someone has the same issue. I wonder whether the ruby slim image (debian jessie) has no such log procedure for a bash file.
Ah, yeah, I’m not sure. I don’t use docker logs so I have no idea how it would be affected (if at all) by the setup described in the article.
I solved it. In case someone else also needs the output in docker logs (e.g. I implemented it in Docker Cloud through a stack file):
Add to app_cmd.sh the following:
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>rails.log 2>&1
And to the Dockerfile:
RUN touch rails.log
RUN ln -sf /proc/self/fd/1 rails.log
CMD [ "config/containers/app_cmd.sh" ]
Nice work! Thanks for sharing.
Hey Chris, does this also redirect development.log or just the unicorn logs? Where exactly do you put the three lines in app_cmd.sh?
All of your logs will be under $RAILS_ROOT/log as usual. app_cmd.sh belongs under $RAILS_ROOT/config/containers (i.e. $RAILS_ROOT/config/containers/app_cmd.sh).
Hello Chris,
first of all I’d like to thank you for the great tutorial, it got me a long way to understanding docker.
I’d like to make some suggestions:
a) I don’t get why you have the try_files block inside the assets nginx definition. This will have every request hit the backend server, right? In production you will want to remove that line.
b) If you are not serving assets from the production app (which you should definitely try to avoid), you will definitely need to map RAILS_ROOT/public to a volume so nginx can access the static files. This means adding a VOLUME export to the Rails Dockerfile after copying the app files, though it can be done in docker-compose.yml as well.
Rails then needs to be included with volumes_from in the nginx docker-compose config. Only then will nginx be able to access and serve the files.
c) Asset precompilation: if you follow the Rails recommendations, you will use digested, precompiled assets. Those can easily be generated by using env_file entries for the Rails container, then changing the startup command to “rake assets:clobber && rake assets:precompile && webserver_start_command”. I am using foreman inside the Rails container as well, as it makes it easier to launch the processes, for example when adding sidekiq into the mix.
Hope this helps 🙂
Maybe I did somehow miss the information or misunderstood your tutorial, but this is what I found out when trying to switch a rails app from heroku to docker, while wondering why the asset serving did not work correctly… 🙂
Gregor
Thanks for the kind words Gregor, I appreciate it. Regarding (a):
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
try_files $uri @rails;
...
}
The “location” line means “enter this block if a static asset is requested”. The $uri argument to try_files means “try to find the requested file in the root directory”. For Nginx that is public/. If assets are precompiled they live in public/assets, so Nginx will find them there and serve them directly, never passing anything to Unicorn. Unicorn (i.e. @rails) is the fallback for try_files.
In this particular article I didn’t get into specifics of serving static assets in production because it’s mostly concerned with getting a working Docker environment. I do go into details of serving assets in production in my other article “Continuous Deployment of a Dockerized Rails Application”. See the “Compiling assets” section. There I describe my approach, which is to copy compiled assets into the Nginx image. It’s simple, and works like a charm 😉
Thanks a lot Chris for this awesome detailed post. I would like to know simple question based on your 3 part series.
1) I’m building containers in a VM using docker-machine, so when I included the docker-compose.override.yml file there is a volumes key for exposing volumes for live editing. I think this doesn’t work since the yml config masks the volume, or there is no source code available in the VM box (boot2docker). This led me to errors like no app_cmd.sh file found.
2) I’m using MongoDB as a db server in a distinct container, and the setup is yours. In this case, if the db container is stopped or destroyed, do we lose data?
3) For continuous integration, what are the best CI servers for a dockerized rails app with easy setup?
4) Last but not least, I’m new to docker, so if the above questions don’t make sense then sorry for the disturbance. And lastly, thanks again for this detailed tutorial; really, I understood a lot from all three parts of the series. : )
Sorry for posting question in reply section, since I was unable to edit my posted comment.
I would like to understand how the following lines from the Dockerfile-nginx file work:
# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < nginx.conf > /etc/nginx/conf.d/default.conf
And also, don’t we need to put listen 80; inside the server block in the nginx.conf file?
envsubst takes references to environment variables, finds those references in a given file, and substitutes the value of those variables from the current environment into the file. Basically the line is generating an Nginx default.conf from the build environment using nginx.conf as a template.
You can specify port 80, but that’s the default so it’s not required.
(1) When I wrote this tutorial I was using Docker Toolbox for Mac, which uses Docker Machine, and had no problem with the settings I posted.
(2) You need to find out which file(s) Mongo stores its data in and then persist those file(s) with a volume.
(3) In my experience, Circle CI. See my other article on Docker in CI.
(4) You’re welcome. Good to hear!
Thanks Chris, I implemented it properly and everything is working as expected. But I have a short question again: in my app I have a feature to upload images using carrierwave into the public directory. If I stop and delete the running container, build it again, and start it, do the uploaded files get lost? If they get lost, how can I persist them?
To make data persist between containers you need to use a Docker volume.
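For example, a bind-mounted volume in docker-compose.yml would do it (version 1 syntax, as used in the article; the paths here are assumptions for illustration):

```yaml
app:
  volumes:
    # keep carrierwave uploads on the host so they survive container rebuilds
    - ./public/uploads:/var/www/docker_example/public/uploads
```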
Thanks Chris, for pointing me down the right path.
Hi Chris, this is a great article. I am just starting to learn about Docker. When I cloned your repo and followed the instructions, however, the rails app was not able to run because of the following error.
app_1 | bundler: failed to load command: unicorn (/usr/local/bundle/bin/unicorn)
db_1 | LOG: database system is ready to accept connections
app_1 | ArgumentError: directory for pid=/var/www/docker_example/tmp/pids/unicorn.pid not writable
What steps could I have missed? How do we solve it? Any ideas? I installed Docker.app on my mac, and everything else seems to be working fine.
Thanks! Sounds like one of the directories under $RAILS_ROOT/tmp does not have proper permissions. Try
cd $RAILS_ROOT; chmod -R 755 tmp/
Thanks for the quick reply. I got this working after I added these two lines to the app_cmd.sh file:
mkdir -p $RAILS_ROOT/tmp/pids
chmod -R 755 $RAILS_ROOT/tmp
I wonder why no one else had issues with permissions.
I have large Sphinx index files to serve search functionality. I wonder what the best practice is for creating the container with these index files. You were mentioning creating volumes, but how do we transfer these files from env to env?
Docker volumes would help you persist the index between containers on a host. I wouldn’t recommend using the same index across environments. I’d want each environment to have its own index, much like the database (i.e. the actual index / DB is a file that remains on the host. Containers use the file).
Hey Chris, thank you for your post, it helped me a lot since I’m a docker newbie. However, even though everything is working properly, I still have a small issue with my development environment: I’m not sure how to see the server logs while I’m developing and testing my app. Normally when you run rails c you are able to see all the logs, and that is something really useful. How can I access them?
Thanks Rene. Both Nginx and Unicorn keep logs under log/. Just tail -f them.
Hi. And thx a lot for this tuto.
I’m new with docker and I have just one question (okay may be two ;p).
In the Step 3 you tell us : ‘In order to build from this config you’ll need to define RAILS_ENV on whatever host you plan to build on’.
I don’t really understand how and where I can do that ?
Can I have some precision for this point ?
Thx again.
Bat.
You need to set an environment variable. There are many ways to do this. Have a look at this article if you are unfamiliar.
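For example (a sketch; use whichever value fits your environment):

```shell
# set RAILS_ENV for the current shell session
export RAILS_ENV=development

# or persist it in $RAILS_ROOT/.env so docker-compose picks it up
echo 'RAILS_ENV=development' >> .env
```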
thx a lot.
;p
Hi Chris,
Thank you for sharing your experience. I just start to learn about docker. This is a great article which help me a lot.
However, I still have a problem when following your step.
When I run ‘docker-compose up -d’, the app server cannot come up and always exits with code 1. Then I tried to figure out the problem with ‘docker-compose up app’ and it showed the following output:
app_1 | bundler: failed to load command: unicorn (/usr/local/bundle/bin/unicorn)
app_1 | OptionParser::MissingArgument: missing argument: -E
app_1 | /usr/local/bundle/gems/unicorn-5.0.1/bin/unicorn:110:in `block in <top (required)>'
app_1 | /usr/local/bundle/gems/unicorn-5.0.1/bin/unicorn:10:in `new'
app_1 | /usr/local/bundle/gems/unicorn-5.0.1/bin/unicorn:10:in `<top (required)>'
app_1 | /usr/local/bundle/bin/unicorn:17:in `load'
app_1 | /usr/local/bundle/bin/unicorn:17:in `<top (required)>'
I think it might be a problem of the wrong directory, because I cannot find the file /usr/local/bundle on my computer. The correct directory for unicorn should be /home/.rvm/gems/ruby-2.3.0/gems/unicorn-5.0.1, but I do not know where to change the directory. What should I do?
Thank you very much!
I’m glad you found the tutorial helpful Bob. Looks to me like you’re missing the ‘-E’ argument to unicorn. Take a look at your app_cmd.sh, and compare it to the one in the article. Also, /usr/local/bundle/bin is in the container, not your computer. That path is correct. You shouldn’t be using RVM in the container.
I’m getting the same issue and have checked the app_cmd.sh multiple times and copied it directly from above. Any ideas?
bundler: failed to load command: unicorn (/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn)
OptionParser::MissingArgument: missing argument: -E
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/bin/unicorn:110:in `block in <top (required)>'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/bin/unicorn:10:in `new'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/bin/unicorn:10:in `<top (required)>'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn:23:in `load'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn:23:in `<top (required)>'
I’m pretty sure this is because you don’t have RAILS_ENV set in the shell that you are using to work with docker, and it’s not in your .env. Add it to either and you should be good.
I have RAILS_ENV set to development in both my shell and .env file. ‘development’ pops up if I run ‘echo $RAILS_ENV’ in the terminal. Then I run ‘docker-compose build’, ‘docker-compose run app rake db:create’, ‘docker-compose run app rake db:migrate’, and ‘docker-compose up -d’, and everything runs fine. I run ‘docker ps’ and all seems well, but I get errors at ‘0.0.0.0:80’, ‘0.0.0.0:3000’, ‘192.168.99.100:80’, ‘192.168.99.100:3000’, and ‘localhost:3000’. (I’ve been trying them all for my sanity...)
I’ve run this in both the new app I’m creating and an app that’s currently using this tutorial (i.e. I had everything working, and the app still deploys to production on ‘git commit’, but I have the same problem now that I can’t figure out how to view the local site).
Running the 'bundle exec unicorn -c config/containers/unicorn.rb -E $RAILS_ENV' command directly results in:
bundler: failed to load command: unicorn (/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn)
TypeError: no implicit conversion of nil into String
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/configurator.rb:534:in `expand_path'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/configurator.rb:534:in `working_directory'
config/containers/unicorn.rb:5:in `reload'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/configurator.rb:72:in `instance_eval'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/configurator.rb:72:in `reload'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/configurator.rb:65:in `initialize'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/http_server.rb:76:in `new'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/lib/unicorn/http_server.rb:76:in `initialize'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/bin/unicorn:126:in `new'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/unicorn-5.1.0/bin/unicorn:126:in `<top (required)>'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn:23:in `load'
/Users/AnthonyEmtman/.rbenv/versions/2.3.1/bin/unicorn:23:in `<top (required)>'
Considering I know that the app works on CircleCI and deploys to production, I’d assume that it’s a configuration problem locally, but I’m not sure what it is…? Any thoughts? I’ve about exhausted what I can find on StackOverflow. Thanks for the help, much appreciated!
Sorry, hard to tell by looking at your output. Try running docker-compose up (no -d) and you might get more info. Also, check to make sure that unicorn isn’t leaving a stale PID file in $RAILS_ROOT/tmp/pids. Good luck!
The only thing that displays is…
Anthony-MacBook-Pro:emtman AnthonyEmtman$ docker-compose up
Starting emtman_db_1
Starting emtman_app_1
Starting emtman_web_1
Attaching to emtman_db_1, emtman_app_1, emtman_web_1
db_1 | LOG: database system was shut down at 2017-01-26 18:01:38 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
Hey Chris, so merely by changing the ports in dockerfile-nginx from “80:80” to something different, like “8081:80”, I was able to access the app via 192.168.99.100:8081. Any idea what could be causing this? It seems as though port 80 is directing to a DNS and trying to load the site from the web as opposed to locally (potentially like a value is cached?).
Sounds like you have an application on your machine that is interfering with Docker on port 80. It could be a firewall, a web server, an application server, etc. Using a port other than 80 for dev work is fine; I’d just roll with 8081 or whatever.
Great post, but I only made it to step 4 when we did “docker-compose up -d”. When I run “docker ps -a” it seems that the “dockerexample_db_1” container is running, but “dockerexample_app_1” container had exited 1s ago.
Also I’m getting “The RAILS_ENV variable is not set. Defaulting to a blank string.” when I run “docker-compose run -e RAILS_ENV=development app rake db:create”. I’m unsure if this is related (doubtful).
Any help is much appreciated.
DOH! I’m a dummy trying to learn something new in the wee hours of the morning. I woke up and defined $RAILS_ENV in .env, ran “docker-compose up -d”, and both containers are now running. Excited to dive into Part 5. Thanks again for a good tutorial!
Glad you’re up and running, and that you found the tutorial useful
One more question… now that I have this working, do I simply just push up the image generated from “docker-compose build” to Docker Hub and have my co-workers pull that down and execute “docker-compose build” to get their environment all set up? I am a little confused moving forward on how to have others use this image(s) as well.
Pretty much. They won’t even have to build it; docker-compose up should do it.
How would you scale this? Currently I feel like nginx will only talk to the app. What if you’re running 3 containers of the app? How can you get nginx to, in a sense, load balance the Rails instances?
Good question. I have not tried to scale this setup. However, since Nginx can do simple round-robin load balancing my first instinct is to define multiple app containers in docker-compose.yml and then change nginx.conf to balance between them.
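An untested sketch of that instinct; the container names (app1, app2, app3) and the port are assumptions, not from the article:

```nginx
# round-robin load balancing across three app containers
upstream rails_app {
  server app1:3000;
  server app2:3000;
  server app3:3000;
}

server {
  listen 80;

  location @rails {
    proxy_pass http://rails_app;
  }
}
```

Nginx’s default upstream behavior is round-robin, so no extra directives are needed for simple balancing.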
Chris! A quick google search for “docker container from existing rails project” returned your blog as the first result. How cool! I hope you are doing well. We have made great progress on Centro platform.
Hey that is pretty cool! Thanks for dropping a line Devin. I’m doing well and hope you and the other Centro folks are too. Way to get that platform going!
Hey, just found this (Dec 2016), thanks for the writeup. I tried it with ruby 2.3 and it failed, saying there was no git installed (I thought that came with build-essential?). Anyway, I added git to the packages installed with apt-get and voila, it works. Cheers!
Thanks Robert, that’s interesting. Git shouldn’t be a requirement but it’s possible some package changed and is now making it one. Please let us know if you figure out which package forced the requirement. Thanks!
Hey Chris,
stellar article, thanks a lot! One thing I had to change:
It seems they recently changed how to reference “dockerfile” in docker-compose files, see here: https://docs.docker.com/compose/compose-file/#/dockerfile. Both “dockerfile:” and “context:” go under “build:”.
Cheers
Will
Thanks Will! Yes, docker-compose.yml format is different these days. You are referencing version 2 and the article uses version 1.
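For anyone on the newer format, a version 2 equivalent of the article’s build setting might look like this (the file name Dockerfile-app is an assumption following the article’s conventions):

```yaml
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile-app
```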
I am a little confused here and could use your help.
My Rails app is currently deployed on Heroku, and I use external services for MySQL (ClearDB), ElasticSearch, Redis (RedisTOGo) and now I want to move my rails app and the associated services to use AWS services.
First step was to dockerize my Rails app which I did by following your article.
But I do not want to package and add DB as a separate container.
If I do that, how would my staging and production environments connect to RDS or other external services?
I want them to connect to those services using env variables for the app.
Can you help me figure out how to proceed?
Hi Murtuza. That is a different setup from what I work with, but I think all you have to do is ditch the DB image, add your DB connection env vars to .env, and modify your database.yml to use those env vars to connect to the remote DB.
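A sketch of what that database.yml change might look like; the env var names here are placeholders, not from the article:

```yaml
production:
  adapter: mysql2
  host: <%= ENV['DB_HOST'] %>
  port: <%= ENV['DB_PORT'] %>
  database: <%= ENV['DB_NAME'] %>
  username: <%= ENV['DB_USERNAME'] %>
  password: <%= ENV['DB_PASSWORD'] %>
```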
I’m getting the same problem as Ben: at the end of step 4, when I run “docker-compose run app rake db:create”, the “dockerexample_app_1” container shows as exited with code (0). If I continue to “docker-compose up -d” then “dockerexample_db_1” runs just fine. However, I *did* define RAILS_ENV in my .env file, and I’m not getting the “The RAILS_ENV variable is not set. Defaulting to a blank string.” message. “docker logs ” returns nothing and when I run “docker inspect ” it shows “OOMKilled”: false. I wonder if you have any ideas?
Thanks, I am new and this tutorial has been very helpful!
I’ve seen stuff like that happen when the app container can’t talk to the db container (i.e. get a database connection). Try running docker-compose up (no -d) and you’ll likely get more info.
Hey Chris
Great article, Thank you.
I am new to docker and rails. When I run ‘docker-compose up -d’, the db and web containers are OK, but the app STATE is ‘Exit 1’.
When I run ‘docker-compose up’ (no -d), I get this: exited with code 1.
The experiment OS is CentOS 7.
I set the $RAILS_ENV at the host , and create .env file in the root of the rails app in the host.
Any help would be greatly appreciated.
I am totally confused by $RAILS_ENV and the .env file.
1. Should the value of $RAILS_ENV be development or production, etc., like RAILS_ENV=production?
2. Where should it be created, on the host machine or in the container? The container startup needs $RAILS_ENV, so I think it must be created before the container starts up. Is that right? If in the container, how do I do that?
Sorry for lots of question.
Both $RAILS_ENV and .env are used to “load” the container with configuration values from the host machine. The .env file should be created on the host machine in $RAILS_ROOT. RAILS_ENV should be defined in the environment you use to start the container (i.e. the shell you use to run docker-compose up). I hope that helps! Good luck.
@Chris It works. In the end I selected dotenv-rails for this. Thank you for your explanation.
I’m glad you found the article useful! Did you check to make sure unicorn didn’t leave its pid file lying around in $RAILS_ROOT/tmp/pids/unicorn.pid? Unicorn will do that if it shuts down dirty, and an existing pid file prevents new unicorns from starting. Just rm it.
Till now, the application can start up and the index page displays. It’s a great milestone for me. Thank you again!
But there is another error: it looks like the SCSS and image_tag aren’t working. I tried docker-compose run app rake assets:precompile but no luck. Any ideas, please?
Sorry, it’s not clear to me what the issue may be. assets:precompile should work in the container. Keep trying & good luck!
Sorry, it must be my expression. I used docker exec -it to get into the web container, and when I ls public/assets I can’t find the images, CSS, etc. that the site needs. When is the proper time to run assets:precompile? Maybe it can be written in a specific Dockerfile, and how?
Thank you.
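One possible answer, as an assumption rather than the article’s method (the follow-up article copies compiled assets into the Nginx image instead), is to precompile during the image build:

```dockerfile
# hypothetical addition to the app Dockerfile: generate public/assets at
# build time so it exists before the container starts
RUN bundle exec rake assets:precompile RAILS_ENV=production
```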
Hi. Thanks for the great article!
One question: What steps do I have to go through when adding a new gem to the Gemfile? Re-buidling the image? How?
Thank you!
Add your gem to your Gemfile and run docker-compose build. That should do it.
Hi,
I have deployed my rails application on a DigitalOcean server. Everything is working fine, but when I try to run any query (e.g. User.first) in the rails console I get the following error:
PG::ConnectionBad: could not translate host name "db" to address: Name or service not known
When I remove the "host: db" line from database.yml and then run any query inside the rails console, I get the following error:
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I have configured all the changes accordingly. Everything else is working fine but I am unable to run any query inside the rails console. I don’t know what I am doing wrong. Please help me with this.
It sounds like you aren’t running the console from inside the container. You need to run docker exec -it app /bin/bash first to enter the running container and then run rails c.
It worked, thanks 🙂
Hi,
I followed these instructions and was able to successfully run on rails 4 environment.
Then I upgraded the rails version to 5.1.1 and everything is working fine on localhost. But now whenever I run “docker-compose up” my application container always exits with exit code 1, and there is no information in “docker logs container”. The other two containers (nginx and postgres) are running successfully. I don’t know what I am doing wrong here; please help me.
Before restarting the containers make sure that Unicorn hasn’t left a stale PID file lying around (rm tmp/pids/unicorn.pid).
Thanks for your response.
Where should I look for “tmp/pids/unicorn.pid”? I ran docker-compose from my local machine connected to DigitalOcean. The container is not built, so I am unable to run “docker exec -it app /bin/bash” for the application. And on my local system there is no “tmp/pids/unicorn.pid” file.
this is the output of “docker ps -a”
7e9ed0034de1 myapp_web "nginx -g 'daemon …" 5 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp myapp_web_1
5199fecb61a2 myapp_app "config/containers…" 6 seconds ago Exited (1) 1 second ago myapp_app_1
2ca7fc7e2dd8 postgres:9.4.5 "/docker-entrypoin…" About an hour ago Up 5 seconds 5432/tcp myapp_db_1
I get this error using puma and postgres when trying to create the database:
Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "host"=>"db", "port"=>5432, "username"=>"postgres", "password"=>nil, "database"=>"db1"}
rake aborted!
PG::ConnectionBad: fe_sendauth: no password supplied
It looks like the Postgres image’s rules around requiring a password have changed. The docs now say:
I suggest providing a password and seeing what happens.
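For example, in version 1 docker-compose.yml syntax as used in the article (the password value is a placeholder, and your database.yml would need to read the same variable):

```yaml
db:
  image: postgres:9.4.5
  environment:
    POSTGRES_PASSWORD: changeme
app:
  environment:
    POSTGRES_PASSWORD: changeme
```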
This helped me a while ago. I can pay it forward now. With Heroku discontinuing its docker-based container in favor of a more *build-your-own* Dockerfile strategy, I felt the need to still have a base docker image for my multiple rails projects. With that in mind, I’ve created https://github.com/jfloff/docker-heroku-rails, a base docker image also available on Docker Hub: https://hub.docker.com/r/jfloff/heroku-rails/
Hope it also helps someone.
Nice, thanks for sharing!
My docker setup is up and running, but when I open it in the browser, the page does not show up.
Can someone please help?