Over the first weekend in October, more than two hundred developers gathered in Ghent for ArrrrCamp, a serious-yet-pirate-themed Ruby conference. I was happy to deliver a talk on using Docker for Rails development.
Below is a condensed version of the talk, which covers an introduction to containerization and the Docker ecosystem, as well as some examples of running Rails applications in Docker containers.
Chances are that if you’ve just started working with containers, you’ve worked with them via Docker. Containerization has become more widely used over the last two years because Docker has removed barriers and made it much easier to integrate containerization into your development workflow.
One important thing to remember is that Docker != containers, and in fact there are other ways to use containers than with Docker. But since Docker makes it so easy, it’s almost a no-brainer to use their tools.
One of the main benefits of containers is that they're nimble. It doesn't take long to boot them up, and there's relatively low overhead in terms of time and space. Another benefit is their limited scope.
A container is a self-contained execution environment, meaning that all of a container’s dependencies are contained within the container itself. This gives each of your services the autonomy to use the appropriate toolset for its role in your application without worry of conflicting with another component. You’re saved from dependency spaghetti since the dependencies only exist within the container.
Very simply put, containers are just a layer of virtualization. They don’t take the place of virtual machines, and you can even use VMs and containers together.
The point is not to stop using virtual machines altogether, but rather to increase service density by adding containers to the mix. Instead of running three services on three virtual machines, you can run the same three services — in containers — on one virtual machine. This means less money and less time to maintain your infrastructure since there are fewer VMs.
Getting started with containers may initially seem more complex, but they greatly reduce the amount of time and space needed to run your application. Spend less time provisioning, rebooting, and fighting with dependencies, and more time building what you want.
If you run Linux, install Docker via the official packages.
Note that if you are using Docker Toolbox, you'll need to run `eval $(docker-machine env machine_name)` in order to run commands from your terminal.
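If you're curious what that `eval` actually does: `docker-machine env` just prints a handful of shell exports that point your local Docker client at the VM. A minimal sketch of what ends up in your shell (the IP address and cert path below are illustrative values, not real ones):

```shell
# Roughly what `eval $(docker-machine env default)` puts into your shell.
# The IP address and cert path are illustrative; yours will differ.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
```

Once those variables are set, every `docker` command in that terminal talks to the Docker daemon inside the VM instead of looking for one locally.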
The Docker Ecosystem
What started as a small project has grown to be a very powerful ecosystem of many different types of tools.
Docker focuses on three main functions: Build, Ship, and Run. The ‘Build’ part is what we’ll focus on for now, as it’s likely to be of the most immediate concern for you as a developer.
The first thing you’ll need to get acquainted with as you start to run Dockerized services is the Docker image. An image and a container are two different things; an image is run inside of a container. You can think of an image being like a class, and a container being like an instance of that class.
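The class/instance analogy maps neatly onto plain Ruby. This hypothetical sketch (the class and names are invented purely for illustration) shows one definition backing many independent instances, just as one image can back many containers:

```ruby
# Hypothetical illustration: the "image" is the class definition,
# and each "container" is an independent instance of it.
class Image
  def initialize(name)
    @name = name
    @running = false
  end

  def start
    @running = true
    self
  end

  def running?
    @running
  end
end

# One image, two containers with independent state:
web    = Image.new("web").start
worker = Image.new("worker")

web.running?    # => true
worker.running? # => false
```

Just as `worker` starting or stopping has no effect on `web`, containers launched from the same image share nothing at runtime.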
You can find public images on the Docker Hub, which houses Docker’s public registry. There are over 15,000 images that you can pull down and use in your own projects. You can have private repos on the hub as well. If you’re concerned about proprietary code, you can run your own registry (and you can find the registry image on the Docker Hub).
There are a few different styles of images available on the Docker Hub. The first is what I’ve called a “service image,” which is an image that you can pull down, run in a container, and have a working service to consume. A good example of this type of image is a database. You can run the database in a container and start working with it.
Another type of image is a “project base image.” These are meant to set up an environment in a container that can then run your own code. Language images (like the Ruby or Golang images) fall into this category. Just running the image won’t get you very far without adding your own code to it. It’s just meant to be a base for your own project.
You can also find official images on the Hub. These are images that are maintained by either companies or open-source communities, and they’re a good starting point for your project.
To pull down an image from the Docker Hub, you can say `docker pull image_name`.
Building Your Own Docker Images
Of course, you can build your own Docker images. To do this, you need a Dockerfile. You can find the Dockerfile reference within Docker’s documentation.
Here’s a simple Rails Dockerfile:
```dockerfile
FROM rails:4.2.4
MAINTAINER Laura Frank <email@example.com>
RUN mkdir -p /var/app
COPY . /var/app
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0
```
Once I’ve written my Dockerfile, I can build it by saying:
docker build -t image_name . # don’t forget the dot!
Each of the uppercase words in the Dockerfile is an instruction. In this Dockerfile, we're using a Rails base image, and then copying an existing Rails application (in our current directory) to the newly-created `/var/app` directory in the container. This is a static copy: once the copy is completed, the only way to update code within the container is to rebuild my image. When I run a container with this image, it will start with the CMD, or command, of `rails s -b 0.0.0.0`.
To see the images you have available on your Docker host (either from `docker pull` or from `docker build`), run `docker images`.
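Running `docker images` produces a table of your local images; the output will look something like this (the image IDs and sizes are illustrative):

```
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
rails               4.2.4               cd0d4fbe2f91        2 weeks ago         774.8 MB
postgres            latest              47da6d6ea3f5        3 weeks ago         265.5 MB
```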
Building a Rails Application with Docker
There are a few goals to keep in mind as we start a Rails application with Docker:
- view the app running in the browser
- edit files in a local environment and see the changes
- run rake tasks like migrations
- see log output
Basically, we want to take advantage of the speed and isolation that Docker provides, but still have a development environment that feels natural to us.
In the previous Dockerfile example, we would have to rebuild the image every time we changed the code in order to see the changes running in a container. This is a huge pain, and you can get around it using a volume mount.
Instead of statically copying all of the code inside the container, you’ll mount your working directory as a volume inside the container (think of it like syncing folders), and then you can edit code and see the changes running inside the container without having to rebuild the image.
Here's what a Dockerfile and `docker run` string would look like when using a mounted volume for your application directory.
```dockerfile
FROM rails:4.2.4
MAINTAINER Laura Frank <firstname.lastname@example.org>
RUN mkdir -p /var/app
COPY Gemfile /var/app/Gemfile
COPY Gemfile.lock /var/app/Gemfile.lock
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0
```
Instead of copying everything, we'll just copy the Gemfile and Gemfile.lock, then bundle install. Since Docker caches each build step, this also means rebuilds can skip the slow `bundle install` layer unless the Gemfile itself has changed.
We still have to get the rest of the code inside the container, though. This is done at runtime with a volume mount:

`docker run -v local/project/path:/var/app -p 3000:3000 my_image_name`
The full `docker run` reference is available in the Docker docs.
A couple of important flags that you'll use pretty frequently:

- `-p 3000:3000` creates a port binding rule; the format is `-p ip:hostPort:containerPort`
- `-v local/path:/path/in/container` mounts a volume; the format is `-v hostPath:containerPath`
This `docker run` string is a little long, and you may not want to type it each time you run your application. Docker Compose is an application templating tool that allows you to specify all of your application's configuration in a YAML file. Then, instead of running your application directly with Docker, you can simply run it with `docker-compose up`.
A Rails container identical to the one we ran above with `docker run` will look like this with Docker Compose:
```yaml
web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - 'local/project/path:/var/app'
```
This instructs Docker to build the image from the Dockerfile in the current directory, and it also specifies the port mapping and volume mounting rules just as in the previous `docker run` string. This application template is stored in a file called `docker-compose.yml`. Docker Compose is especially helpful when your application has more than one container.
Let’s create a Rails application with an external Postgres database container.
```yaml
db:
  image: postgres
web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - 'local/project/path:/var/app'
  command: rails s -b '0.0.0.0'
  links:
    - db
```
Running `docker-compose up` will pull down the Postgres image and run it in a container, as well as run the Rails application (which we've named 'web') in the same way as in the previous examples.
We've also declared a dependency of the web container on the db container by specifying a link. This means that the web container will wait to start until the db container is running, and the link also adds some special environment variables (like the IP address of the container running the database) and entries in the web container's `/etc/hosts` file.
In this example, you'll still need to monkey with the database.yml file in the same way you would if you were running outside of a Docker container. You can see an example of this in the GitHub repo.
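For reference, here's a minimal sketch of what the development section of `config/database.yml` might look like. This assumes the compose service is named `db` and that you're using the stock postgres image with its default `postgres` user; the database name is illustrative:

```yaml
# config/database.yml (sketch): the "db" hostname resolves via the
# /etc/hosts entry that the link adds to the web container.
development:
  adapter: postgresql
  encoding: unicode
  host: db
  username: postgres
  database: myapp_development
```

The key point is the `host: db` line: thanks to the link, the service name from docker-compose.yml doubles as a hostname inside the web container.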
Running One-off Tasks with Docker Compose
What happens when you’re developing and need to run a task against one of your services running in a container? In the Rails world, a common example of this would be running a database migration.
Luckily, you can use `docker-compose run` to execute one-off commands within a container. To run `rake db:migrate` in the Rails container, say `docker-compose run web rake db:migrate`. You'll see the migration running, and then you can continue on your merry way.
Note that you can only run `docker-compose` commands in the directory with the corresponding docker-compose.yml file. To run the above command, you'll probably have to jump into a new terminal tab (and run `eval $(docker-machine env default)` if you're using Docker Toolbox).
If your application is running in a container, deploying it to production will involve building a new image and then distributing the new image to your hosts.
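A minimal sketch of that flow, assuming a hypothetical registry at registry.example.com (substitute your own registry, image name, and tag):

```
$ docker build -t myapp .
$ docker tag myapp registry.example.com/myapp:v2
$ docker push registry.example.com/myapp:v2
```

Your hosts can then `docker pull` the new tag and restart their containers from it.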
Codeship supports Docker, and you can learn more about it on our blog. Happy shipping!