Step-by-step guide to Docker on Ubuntu

A few months ago dotCloud announced Docker at a conference. The announcement echoed around the world, and since then everyone has been adopting it. This type of system isn't new; plenty of companies have built something similar internally, including Google. But Docker is the first of its kind to hit the open source market, which is awesome, because now you and I can enjoy those luxuries too.

Docker makes use of containment features built into the Linux kernel and allows you to spin up something similar to a VM, in the sense that it's a fully containerised system: anything you install or do there is limited to that workspace. This is particularly useful for dev environments, because it lets you set up containers for projects with their own unique dependencies that don't interfere with any other project. You can also mimic a cloud-based system in the same manner, which gets you one step closer to a dev environment that fully mirrors production, reducing the chance of errors.

The major difference between Docker and a VM is that Docker doesn't virtualise hardware the way a VM does; containers share the host's kernel and resources. The other difference is that a Docker container is designed to be trashed, eventually. Logs and important things can be kept, but ultimately these are temporary containers that let you run tests and break things without consequences, which makes them ideal for a dev environment.


Installing Docker on Ubuntu

At the time of writing this really could not be simpler for Ubuntu users.

Ubuntu < 13.04
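
At the time, the simplest route was Docker's own install script; a sketch, assuming the then-current get.docker.io endpoint:

```shell
# Hedged sketch: fetches and runs Docker's install script
# (endpoint from the 2013-era docs; it may have moved since).
curl -sSL https://get.docker.io/ubuntu/ | sudo sh
```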

If this does not work, it's possible this method has been deprecated, in which case you should review the documentation for the most recent method, but I don't foresee that ever happening.

Ubuntu 13.04+

For those of us on the newer versions of Ubuntu, the lxc-docker package and some extra kernel modules are also required.
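
A sketch of the packages involved, assuming the 2013-era names (lxc-docker from Docker's own apt repository, plus the extra kernel modules that provide AUFS support):

```shell
# Hedged sketch; package names assumed from the 2013-era Docker docs.
sudo apt-get update
# Extra kernel modules (AUFS support) for the running kernel
sudo apt-get install -y linux-image-extra-$(uname -r)
# The Docker package itself, from Docker's apt repository
sudo apt-get install -y lxc-docker
```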


A quick test

To make sure Docker is installed and running correctly, do a quick test by spinning up a throw-away shell container.
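
A minimal way to do this, assuming the stock ubuntu base image:

```shell
# Start an interactive (-i) shell with a pseudo-terminal (-t)
# in a fresh container created from the ubuntu image.
sudo docker run -i -t ubuntu /bin/bash
```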

This will catapult you into a new shell, and you should be staring at something similar to this:
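
Typically that is a root prompt whose hostname is the container's ID (the ID below is made up):

```
root@0123456789ab:/#
```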

From there you can do anything you would normally do on a new system, e.g. install packages.


To exit your throw-away shell, simply type exit and the shell will be trashed.


Preparing a new container image

A neat feature is the ability to create container build files and distribute them to your team. Essentially it's just a set of instructions that Docker uses to create a base image on the local machine. That image can then be used as the base for any containers built from it.

Let me give you a real-world example to try to put this into perspective. Say, for instance, you are working on a Django project at a company. Someone gives you a Dockerfile and you build it. You can then launch an instance of that image, which will run in the background with its port open, ready to serve traffic. All the configuration and downloading is done for you by the rules defined in the Dockerfile, so you can get straight to work. And if you screw anything up it's not an issue, because you can just kill the instance, review the logs and launch a new one.

Dockerfile example

Here is a cut-down version of a Dockerfile I use for one of my projects:
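
A sketch of what such a Dockerfile can look like for a Django/uwsgi project (the package names, paths and port are illustrative assumptions):

```dockerfile
# Illustrative sketch; package names, paths and port are assumptions.
FROM ubuntu

# Build stage: RUN lines execute only while the image is being built.
# Commands are chained with && to keep the RUN count low.
RUN apt-get update && apt-get install -y python python-pip postgresql
RUN pip install django uwsgi

# Copy in the start-up script that the run stage will execute
ADD run.sh /run.sh
RUN chmod +x /run.sh

# Port the app serves on inside the container
EXPOSE 9022

# Run stage: CMD executes each time a container starts
CMD ["/run.sh"]
```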

As you will notice above, there are two primary commands: RUN and CMD. RUN commands are executed only during the build stage, and CMD commands are executed only during the run stage; see the next two sections to further understand this. One really important thing to note about the Dockerfile is that there is a limit to the number of RUN commands you can have (I think it's around 40), after which you will get AUFS errors. So try to keep the number of lines to a minimum and merge as many as possible into one.
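
For example, rather than a separate RUN line for each step, chain the shell commands into one (the packages here are illustrative):

```dockerfile
# One RUN line instead of three
RUN apt-get update && \
    apt-get install -y python python-pip && \
    pip install uwsgi
```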

For a full list of Dockerfile commands see the documentation.

Example run script

You will also notice that I run a script during the run stage. This is because /etc/init.d/ scripts do not appear to launch on start-up in a container, so I have a script that launches all my needed background processes, loads my schema and working data into the database, and starts uwsgi.
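
A sketch of such a script; the service, database and file names are assumptions:

```shell
#!/bin/bash
# Illustrative start-up script; service and file names are assumptions.

# /etc/init.d/ scripts don't run automatically in a container,
# so start the needed background services by hand.
/etc/init.d/postgresql start

# Load the schema and working data into the database
psql -U postgres mydb < /var/www/schema.sql
psql -U postgres mydb < /var/www/working_data.sql

# Start uwsgi in the foreground so the container stays alive
uwsgi --ini /var/www/uwsgi.ini
```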


Building a docker image

Once you have a Dockerfile prepared, you need to build it into an image.
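
From the directory containing the Dockerfile, that looks something like this:

```shell
# Build an image from the Dockerfile in the current directory
# and tag it for easy reference later.
sudo docker build -t dparlevliet/{{project}} .
```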

This will build your image and save it with the tag name dparlevliet/{{project}} for you to reference easily later. You can choose whatever tag name you want; it doesn't have to follow that format.

Launching a docker image

Using the above example Dockerfile, I launch the Docker container with the following command, which I will explain flag by flag.
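
Something along these lines (reconstructed from the explanation that follows):

```shell
# Run the image as a background daemon, sharing /var/www with the host
# and mapping host port 9021 to container port 9022.
sudo docker run -d -v /var/www:/var/www -p 9021:9022 dparlevliet/{{project}}
```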

Firstly, let me explain -v /var/www:/var/www. This tells Docker to create a volume link between /var/www on the host machine and /var/www inside the container, which means changes you make in /var/www are available both inside and outside the container. This is useful for a development environment, because it allows you to launch a container as a daemon but still edit the code you are testing. Which leads me to the -d flag, which tells Docker to run as a daemon. Alternatively you could use -i, which means interactive, but then you would need to add /bin/bash to the end of the command. The -p 9021:9022 option maps port 9021 on the host machine to port 9022 inside the container, which allows you to forward web traffic via a reverse proxy like nginx if you wish, or you could map port 80 and serve it directly. Lastly, dparlevliet/{{project}} is the image tag we created previously; it tells Docker which image to use for this container.

Docker has a lot of commands so it’s best to check the manual for a full up-to-date list.


About David Parlevliet

Dave is a long-time developer with a passion for teaching. He divides his time between his wife, her cat and his projects. He recently started using Twitter, so make sure to follow him!