If you’re somewhat into technology, chances are you have already heard of Docker. It is a platform that lets you create universal containers which run independently of the underlying infrastructure: as a tool for local development, in test environments, or to serve your customers in production. In this post we introduce Docker as a tool for the development process and share our lessons learned.

Why use Docker?

Since we mostly develop web projects, Vagrant has so far been our tool of choice for creating a local development environment that replicates the production environment as closely as possible. The goal is to reduce the chance of side effects caused by differences in the underlying systems. With Vagrant, or virtual machines in general, we can ensure that all developers use the same base operating system and dependencies (e.g. PHP and MySQL on an Ubuntu system). Vagrant still serves this need very well.

Yet with the trend towards microservices, there hardly is the “one” web application anymore. Instead, a multitude of services work together: for example a frontend application, a backend application, and a database. Replicating this with virtual machines would require more system resources than most developer machines offer. This is where Docker plays to one of its strengths: its overhead in computing resources is lower than that of virtual machines, which makes it easy to run multiple services as containers at once.

We found that this also encourages developers to maintain the logical separation between services more strictly, which benefits the overall design. Like Vagrant, Docker runs on all major platforms: it is native on Linux hosts, and there are now native implementations for macOS (Docker for Mac) and Windows (Docker for Windows) as well.

Getting started with Docker

Tons of resources are available for Docker; their own Getting Started guide gives a very good overview of the terms and basic concepts.

Docker Hub

Docker Hub is a good starting point for new projects – it’s the central repository where most public Docker images are available. It works much like a package manager: anyone can publish their own images, and everyone can pull images from there for further use. It holds images for most major programming languages and frameworks, e.g. PHP, Python, Java, or Node.js. These images offer a complete system with all necessary dependencies, often in different flavors. For example, the PHP repository offers a CLI image, one pre-configured with Apache, another one set up for FPM, and several more.

Docker Compose

For local environments I would always recommend using docker-compose. It is a wrapper around Docker’s CLI that makes configuring containers much easier: it lets you define dependencies and mounts in a human-readable way and takes care of the configuration when starting a container.

The central part is a file named docker-compose.yml: in this YAML file we define which containers we want and how they should be configured. Any docker-compose command run in a folder containing this file is automatically executed against those containers.

As we will use docker-compose in the following examples, here are a few commands to get you started:

  • docker-compose up will build (if necessary) and start all defined containers. By default you will see the log output of all containers; adding -d as a parameter will start the containers in detached mode.
    • Essentially it first runs docker-compose build, which builds an image for every service that needs one
    • And then creates and starts the containers from those images
  • docker-compose down will stop and remove all containers
  • By default these commands affect all containers. To start or stop single ones, you can add the service name as an additional parameter, e.g. docker-compose up myAwesomeContainer will only start the service myAwesomeContainer and not interfere with any of the others.
  • docker-compose ps will show all currently running containers

Example 01: Hello World (PHP)

A simple setup serving a PHP script (placed at src/index.php relative to the docker-compose.yml) could be as easy as this:

version: "3"
services:
  myPhpContainer:
    image: php:7-cli
    working_dir: /var/www
    command: php -S 0.0.0.0:80 -t .
    volumes:
      - ./src:/var/www
    ports:
      - "8080:80"

What is happening here? First, we define that our container (named “myPhpContainer”) is based on the official PHP Docker image running the latest PHP 7 CLI version (see the image entry). When started (e.g. by running docker-compose up), Docker will download that image and create a container from it. During the start process it mounts the local directory src to the location /var/www inside the container and forwards port 8080 of the host system to port 80 of the container. So if you go ahead and enter http://localhost:8080 in your browser, the request will be answered by port 80 of the container.

Once the container is running, it changes into the directory /var/www (specified as working_dir) inside the container and executes the command php -S 0.0.0.0:80 -t . (see command), which starts PHP’s built-in web server on port 80, serving the files in the working directory.

If you now open http://localhost:8080 in your browser, you can see the output of your index.php file. Since the directory src/ is mounted into the container, any changes you make locally to src/index.php are immediately visible inside the container as well.
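For completeness, the src/index.php referenced above is not shown in the compose file itself; a minimal placeholder script like the following would be enough to verify the setup works:

```php
<?php
// src/index.php – minimal script to confirm the container serves requests.
// PHP_VERSION is a built-in constant, so the output also shows which
// PHP version the container is running.
echo "Hello World from PHP " . PHP_VERSION;
```

Opening http://localhost:8080 should then show the greeting along with the PHP version of the container.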

Example 02: A simple database

Taking this one step further, it’s fairly easy to spin up whole applications, like WordPress. Since WordPress requires a database, we add a second container (running MySQL) and make the WordPress container aware of it:

version: "3"
services:
  wp:
    image: wordpress:latest
    ports:
      - "8080:80"
    depends_on:
      - db
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: dbuser
      WORDPRESS_DB_PASSWORD: dbpassword
      WORDPRESS_DB_NAME: database
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpassword
      MYSQL_DATABASE: database

Now we have defined two services: one named wp, which is based on the WordPress image, and another named db, which runs a MySQL image to host the database for our WordPress installation.

To make the wp container aware of the db container, we use the depends_on property: docker-compose will now make sure that the database is running before it starts WordPress. Additionally, the hostname db now resolves to the database container from within the WordPress container.

The environment property allows us to define environment variables for each container. In this example we make use of the MySQL image’s ability to create an initial user, password, and database: on the first launch of the db container, the user dbuser with the password dbpassword and the database database are created. Since we pass the same parameters to the WordPress container, it uses them directly during the setup process.

So if you launch the above example with docker-compose up, you can head over to http://localhost:8080 in your browser and will be greeted with the WordPress setup screen. The step to configure the database is omitted, since that information has already been provided via the environment variables.

Where to go from here

The examples above will hopefully give you a starting point. In my experience there is a lot of trial and error involved, since each image has its own quirks. Thanks to Docker’s architecture with intermediate image layers, build times are fairly low.

There are plenty of directions to go with Docker from here, so I will share a few points to keep in mind.

Persisting Data

In the example above we started a database container – yet the data of this container is not persisted anywhere. If the container goes away, so does your data. And in my experience, Docker containers are deleted and recreated more often than virtual machines, precisely because they are so lightweight. So spend a thought or two on what you want to persist outside the container for such services. In most cases, the documentation of the data store in question includes a section on how to achieve this, usually by creating and mounting a volume.
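As a sketch of what this could look like for the MySQL container from Example 02 (the volume name db_data is an arbitrary choice here; /var/lib/mysql is where the official MySQL image stores its data):

```yaml
services:
  db:
    image: mysql
    volumes:
      # Mount a named volume over MySQL's data directory so the
      # data survives removal and recreation of the container.
      - db_data:/var/lib/mysql

# Named volumes are declared at the top level of the compose file.
volumes:
  db_data:
```

With this in place, docker-compose down followed by docker-compose up will bring the database back with its previous contents intact.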

File Mount Performance

For now, Docker has its limitations on Mac and Windows hosts when it comes to projects with many files. We currently have a project with 13,000 files where we noticed a heavy performance decrease, file access being 2-3 times slower compared to our existing Vagrant setup. Docker is working on this and will hopefully resolve it soon. For that particular project we’re sticking with a Vagrant-based development setup.

Yet for smaller projects, the comfort of Docker makes the speed difference negligible. On a Linux host system I wasn’t able to notice any difference in speed at all.

Docker in Production

In the intro I already claimed that Docker can also be used to run production environments. One of its major promises is the idea of “build once, run everywhere” – the longstanding goal of creating a system once and running it wherever you want, without having to make adjustments to the system itself.

In contrast to the approach presented in this post, it is then necessary to copy all files into the container image instead of mounting them. This leaves you with a self-contained image holding all resources your application needs. To get it into a test or production environment, it’s best to push the image to a private registry.
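A minimal sketch of what this could look like for the PHP project from Example 01, assuming a hypothetical private registry at registry.example.com and an image name of myapp (both placeholders):

```dockerfile
# Bake the application into the image instead of mounting it at runtime.
FROM php:7-apache

# Copy the application code into Apache's document root.
COPY src/ /var/www/html/
```

The image would then be built and published with docker build -t registry.example.com/myapp:1.0 . followed by docker push registry.example.com/myapp:1.0, after which any environment with access to the registry can pull and run it unchanged.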

Host Resources

Spinning up containers in various projects can eat up more resources than expected. So it’s a good idea to check once in a while (e.g. with docker ps) that there aren’t any forgotten containers running in the background.

Using Docker in various projects and playing around with all sorts of containers can also fill your disk fairly quickly. So if your host is getting low on disk space, it’s worth having a look at the list of containers and images as well (docker images -a).

To conclude, there is also a nifty command to clean up unused resources occupied by Docker: docker system prune will remove all stopped containers, dangling images, and other unused resources.

— happy containerizing.