The Docker Revolution

Kerry Knopp | June 22, 2016

Our approach to using Docker

I first heard about Docker in October of 2014. It was suggested that we use it to host our sites to help contain hackers to one site at a time, should they get through our defenses. This new technology looked like it could once again revolutionize the way we use computing resources.

For those not familiar, Docker is an implementation of virtualization that, unlike a hypervisor, uses the host kernel instead of its own. It uses many of the host's resources directly, so resources don't have to be allocated to each container. A typical Docker container is 300-500MB in size, though it can be larger or smaller depending on the base image used and the packages installed during the build. Because containers share the host's resources, a Docker container can start up in milliseconds versus the couple of minutes a legacy virtual machine needs. One main idea of containers is that they serve one purpose and one purpose alone: a container is a web server, for example, but not a database server too. The database is hosted in another container or on another server.
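
To make the one-purpose-per-container idea concrete, here's a minimal sketch; the image names and options are illustrative, not our production setup:

    # Illustrative only: one container per concern.
    # A database in its own container...
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
    # ...and a web server in a separate container that talks to it.
    docker run -d --name web --link db:mysql -p 80:80 drupal:8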

We've gone through many iterations of the containers that host our sites and their supporting containers. Our current generation has served us well in production. It consists of a front-end NGINX-based proxy container that listens on HTTP and HTTPS and proxies connections to the appropriate container hosting the requested website. We use Amazon Elastic File System (EFS) for storing persistent data, and several EC2 instances run the current version of Docker and serve all our websites and applications.
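
As a rough sketch of the shape of that setup (using the community jwilder/nginx-proxy image as a stand-in for our custom proxy container, not our actual configuration):

    # Stand-in for our custom proxy: routes requests by Host header.
    docker run -d --name proxy -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy
    # A site container advertises its hostname via VIRTUAL_HOST;
    # the proxy picks it up and forwards matching requests to it.
    docker run -d -e VIRTUAL_HOST=example.com some-drupal-image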

We have recently finalized the next generation of the container that will host our Drupal websites. Along with the recently released Docker for Mac, it will also enable us to move our local development to Docker instead of a MAMP configuration. The hope is that by keeping the same software versions from development to staging to production, we'll catch problems sooner, which will make for smoother deployments. While software versions will stay consistent from development to production, developers will be able to switch between PHP or Node versions more easily, based on each project's requirements.
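
For local development, that might look something like the following hypothetical example, using the official PHP images rather than our actual dev image:

    # Mount the local checkout into the container and pick the
    # PHP version per project by swapping the image tag.
    docker run -d --name mysite-dev -p 8080:80 \
      -v "$PWD":/var/www/html \
      php:7.0-apache    # or php:5.6-apache, per project needs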

At Code Koalas, we look for ways to automate any repeatable task. Code that our developers commit is deployed within minutes to development, staging, and production web servers. To make that possible, the DevOps team works with developers to define code-building processes, the goal being a repeatable process from one project to the next. Within that larger process, developers can adjust individual steps to fit each project's needs without sacrificing automation.

Our next generation of Drupal containers is no exception to this mindset of standardization and automation. The container is designed generically: environment variables set at run time transform the vanilla container into a functional web server. These environment variables configure things like the following (see the example after this list):

  • Apache's root directory
  • Git repository URL
  • SMTP credentials for forwarding email through AWS Simple Email Service (SES)
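
For example, starting a site container might look like this; the variable names here are hypothetical, and the real container's interface may differ:

    # Hypothetical variable names, shown for illustration only.
    docker run -d --name client-site \
      -e APACHE_DOCROOT=/var/www/html/web \
      -e GIT_REPO=git@gitlab.example.com:client/site.git \
      -e SMTP_HOST=email-smtp.us-east-1.amazonaws.com \
      -e SMTP_USER=... -e SMTP_PASS=... \
      our-registry/drupal-base:latest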

When the container starts up, it pulls the latest code from our GitLab server, sets some Drupal settings, starts PHP-FPM and Apache, and then schedules a cron job that runs Drupal's cron and pulls from Git every 15 minutes. We're in the early stages of deploying it to production; once the bugs are worked out, we'll go back and move all our current websites to it.
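
An entrypoint along those lines might look roughly like this; it's a sketch under assumed paths and variable names, not our actual script:

    #!/bin/bash
    # Sketch of a container startup script; paths and names are assumptions.
    set -e

    # Pull the latest code from GitLab (clone on first run, pull after).
    git clone "$GIT_REPO" /var/www/html 2>/dev/null \
      || git -C /var/www/html pull

    # Run Drupal cron and pull from Git every 15 minutes.
    echo '*/15 * * * * git -C /var/www/html pull && drush --root=/var/www/html cron' | crontab -
    cron

    # Start PHP-FPM in the background, then Apache in the foreground.
    php-fpm -D
    exec apache2-foreground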

In my next blog post, I'll walk through the actual process of building our latest generation of Docker containers.
