How Docker Containers simplify Microservice management and deployment

My recent personal and professional development efforts have taken a microservices approach. This escalated to eight services written in three different languages across five different frameworks. After banging my head against the command line for a few days trying to get these to coexist, I decided to try Docker containers to streamline and simplify the process.

A software container is a lightweight virtualisation technology that abstracts away the complexity of the operating system and simply exposes ports to the host it runs on. You can run containers on most operating systems and platforms, including all major PaaS providers. Keeping the complexity within the container means that host systems can focus on scaling and management. You also get a high level of consistency, allowing you to ship containers across different servers or platforms with ease: you build the image once, save it, and then pull it onto each of your environments for testing and deployment.
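As a sketch of what that looks like in practice, running a containerised service and mapping one of its ports to the host is a single command (the image and container names here are illustrative):

```shell
# Run a container in the background, mapping container port 3000
# to port 8080 on the host
docker run -d -p 8080:3000 --name orders-service my-org/orders-service
```

The host only needs to know about port 8080; everything else stays inside the container.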

Utilising the service-instance-per-container pattern is a great way to manage a set of services, and it brings several benefits. You gain scalability simply by changing the number of container instances. You get a great level of encapsulation, meaning all services can be stopped, started, and deployed in the same way. You can limit CPU and memory at the service level. Finally, containers are much faster to work with than fully fledged virtual machines and simpler to ship to platforms for deployment. Amazon, for example, has built-in support via its Container Service as well as Elastic Beanstalk. I have used Ansible for deployment, as it has a really nice Docker wrapper that makes starting, stopping, pushing, and pulling images between servers only a couple of lines of code.
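The per-service resource limits are set with flags on `docker run`; something along these lines caps a container's memory and gives it a relative CPU weighting (the service name is illustrative):

```shell
# Limit the container to 256 MB of memory and half the
# default relative CPU share
docker run -d -m 256m --cpu-shares 512 --name orders-service my-org/orders-service
```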

One of the things you need to watch out for is image size, as images can get quite large. Base images help minimise this, promote reuse, and allow you to control the underlying approach across multiple repositories without code duplication. Each step in a Docker build is cached, so only the stages that have changed need to be rebuilt and pushed. So be careful about the order in which you run build steps, leaving the stages that change most often until last. Docker Hub is like a GitHub repository for built images and makes pushing and pulling images require minimal infrastructure and learning. You can pay for private repositories in the same way that GitHub allows you to, or you can set up your own registry if you are that way inclined.
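As a sketch of that ordering for a Node service, the Dockerfile below installs dependencies before copying the application code, so frequent code changes do not invalidate the cached dependency layer (the base image version and file names are illustrative):

```dockerfile
# Start from an official Node base image
FROM node:0.12

WORKDIR /app

# Dependencies change rarely, so this layer stays cached between builds
COPY package.json /app/
RUN npm install

# Application code changes often, so copy it last
COPY . /app

CMD ["node", "server.js"]
```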

Running Docker containers locally on a Mac is pretty straightforward with Boot2Docker, which spins up a lightweight VirtualBox VM with the Docker daemon running on it and allows you to easily test, build, push, and query the main Docker repository. Kitematic was also recently acquired by Docker as an alternative for people who are averse to the command line. There is also a large set of officially maintained base images for running Node, Jenkins, and WordPress, among many others. You need to understand some patterns around how best to persist data, because if you remove a Docker container you lose the data within it. Data-only containers are a way around this: a pattern that lets you stop and recreate containers without losing data or binding it too closely to the underlying operating system.
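A minimal sketch of the data-only container pattern, assuming a MySQL service (the container names are illustrative):

```shell
# Create a data-only container that owns the /var/lib/mysql volume
docker create -v /var/lib/mysql --name mysql-data busybox

# Run the database using the volumes from the data container;
# the database container can now be removed and recreated
# without losing the data
docker run -d --volumes-from mysql-data --name db mysql
```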

Microservices allow us to choose the right tool for the job, and Docker containers abstract away some of the complexity of this approach. Utilising base images promotes reuse, and the Dockerfiles themselves are checked in with the projects, so anyone who pulls a project can see how it is built, which is a huge bonus. Most downsides, like persistence and image size, have strategies to minimise their impact.

I would be very interested to hear your thoughts and experiences with microservices and containerisation.


5 comments on “How Docker Containers simplify Microservice management and deployment”

  1. Learning Docker really did take me to a new level and got me super familiar with the command line. Honestly, as a concept, wrapping a virtual machine into a list of commands is ingenious. One of the best benefits I gained from using Docker was the speed you can unlock from caching.

    When you build the same container multiple times, Docker will cache the parts that it has processed before, and will only really start running and assembling the new parts of the container when it detects something has changed.

    I found it hugely satisfying dropping a 45-minute build down to two minutes! Simply through heavy command optimisations and ensuring that the things that are likely to change are the things that Docker does last. It’s one of the many situations where it’s useful to have that full-stack dev-opsy knowledge of what is being built, as this is the key driver of the optimisation.

  2. I’ve really enjoyed using Docker in test systems and I think it’s a great way to make things beautifully portable. However, I’m really interested in seeing how it could work in a scaled enterprise environment. The client I’m working with at the moment has high enterprise-level requirements, and I haven’t yet worked out how a swarm of Docker containers can make a scalable, robust production environment with all the requirements around security, log aggregation, load balancing, monitoring, alerting, etc. In theory these things shouldn’t be overly difficult, but in practice I haven’t seen it done. Doubtless due to the fact that Docker is pretty new and enterprise outfits are usually relatively slow moving. I would love to hear about some real-life examples.
