
What Is Docker?

Docker is a system that allows processes to run in self-contained environments called “containers”. These containers are similar to Virtual Machines (VMs) in that you can run a different operating system in them from the one running on the host machine. They differ from VMs in that they’re much more “lightweight” than a full VM: instead of installing a complete operating system, each container shares the same Linux kernel as the host server and adds only the packages specific to its guest OS.

This is probably easier to explain with a diagram:

Virtual Machines
Virtual machines each have their own copy of the guest operating system.

With a VM, every instance has its own copy of the guest operating system, all of the libraries it needs, and your application(s) running in a totally isolated space.

Docker Containers
Containers run on a shared layer, the Docker Engine, and share access to binaries and libraries as required.

All of this means that you can download the components and build an image in a few minutes, or maybe seconds, rather than the time it normally takes to install Linux on a machine. Once you’ve built an image, it sits on the server in the same way as any other file, and you can start it via the Docker Command Line Interface (CLI) “docker container run” command. You can use the CLI to see the images you have with:

docker image list

By running images, you create containers. If you don’t have the image locally, the run command will do its best to pull it from a registry before it actually runs it. It really is that simple.
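As a quick sketch – assuming Docker is installed and its daemon is running, and using the small “alpine” image from Docker Hub purely as an example:

```shell
# List the images already downloaded to this machine
docker image list

# Run a command in a new container; if the "alpine" image isn't
# present locally, Docker pulls it from Docker Hub first
docker container run alpine echo "hello from a container"

# List containers, including stopped ones, to see what was created
docker container list --all
```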

Can I Have A Simple Example?

Of course! How about an Ubuntu server, running the latest version of Ubuntu, which connects you to a bash shell:

sudo docker container run -it ubuntu /bin/bash

Once everything is downloaded and built, the container is created to run Ubuntu. When the command finishes, a container will be running on your machine and the command prompt you see will be running within the container. You can also run the container in “detached” mode. If you do this, Docker will start the container as a separate process, running in the background. It then behaves much like a remote server, and you can interact with it using tools such as “docker exec”, in much the same way you’d use ssh.
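Here’s a sketch of detached mode, assuming Docker is installed – the container name “web” and the nginx image are just illustrative choices:

```shell
# Start an nginx web server in detached mode; -d backgrounds it,
# and -p maps host port 8080 to port 80 inside the container
docker container run -d --name web -p 8080:80 nginx

# Check that it's running in the background
docker container ls

# Open an interactive shell inside the running container,
# much as you might ssh into a remote server
docker container exec -it web /bin/bash

# Stop and remove it when you're done
docker container stop web
docker container rm web
```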

Where Does This Come From?

By default, most Docker images are retrieved from the Docker Hub. There are thousands of different images there.

Can I Customize Images?

Yes. Despite the thousands of images available in the Docker Hub, and in many other places, there’s a good chance you won’t find exactly what you want. That’s not a problem – use a Dockerfile to specify a base image and the extra packages, software and settings that you want. Use docker build to create an image, and start using it.
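A minimal sketch of what a Dockerfile looks like – the base image, package names and file paths here are purely illustrative:

```dockerfile
# Start from an official base image
FROM ubuntu:22.04

# Install the extra packages you need
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy in your own files and settings
COPY app.sh /usr/local/bin/app.sh

# Command to run when a container starts from this image
CMD ["/usr/local/bin/app.sh"]
```

Build it with “docker build -t my-image .” and then start a container from it with “docker container run my-image”.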

Why Would I Want To Use Docker Containers?

If you’re developing systems that might run on separate servers, possibly with different operating systems, you can run them all on your desktop while you do it. Alternatively, if you’re developing systems that need more rights than you’re willing or able to give them on your development machine, containers could be the way to go. Containers are also useful if you want to try out some software you can’t, or don’t want to, install directly on your machine – maybe there’s a media server, VPN server or web server you’d like to try out.

If you’re developing a system with a micro-service architecture, containers are a great way to implement it and, let’s be honest, having Docker AND micro-services on your resume always looks good! If you eventually want to move your micro-services to the cloud, most providers can handle containers directly without much, if any, migration work. So now you can get “Cloud” on your resume as well, while greatly simplifying the migration!

So, Can I Just Run Everything In A Container?

Sort of, but not quite. An important thing to remember about containers is that they’re effectively stateless: their writable layer is ephemeral, so any data they hold disappears once the container is gone. This is great if you have error-prone software – literally “switching it off and on again” really will work. It’s not so good if you’re trying to run a database server, or a file server, or do batch processing, or anything else that needs state to be preserved.

You can get around this by mapping external directories as volumes within your container. If you do this then external directories appear as internal directories to the container, and all of the data written to them is preserved between restarts. By doing this, you get the benefits of containers and the security of external files.
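For example – the host path and the postgres image here are assumptions for illustration – the -v flag maps a host directory into the container:

```shell
# Map the host directory ./pgdata to /var/lib/postgresql/data
# inside the container, so the database files survive restarts
docker container run -d \
  --name db \
  -e POSTGRES_PASSWORD=example \
  -v "$(pwd)/pgdata:/var/lib/postgresql/data" \
  postgres
```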

If you’re going to do anything beyond the absolute basics, you’re better off creating a Dockerfile, which is a text file defining the base image to use, along with the packages you want installed, all of the configuration details you want, and any other commands you need to tie it all together.

What Else Should I Know?

Containers can even be configured to restart if the software within them crashes. This sounds like it shouldn’t be necessary, but there are many ways for even well-written software to crash. There are even more ways for it to crash if, like so much software, it’s stunningly badly written, but let’s skip over that and not speak of it again. At least not for a while. Anyway, however it crashes, an automated restart can be a useful last-ditch attempt to get things working again. As someone who’s done on-call support, I’ve always tried to design and write systems so that they do everything they can to keep limping along before they fall over and start sending alerts outside work hours. It’s good that Docker supports this approach too.
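Restart behaviour is set with the --restart flag; “unless-stopped” below is one of several policies (others include “no”, “on-failure” and “always”), and the nginx image is just an example:

```shell
# Restart the container automatically if its process crashes,
# but not if you stop it deliberately
docker container run -d --restart unless-stopped nginx
```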

Docker runs natively on Linux, macOS and Windows. Although you can't always run containers built for one operating system on another, it IS possible to do it to a certain extent:

Host OS   | Linux containers       | macOS containers | Windows containers
Linux     | Yes                    | No               | No
macOS     | No                     | Yes              | Yes - with tools
Windows   | Yes - built-in option  | No               | Yes - built-in option

Summary

Docker is a very useful tool for creating “almost” virtual machines, where you can run isolated instances of the operating systems you want and host applications on them. It seems like a simple idea, but it’s flexible enough that its potential is almost unlimited. Single containers are useful, and are a great step into the world of “infrastructure as text”, but imagine how useful this would be if you could configure a number of containers to work together and communicate with each other. If you want to do this, you need to use something like Docker Compose or, if you’re looking at a more complex configuration, Kubernetes. For the moment, it might be best to do some experimenting with Docker and see how that goes.
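As a taste of what that multi-container world looks like, here’s a minimal, illustrative Docker Compose file – the service names, images and paths are assumptions, not a recommendation:

```yaml
# docker-compose.yml: a web server and a database that can
# reach each other by service name on a shared network
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ./pgdata:/var/lib/postgresql/data
```

Running “docker compose up” in the same directory starts both containers together.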