Adventures in Linux hosting: Learning to use Docker

In the past, I’ve tried running multiple wikis from a single LAMP stack, just using a different database for each one. The problem I ran into was that anything that caused one wiki to lock up MySQL locked up all of the wikis at the same time. This could happen when some bot was ignoring robots.txt, or when one of our extensions was using inefficient database calls. For example, the CACAO scoreboard can be expensive to regenerate, and classes using GONUTS could bring down a different wiki.

The solution a decade ago was to buy a separate server for each wiki. This is still theoretically a solution, but it’s not practical without more funding than I have. Even when we had more money, we were splitting the wikis up between machines, with each machine running one relatively high-traffic wiki plus several low-traffic ones.

An alternative solution is to create multiple virtual machines and treat them as different servers. This can be done, but it has efficiency issues. That leads to the solution that has come up in recent years: multiple containers using Docker. This may not be the way I go in the end, but I thought it would be interesting to learn more about how containers work.

Install Docker Community Edition

First, I need to install Docker for Ubuntu. I’m going to try installing from the Docker repositories using apt, following the steps in the Docker docs.

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

I think this first step is needed so apt can validate the source of the Docker repo via certificates.

All of these were already at the latest version. Next, add Docker’s key and repository as an apt source:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"
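
The install docs also suggest verifying the fingerprint of the key you just added. I didn’t bother, but it would be something along these lines (0EBFCD88 is the key ID the docs search for):

sudo apt-key fingerprint 0EBFCD88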

Now we can see that the Docker repo has been added:

sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]        
Get:4 https://download.docker.com/linux/ubuntu xenial InRelease [49.8 kB]          
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages [3,150 B]
Fetched 359 kB in 0s (490 kB/s)                                                     
Reading package lists... Done

Then install the Docker CE package:

sudo apt-get install docker-ce

Check that it worked

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete 
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Make sure Docker launches on reboot

sudo systemctl enable docker
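
Before rebooting, a quick sanity check that the unit is actually enabled (my own addition, not from the install guide):

systemctl is-enabled docker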

I restarted the server to test this, and it seems to work. They recommend preventing Docker from updating itself in production environments, but I’m going to defer that.
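
If I do come back to that, holding the package with apt-mark is probably the simplest approach; a sketch I haven’t applied here (apt-mark unhold docker-ce would release it again before a deliberate upgrade):

sudo apt-mark hold docker-ce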

The Docker daemon runs as root, so to run docker commands without sudo I need to add myself to the docker group:

sudo usermod -aG docker $USER
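
The new group membership doesn’t apply to the current shell, so this takes a fresh login (or, I believe, a newgrp docker in the current session). After that, the earlier test should work without sudo:

docker run hello-world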

Running the tutorial

Part 2 of the Docker tutorial builds a small Python app. I ran into the dreaded incorrect-indentation problem with Python, and realized that one thing with Docker is that changing the source has no effect unless you rebuild the image and recreate the container.
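
For reference, the edit-and-rebuild cycle looks roughly like this; the friendlyhello image name and the 4000:80 port mapping are the tutorial’s, so adjust to taste:

docker build -t friendlyhello .
docker run -p 4000:80 friendlyhello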

Update:

I’m returning to the tutorial after spending some time trying, unsuccessfully, to get MySQL instances working with systemd.

After debugging the Python indentation, I was able to get the tutorial app running. The docker command lets you manage containers either in bulk or one at a time. From the experimenting I had done four days earlier, there were a bunch of stopped containers cluttering things up. These could be viewed with:

docker ps -a

Containers can be removed individually using docker rm, but bulk cleanup takes a shell one-liner, for example:

docker ps --filter "status=exited" | grep 'days ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
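
Newer Docker releases also have a built-in prune command that should do roughly the same job; I haven’t switched to it, but something like this ought to remove stopped containers older than four days:

docker container prune --filter "until=96h"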

Docker commands are listed in the Docker CLI reference.