Monthly Archives: January 2018

Multiple Docker mysql instances on different ports

Found a tutorial that gives the following example:

$ docker run -d --name=new-mysql -p 6604:3306 -v /storage/docker/mysql-datadir:/var/lib/mysql mysql

Let’s dissect what we’re passing to docker:

  • -d is short for --detach. This runs the container as a daemon in the background
  • --name is the name of the container
  • -p is short for --publish. This is a port mapping where the standard mysql port inside the container (3306) appears as a different port (6604) on the local machine
  • -v is short for --volume. This tells the container to use a path outside the container for storing the mysql files, including the data. This is needed so that the data persists if the container goes away.

Tweaking this for my setup exited with an error because the mySQL root password was not set. After struggling with an error caused by invisible crud from copy-paste (sigh), I got this to work, where mypassword is replaced by a real password.

docker run -d --name=mysql-rep01 -p 3307:3306 --env "MYSQL_ROOT_PASSWORD=mypassword" -v /var/lib/mysql-replica01:/var/lib/mysql mysql

To connect to this mySQL instance, I need to specify the host with -h and the port with -P.

mysql -u root -p -P 3307 -h 127.0.0.1

This doesn’t work if I use localhost instead of 127.0.0.1, because the mysql client treats localhost specially and connects over the local Unix socket rather than TCP.

Connect it to a copy of phpMyAdmin

The Ubuntu apt installation arranges things a little differently from a manual installation. The configuration is inside /etc/phpmyadmin. In addition to the usual files, there’s a directory called conf.d; anything there with a .php extension is included. This allows you to create a separate file for each database container. I made one, taking out a bunch of statements that rely on the global variables $dbname and $dbserver. It’s similar to the stuff that’s commented out in the main configuration, but note that the port has to be set.

/* Alternate configuration file
 * This file copies a central block from the main configuration to manage the connection
 * to an alternative db server in a Docker container publishing on local port 3307
 */
/* Authentication type */
 $cfg['Servers'][$i]['auth_type'] = 'cookie';
 $cfg['Servers'][$i]['host'] = '127.0.0.1';

 $cfg['Servers'][$i]['connect_type'] = 'tcp';
 $cfg['Servers'][$i]['port'] = '3307';
 //$cfg['Servers'][$i]['compress'] = false;
 /* Select mysqli if your server has it */
 $cfg['Servers'][$i]['extension'] = 'mysqli';
 /* Optional: User for advanced features */
 $cfg['Servers'][$i]['controluser'] = $dbuser;
 $cfg['Servers'][$i]['controlpass'] = $dbpass;
 /* Optional: Advanced phpMyAdmin features */
 $cfg['Servers'][$i]['pmadb'] = $dbname;
 $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
 $cfg['Servers'][$i]['relation'] = 'pma__relation';
 $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
 $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
 $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
 $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
 $cfg['Servers'][$i]['history'] = 'pma__history';
 $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
 $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
 $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
 $cfg['Servers'][$i]['recent'] = 'pma__recent';
 $cfg['Servers'][$i]['favorite'] = 'pma__favorite';
 $cfg['Servers'][$i]['users'] = 'pma__users';
 $cfg['Servers'][$i]['usergroups'] = 'pma__usergroups';
 $cfg['Servers'][$i]['navigationhiding'] = 'pma__navigationhiding';
 $cfg['Servers'][$i]['savedsearches'] = 'pma__savedsearches';
 $cfg['Servers'][$i]['central_columns'] = 'pma__central_columns';
 $cfg['Servers'][$i]['designer_settings'] = 'pma__designer_settings';
 $cfg['Servers'][$i]['export_templates'] = 'pma__export_templates';

/* Uncomment the following to enable logging in to passwordless accounts,
 * after taking note of the associated security risks. */
 // $cfg['Servers'][$i]['AllowNoPassword'] = TRUE;

/* Advance to next server for rest of config */
 $i++;

Note the pulldown to go between the two servers.

Then I had to create the phpmyadmin user and then create the tables for the advanced phpmyadmin features. I just imported the SQL file from the phpmyadmin distribution, sql/create_tables.sql. Now it looks like this:

Connect it to a copy of MediaWiki

Created another MW directory on the host to use the containerized database. I just did the web install, but used 127.0.0.1:3307 as the database location. The installer works and the correct tables are created. Copied LocalSettings to the correct location and it seems to work as a blank wiki. In LocalSettings, we see:

$wgDBserver = "127.0.0.1:3307";

This accounts for the port publishing done by the Docker container.

Make it restart automatically

Add the --restart unless-stopped flag somewhere before the image name.

Make it a service?

So far I’ve made a container that I can launch from the command line. The Docker docs talk about how:

Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Note that the rest of that tutorial goes beyond running a container in production: it sets up multiple instances with load balancing between them. That would be nice, but I’m going to save that for another time.

Data persistence!

After doing all this, I restarted the machine. The basic mysql systemd service came back up and so did the replicas. It also works if I kill the original container and start a new one that mounts the same data volume.

Adventures in Linux hosting: Multiple MySQL instances

Reading up on mySQL, I realized that rather than using Docker or VMs, the simplest thing to do is run multiple mySQL instances from the default installation.  Because the new machines are Ubuntu Linux, this can be controlled from systemd, the daemon that controls services.

There is also documentation about using Docker, and there are some concerns with containers in general and Docker in particular:

Docker containers are in principle ephemeral, and any data or configuration are expected to be lost if the container is deleted or corrupted (see discussions here). Docker volumes, however, provides a mechanism to persist data created inside a Docker container.

It’s also not clear to me whether there are issues related to security and mysql root password storage. The Docker images are probably fine for development environments, but we want a more production-like setup for the public wikis.

So, let’s try the multiple instances via systemd first and see if that works.

mysql service

On Ubuntu, it seems that the service is mysql (not mysqld), and systemd controls it via

sudo service mysql stop|start|etc

mySQL configuration

mySQL configuration files are often located in obscure places, and when I was installing from a distribution on Macs, the my.cnf files were sometimes hard to find. We can ask the mysql daemon where it looks for configuration files; near the top of the output of mysqld --verbose --help:

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf 

MySQL reads these in order, and option values in later files override anything set in earlier ones. Additional option files can be added using include directives:

It is possible to use !include directives in option files to include other option files and !includedir to search specific directories for option files. For example, to include the /home/mydir/myopt.cnf file, use the following directive:

!include /home/mydir/myopt.cnf

To search the /home/mydir directory and read option files found there, use this directive:

!includedir /home/mydir

MySQL makes no guarantee about the order in which option files in the directory will be read.
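That last-file-wins behavior can be sketched in plain shell. The two option files below are throwaway stand-ins created in a temp directory, not MySQL’s real search path:

```shell
#!/bin/sh
# Sketch of MySQL option-file precedence: later files override earlier ones.
# The two .cnf files are hypothetical stand-ins for /etc/my.cnf etc.
dir=$(mktemp -d)
printf '[mysqld]\nport=3306\nkey_buffer_size=16M\n' > "$dir/01-global.cnf"
printf '[mysqld]\nport=3307\n' > "$dir/02-local.cnf"

# Read the files in order; the last assignment to each key wins.
awk -F= '/=/ { v[$1] = $2 }
         END { printf "port=%s\nkey_buffer_size=%s\n", v["port"], v["key_buffer_size"] }' \
    "$dir/01-global.cnf" "$dir/02-local.cnf"
# port comes out as 3307 (the second file overrode the first);
# key_buffer_size survives from the first file.
```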

In the default LAMP installation,

  • there is no /etc/my.cnf.
  • /etc/mysql/my.cnf is a symlink to /etc/alternatives/my.cnf. That file has only a pair of include directives:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

The two directories contain 4 .cnf files:

/etc/mysql/conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 8 Jan 21 2017 mysql.cnf
 -rw-r--r-- 1 root root 55 Jan 21 2017 mysqldump.cnf

/etc/mysql/mysql.conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 3.0K Feb 3 2017 mysqld.cnf
 -rw-r--r-- 1 root root 21 Feb 3 2017 mysqld_safe_syslog.cnf

The /etc/mysql/conf.d/mysql.cnf file is empty. The /etc/mysql/mysql.conf.d/mysqld.cnf file seems to be the one with the relevant options. Following the documentation, I added stanzas for replica servers on different ports to the latter and restarted the mysql service… and nothing happened, as far as I can tell. systemd does not recognize the replica services at all.

Doing lots of googling, I’m not sure what is missing, but others seem to have the same problem.
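For reference, the stanzas I was adding followed the manual’s multi-instance pattern; this is a guess at what a working one would look like, not a verified configuration:

```ini
# Hypothetical replica stanza, per the MySQL multi-instance documentation
[mysqld@replica01]
datadir = /var/lib/mysql-replica01
port    = 3307
socket  = /var/run/mysqld/mysqld-replica01.sock
```

The manual pairs these [mysqld@suffix] groups with a systemd template unit, mysqld@.service, started as systemctl start mysqld@replica01. My suspicion is that Ubuntu’s mysql-server packaging doesn’t ship that template unit, which would explain why systemd never sees the replica services.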

Adventures in Linux hosting: Learning to use Docker

In the past, I’ve tried running multiple wikis from a single LAMP stack, just using different databases for each one. The problem I ran into was that anything that caused one wiki to lock up mysql locked up all of the wikis at the same time. This could happen when some bot was ignoring robots.txt, or when one of our extensions was making inefficient database calls. For example, the CACAO scoreboard can be expensive to regenerate, and classes using GONUTS could bring down a different wiki.

The solution a decade ago was to buy a different server for each wiki. This is still theoretically a solution, but it’s not practical without more funding than I have. Even when we had more money, we split the wikis up between machines so that each machine ran one (relatively) high-traffic wiki and several low-traffic wikis.

An alternative solution is to create multiple virtual machines and treat them as different servers. This can be done, but has efficiency issues. This leads to the solution that’s come up in recent years: multiple containers using Docker. This may not be the way I go in the end but I thought it would be interesting to learn more about how containers work.

Install Docker Community Edition

First, I need to install Docker for Ubuntu. I’m going to install from the Docker repositories using apt, following the steps in the Docker docs.

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

I think this first step is needed to validate the source of the docker repo via certificates. All were already the latest version. Next, add Docker as a source of apt repos.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"

Now we can see that the docker repo has been added:

sudo apt-get update
Hit:1 xenial InRelease
Get:2 xenial-updates InRelease [102 kB]
Get:3 xenial-security InRelease [102 kB]        
Get:4 xenial InRelease [49.8 kB]          
Get:5 xenial-backports InRelease [102 kB]
Get:6 xenial/stable amd64 Packages [3,150 B]
Fetched 359 kB in 0s (490 kB/s)                                                     
Reading package lists... Done
sudo apt-get install docker-ce

Check that it worked

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete 
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
Make sure Docker launches on reboot

sudo systemctl enable docker

Restarted the server to test this. Seems to work. They recommend preventing Docker from updating itself in production environments, but I’m going to defer that.

The docker socket is owned by root, so to run docker without sudo I need to add myself to the docker group (this takes effect at the next login).

sudo usermod -aG docker $USER

Running the tutorial

Part 2 of the Docker tutorial makes a small Python app. I ran into the dreaded incorrect-indentation problem with Python, and realized that with docker, changing the source has no effect unless you rebuild the image and rerun the container.


Returning to the tutorial after spending some time trying unsuccessfully to get mysql instances working with systemd.

After debugging the Python indentation, I was able to get the tutorial app running. To do stuff with containers, the docker command lets you manage all the containers or specific ones. From the experimentation I had done 4 days ago, there were a bunch of stopped containers cluttering things up. These could be viewed with

docker ps -a

Containers can be removed individually using docker rm; for bulk cleanup, a shell pipeline does the job, for example:

docker ps --filter "status=exited" | grep 'days ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
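The filter-and-extract logic in that pipeline can be checked without a running Docker daemon by feeding it canned `docker ps` output; the container IDs below are made up:

```shell
#!/bin/sh
# Canned stand-in for `docker ps --filter "status=exited"` output (fake IDs).
ps_output='CONTAINER ID   IMAGE         STATUS
1a2b3c4d5e6f   mysql         Exited (0) 5 days ago
7g8h9i0j1k2l   hello-world   Exited (0) 2 hours ago'

# Keep only rows that exited days ago, then print column 1, the container ID.
echo "$ps_output" | grep 'days ago' | awk '{print $1}'
# prints: 1a2b3c4d5e6f
```

Only the days-old container is selected; the one that exited two hours ago is left alone, which is the point of the `grep 'days ago'` guard.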

Docker commands are listed in the Docker CLI reference.

Adventures in Linux hosting: Basic Mediawiki

The main purpose of the home setup is testing our mediawiki setup. Our old sites are running old versions of Mediawiki on php5.6. Our custom extensions need to be updated to work with the latest MW and php7, both to satisfy IT security and on general principles.

The MediaWiki documentation has some tips for running on Debian or Ubuntu. This is mostly not based on using the apt package manager, although some prereqs will be installed with apt. In particular, it looks like I need:

  • memcached
  • imagemagick
  • php-apcu
  • elasticsearch (the ubuntu default package is 1.7.3; will need to do this differently later)

I downloaded the latest release (1.30.0) to my home directory, extracted it with tar -xzf, and then used cp -R to make separate wikis inside the /var/www/html directory.

  • wiki – for general testing

Set these up as blank wikis first, so that I will get a clean copy of LocalSettings. The installer complained about not finding APCu, XCache, or WinCache. APCu appears to be the preferred choice; added it to the dependency list above. To get the installer to see it, apache had to be restarted. This is different from using MacPorts on a Mac, of course.

sudo service apache2 restart

I now just get this warning, which I will ignore for the moment; installing the php-intl package should make it go away.

Warning: The intl PECL extension is not available to handle Unicode normalization, falling back to slow pure-PHP implementation.

Did the generic test wiki first.

  • Kept the default db name my_wiki for this one.
  • Storage engine InnoDB (default)
  • Database character set UTF-8 (default is binary, but this makes it hard to see things in phpmyadmin)
  • Pre-installed extensions
    • ImageMap
    • Interwiki
    • ParserFunctions
    • SyntaxHighlight_GeSHi
    • WikiEditor
  • Enable image uploads
  • PHP object caching. Our current wikis use memcached, which is a different option here. That was set up a long time ago; caching definitely affects performance, but APCu was not compared at the time.
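The caching choice ends up as a line or two in LocalSettings.php. A sketch of the two variants; the memcached address is the conventional default, not taken from our setup:

```php
## What the installer's "PHP object caching" option selects (APCu et al.)
$wgMainCacheType = CACHE_ACCEL;

## What our current wikis use instead
# $wgMainCacheType = CACHE_MEMCACHED;
# $wgMemCachedServers = [ "127.0.0.1:11211" ];
```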

Using the web-based installer, I had to download LocalSettings.php to my laptop and then scp it over to the server. This isn’t a big deal, but I suspect there’s a way to do the whole thing from the terminal.

This seems to work. The next step, however, is to figure out how to do it all again using Docker, so I can have multiple containers running different wikis.