Multiple Docker mysql instances on different ports

Found a tutorial that gives the following example:

$ docker run -d --name=new-mysql -p 6604:3306 -v /storage/docker/mysql-datadir:/var/lib/mysql mysql

Let's dissect what we're passing to docker:

  • -d is short for --detach. This runs the container as a daemon in the background.
  • --name is the name of the container.
  • -p is short for --publish. This is a port mapping where the standard MySQL port inside the container (3306) appears as a different port (6604) on the local machine.
  • -v is short for --volume. This tells the container to use a path outside the container for storing the MySQL files, including the data. This is needed so that the data persists if the container goes away.
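
To check that the publishing and the volume came out as intended, something like this should work for the example above:

docker ps --filter "name=new-mysql"           # is it running?
docker port new-mysql                         # e.g. 3306/tcp -> 0.0.0.0:6604
docker inspect -f '{{ .Mounts }}' new-mysql   # confirms the host path mapped to /var/lib/mysql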

Tweaking this for my setup, the container exited with an error because the MySQL root password was not set. After struggling with another error that came from invisible crud introduced by copy-paste (sigh), I got this to work, where mypassword is replaced by a real password:

docker run -d --name=mysql-rep01 -p 3307:3306 --env "MYSQL_ROOT_PASSWORD=mypassword" -v /var/lib/mysql-replica01:/var/lib/mysql mysql
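
To confirm the container actually came up (and that the root password was picked up from the environment), the container logs are the place to look:

docker logs --tail 20 mysql-rep01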

To connect to this mySQL instance, I need to specify the host with -h and the port to use with -P.

mysql -u root -p -P 3307 -h 127.0.0.1

This doesn't work if I use localhost instead of 127.0.0.1, because the mysql client treats localhost specially and tries to connect over the local socket instead of TCP.
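
If you do want to type localhost, the client can be forced onto TCP instead of the socket; I believe this is equivalent:

mysql -u root -p -P 3307 -h localhost --protocol=TCP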

Connect it to a copy of phpMyAdmin

The Ubuntu apt installation arranges things a little differently from a manual installation. The configuration is inside /etc/phpmyadmin. In addition to the usual files, there's a directory called conf.d; anything in it with a .php extension is included. This allows you to create separate files for each database container. I made one for the Docker container, copying the server block from config.inc.php and taking out a bunch of statements that rely on the global variables $dbname and $dbserver. I think this is similar to the stuff that's commented out in config.inc.php, but note that the port has to be set.

/*
 * Alternate configuration file
 * This file copies a central block from config.inc.php to manage the connection
 * to an alternative db server in a Docker container publishing on local port 3307
 */

/* Authentication type */
 $cfg['Servers'][$i]['auth_type'] = 'cookie';
 $cfg['Servers'][$i]['host'] = '127.0.0.1';

 $cfg['Servers'][$i]['connect_type'] = 'tcp';
 $cfg['Servers'][$i]['port'] = '3307';
 //$cfg['Servers'][$i]['compress'] = false;
 /* Select mysqli if your server has it */
 $cfg['Servers'][$i]['extension'] = 'mysqli';
 /* Optional: User for advanced features */
 $cfg['Servers'][$i]['controluser'] = $dbuser;
 $cfg['Servers'][$i]['controlpass'] = $dbpass;
 /* Optional: Advanced phpMyAdmin features */
 $cfg['Servers'][$i]['pmadb'] = $dbname;
 $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
 $cfg['Servers'][$i]['relation'] = 'pma__relation';
 $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
 $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
 $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
 $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
 $cfg['Servers'][$i]['history'] = 'pma__history';
 $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
 $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
 $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
 $cfg['Servers'][$i]['recent'] = 'pma__recent';
 $cfg['Servers'][$i]['favorite'] = 'pma__favorite';
 $cfg['Servers'][$i]['users'] = 'pma__users';
 $cfg['Servers'][$i]['usergroups'] = 'pma__usergroups';
 $cfg['Servers'][$i]['navigationhiding'] = 'pma__navigationhiding';
 $cfg['Servers'][$i]['savedsearches'] = 'pma__savedsearches';
 $cfg['Servers'][$i]['central_columns'] = 'pma__central_columns';
 $cfg['Servers'][$i]['designer_settings'] = 'pma__designer_settings';
 $cfg['Servers'][$i]['export_templates'] = 'pma__export_templates';

/* Uncomment the following to enable logging in to passwordless accounts,
 * after taking note of the associated security risks. */
 // $cfg['Servers'][$i]['AllowNoPassword'] = TRUE;

/* Advance to next server for rest of config */
 $i++;

Note the pulldown to go between the two servers.

Then I had to create the phpmyadmin control user and create the tables for the advanced phpMyAdmin features. I just imported the SQL file from phpMyAdmin's sql/create_tables.sql. After that, the advanced features show up in phpMyAdmin.
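
For the record, the import can also be done from the command line; a sketch, assuming create_tables.sql has been copied into the current directory (it ships in the phpMyAdmin tarball under sql/):

mysql -u root -p -h 127.0.0.1 -P 3307 < create_tables.sql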

Connect it to a copy of MediaWiki

Created another MW directory on the host to use the containerized database. I just did the web install but used 127.0.0.1:3307 as the database location. The installer works and the correct tables are created. Copied LocalSettings.php to the correct location and it seems to work as a blank wiki. In LocalSettings.php, we see:

$wgDBserver = "127.0.0.1:3307";

This matches the port publishing done by the Docker container.

Make it restart automatically

Add the --restart unless-stopped flag to the docker run command, somewhere before the image name.
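
So the full command from above would presumably become the first line below; for a container that already exists, docker update can change the policy without recreating it:

docker run -d --restart unless-stopped --name=mysql-rep01 -p 3307:3306 --env "MYSQL_ROOT_PASSWORD=mypassword" -v /var/lib/mysql-replica01:/var/lib/mysql mysql

docker update --restart unless-stopped mysql-rep01
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' mysql-rep01   # should print unless-stopped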

Make it a service?

So far I’ve made a container that I can launch from the command line. The Docker docs talk about how:

Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Note that the rest of that tutorial goes beyond running a container in production. It sets things up to run multiple instances with load balancing between them. That would be nice, but I'm going to save it for another time.

Data persistence!


Adventures in Linux hosting: Multiple MySQL instances

Reading up on mySQL, I realized that rather than using Docker or VMs, the simplest thing to do is run multiple mySQL instances from the default installation.  Because the new machines are Ubuntu Linux, this can be controlled from systemd, the daemon that controls services.

There is also documentation about using MySQL with Docker, and there are some concerns with containers in general and Docker in particular:

Docker containers are in principle ephemeral, and any data or configuration are expected to be lost if the container is deleted or corrupted (see discussions here). Docker volumes, however, provides a mechanism to persist data created inside a Docker container.

and it's also not clear to me whether there are issues related to security and storage of the mysql root password. Containers are probably fine for development environments, but we want a more production-like setup for the public wikis.

So, let’s try the multiple instances via systemd first and see if that works.

mysql service

On Ubuntu, it seems that the service is mysql (not mysqld) and systemd controls it via

sudo service mysql stop|start|etc
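
Since Ubuntu 16.04 runs systemd, the service command is just a wrapper; the equivalent systemctl invocations are:

sudo systemctl status mysql
sudo systemctl restart mysql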

mySQL configuration

mySQL configuration files are often located in obscure places, and when I was installing from a distribution on Macs, the my.cnf files were sometimes hard to find. We can ask the mysql daemon where it’s looking for configuration files:
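
One way to ask, assuming the mysqld binary is on the PATH, is to dump its help text; the relevant part is near the top of the output, shown below.

mysqld --verbose --help 2>/dev/null | head -n 15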

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf 

MySQL will look for these in order, and option values in later files override anything set in earlier ones. Additional option files can be pulled in using include directives:

It is possible to use !include directives in option files to include other option files and !includedir to search specific directories for option files. For example, to include the /home/mydir/myopt.cnf file, use the following directive:

!include /home/mydir/myopt.cnf

To search the /home/mydir directory and read option files found there, use this directive:

!includedir /home/mydir

MySQL makes no guarantee about the order in which option files in the directory will be read.

In the default LAMP installation,

  • there is no /etc/my.cnf.
  • /etc/mysql/my.cnf is a symlink to /etc/alternatives/my.cnf. That file has only a pair of include directives:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

The 2 directories contain 4 .cnf files:

/etc/mysql/conf.d:
 total 8.0K
 -rw-r--r-- 1 root root    8 Jan 21  2017 mysql.cnf
 -rw-r--r-- 1 root root   55 Jan 21  2017 mysqldump.cnf

/etc/mysql/mysql.conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 3.0K Feb  3  2017 mysqld.cnf
 -rw-r--r-- 1 root root   21 Feb  3  2017 mysqld_safe_syslog.cnf

The /etc/mysql/conf.d/mysql.cnf file is empty. The /etc/mysql/mysql.conf.d/mysqld.cnf file seems to be the one with the relevant options. Following the documentation, I added stanzas for replica servers on different ports to the latter and restarted the mysql service… and nothing happens, as far as I can tell. systemd does not recognize the replica services at all.
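
As far as I can tell, the multi-instance recipes in the MySQL manual assume systemd template units (a mysqld@ service on some packagings). A quick way to see what unit files Ubuntu's mysql-server package actually provides:

systemctl list-unit-files 'mysql*'
systemctl cat mysql.service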

After lots of googling, I'm still not sure what is missing, but others seem to have run into the same problem.

Adventures in Linux hosting: Learning to use Docker

In the past, I've tried running multiple wikis from a single LAMP stack just using different databases for each one. The problem I ran into was that anything that caused one wiki to lock up mysql caused all of the wikis to lock up at the same time. This could happen when some bot was ignoring robots.txt or when one of our extensions was using inefficient database calls. For example, the CACAO scoreboard can be expensive to regenerate, and classes using GONUTS could bring down a different wiki.

The solution a decade ago was to buy a different server for each wiki. This is still theoretically a solution, but it's not practical without more funding than I have. Even when we had more money, we were splitting the wikis up between machines, where each machine would run one (relatively) high-traffic wiki and several low-traffic wikis.

An alternative solution is to create multiple virtual machines and treat them as different servers. This can be done, but has efficiency issues. This leads to the solution that’s come up in recent years: multiple containers using Docker. This may not be the way I go in the end but I thought it would be interesting to learn more about how containers work.

Install Docker Community Edition

First, I need to install Docker for Ubuntu. I’m going to try to install from the Docker repositories using apt following the steps at Docker docs.

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

I think the first step is needed to validate the source of the docker repo via certificates.

All were already the latest version. Add Docker as a source of apt repos.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"

Now we can see the docker repo has been added

sudo apt-get update
Hit:1 xenial InRelease
Get:2 xenial-updates InRelease [102 kB]
Get:3 xenial-security InRelease [102 kB]        
Get:4 xenial InRelease [49.8 kB]          
Get:5 xenial-backports InRelease [102 kB]
Get:6 xenial/stable amd64 Packages [3,150 B]
Fetched 359 kB in 0s (490 kB/s)                                                     
Reading package lists... Done
sudo apt-get install docker-ce

Check that it worked

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete 
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

Make sure Docker launches on reboot

sudo systemctl enable docker

Restarted the server to test this. Seems to work. They recommend preventing Docker from updating itself in production environments, but I'm going to defer that.
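
For future reference, the enablement can also be checked without a reboot:

sudo systemctl is-enabled docker
sudo systemctl status docker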

The Docker daemon socket is owned by root, so to run docker commands without sudo I need to add myself to the docker group:

sudo usermod -aG docker $USER
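
The group change doesn't take effect in an existing shell; either log out and back in, or start a shell under the new group and test it:

newgrp docker
docker run hello-world   # should now work without sudo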

Running the tutorial

Part 2 of the Docker tutorial makes a small Python app. I ran into the dreaded incorrect-indentation problem with Python, and realized that one thing about Docker is that changing the source has no effect unless you rebuild the image and recreate the container.


Returning to the tutorial after spending some time trying unsuccessfully to get mysql instances working with systemd.

After debugging the Python indentation, I was able to get the tutorial app running. To do stuff with containers, the docker command lets you manage all of them or specific ones. From the experimentation I had done 4 days ago, there were a bunch of stopped containers cluttering things up. These could be viewed with:

docker ps -a

Containers can be removed individually using docker rm, but bulk cleanup takes a pipeline, for example:

docker ps --filter "status=exited" | grep 'days ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
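
On newer Docker releases (1.13 and later, if I remember right) there are also built-in commands for this kind of cleanup:

docker container prune   # removes all stopped containers
docker system prune      # also removes dangling images and unused networks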

Docker commands are listed in the Docker command-line reference.

Adventures in Linux hosting: Basic Mediawiki

The main purpose of the home setup is for testing our mediawiki setup. Our old sites are running old versions of Mediawiki on php5.6. Our custom extensions need to be updated to work with the latest MW and php7 in order to satisfy IT security, and on general principles.

Documentation on mediawiki.org has some tips for running on Debian or Ubuntu. This is mostly not based on using the apt package manager, although some prereqs will be installed with apt. In particular, it looks like I need:

  • memcached
  • imagemagick
  • php-apcu
  • elasticsearch (the Ubuntu default package is 1.7.3; will need to do this differently later)

I downloaded the latest release (1.30.0) to my home directory


extracted this with tar -xzf and then used cp -R to make separate wikis inside the /var/www/html directory.

  • wiki – for general testing
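
The extract-and-copy step might look something like this; the tarball name matches the 1.30.0 release, but the target path under /var/www/html is just my choice:

tar -xzf mediawiki-1.30.0.tar.gz
sudo cp -R mediawiki-1.30.0 /var/www/html/wiki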

Set this up as a blank wiki first, so that I will get a clean copy of LocalSettings.php. The installer complained about not finding APCu, XCache, or WinCache. APCu appears to be the preferred choice; added it to the dependency list above. To get the installer to see it, apache had to be restarted. This is different from using MacPorts on a Mac, of course.

sudo service apache2 restart

I now just get this warning, which I will ignore for the moment.

Warning: The intl PECL extension is not available to handle Unicode normalization, falling back to slow pure-PHP implementation.
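
If I do decide to deal with the warning, the fix is presumably just the intl extension package plus an Apache restart:

sudo apt-get install php-intl
sudo service apache2 restart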

Did the generic test wiki first.

  • Kept the default db name my_wiki for this one.
  • Storage engine InnoDB (default)
  • Database character set UTF-8 (default is binary, but this makes it hard to see things in phpmyadmin)
  • Pre-installed extensions
    • ImageMap
    • Interwiki
    • Parserfunctions
    • SyntaxHighlight_GeSHi
    • WikiEditor
  • Enable image uploads
  • PHP object caching. Our current wikis use memcached, which is a different option. This was set up a long time ago, and caching definitely affects performance, but APCu was not compared at the time.

Using the web-based installer, I had to download LocalSettings.php to my laptop and then scp it over to the server. This isn't a big deal, but I suspect there's a way to do the whole thing from the terminal.
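
MediaWiki does ship a command-line installer that would presumably avoid the download/scp dance. I haven't tried it, but it looks something like this (the option names should be checked against --help; the credentials, wiki name, and admin account here are placeholders):

php maintenance/install.php --help
php maintenance/install.php --dbname=my_wiki --dbuser=root --dbpass=mypassword \
    --pass=adminpassword "Test Wiki" admin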

This seems to work. The next step, however, is to figure out how to do it all again using Docker so I can have multiple containers running different wikis.

Adventures in Linux hosting: Misc. setup with apt

After setting up the new home linux box on Xmas eve, I’ve been gradually building it up to be a test bed for things I do at work. The apt package manager makes this pretty easy so far, compared to what I’ve done in the past.


Sorry vim folks, I prefer emacs for shell-based editing (actually, I prefer BBEdit, which I can use over SFTP, but there are times when a local editor makes more sense).

sudo apt-get install emacs


This is a Cisco-compliant VPN client, which I’ll need if I want to connect through the firewall.

sudo apt-get install vpnc

This installs, but I can't seem to get a connection to TAMU's VPN server. This may not be needed, though, since the TAMU enterprise GitHub doesn't require VPN.


I could have done this in the software selection but let’s go ahead and set up a basic LAMP webserver:

sudo apt-get install lamp-server^

The ^ character is required; it tells apt to install the tasksel task rather than a package with that name. This installs MySQL, Apache, and PHP 7.0. The html root appears to be /var/www/html.
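
A quick sanity check of what versions came down:

apache2 -v
mysql --version
php -v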


  • sudo apt-get install phpmyadmin php-mbstring php-gettext


Yes, jbrowse is the new thing, but there are still some things I want to migrate from gbrowse. And there’s an apt package!

sudo apt-get install gbrowse

Important note: the URL is for gbrowse2! In my case, it's http://<IP>/gbrowse2/

To do updates

I thought I had set it up to do this automatically, but it isn’t. So…

sudo apt update
sudo apt upgrade
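
To actually make security updates automatic, I believe the usual route on Ubuntu is the unattended-upgrades package:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades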

Adventures in Linux hosting: Getting Ubuntu onto my home Dell T30

My MacBook is between the new Dell T30 and my monitor. You can see the USB stick attached to the MacBook where I burned the Ubuntu ISO.

Thanks to the USPS for delivering on a Sunday. The new T30 arrived today at around 10AM. As I suspected, the contents are just the main box and a power cord. I went to Best Buy and picked up the cheapest USB2 keyboard and mouse I could find (there were probably some of these lying around the lab, but I bought new). I borrowed the HDMI cable we were using for the Apple TV, and hooked up to a monitor.

Ctrl-Alt-Delete reboots. Holding down F2 gives the Dell system management software.

Set up a USB stick for installation

I found the basic installation documentation kind of confusing. So I started with the steps in this tutorial: Create a bootable USB stick, on macOS.

  • Download Ubuntu Server 16.04.3 LTS from the Ubuntu downloads page.
  • Download Etcher. This is recommended for burning the image onto the USB stick. When Etcher is done, the MacBook complains about the inserted media not being readable. Just eject it.

Putting this stick in the USB port didn’t allow it to boot. But a clue comes from the last screenshot of the tutorial where the stick is shown as an EFI boot. Changed the Dell to look for UEFI boot. Now I get an installation option for Ubuntu when I reboot.

  • Used defaults except
    • to unmount before writing partitions.
    • automatically do security updates
  • Network failed until I plugged an ethernet cable into the back connected to the Airport Time Capsule.
  • Software selection
    • standard system utilities
    • OpenSSH server

After doing this and rebooting, I had to go back to the Dell system config to switch back to legacy boot instead of UEFI. But once I had done that, I get a boot into Ubuntu and I can ssh in using the local IP address.

Shut it down to move it to a better location in the living room instead of the dining room table. Now it’s running with no keyboard, mouse, or monitor and I can ssh into it from my MacBook.

Update: The Dell documentation talks about using their LifeCycle Controller to install an OS. But this doesn’t seem to come on the T30.

Adventures in Linux hosting: Development setup at home

One option to do my self-designed Linux education would be to do everything in VMWare. But while I’ve done that in the past, it hasn’t taught me what’s going on when we’ve had problems with our existing boxes where the IT people tell me that they had to do something with the kernel, or when I’ve had to reboot and watch the stream of warnings or worse on the local monitor before Ubuntu even loads.  I suspect that some of that has to do with a long-gone IT employee screwing up the initial installation, but I’d like to understand it better.

So, with computers being reasonably cheap, I decided it would be fun to set up a version of our websites on our home wifi network, not accessible to the outside world. Yesterday I ordered a Dell PowerEdge T30 Business Mini Tower Server System, and was pleasantly surprised that it had free shipping to arrive on Christmas Eve. The system comes with 8 GB of RAM and a 1 TB HD. Eventually, I will probably upgrade both of those if I can (I think it's possible with this model), but even with that base configuration it's comparable to what we are currently using on our Macs.

Debby has gone to visit family, leaving me to look after the cats in Texas. This will be an opportunity for a nerdy Xmas break! Off to read Ubuntu manuals


The first question even before the thing arrives is about the appropriate installation. The Pentium G4400 dual core processor on the box I ordered is cheaper than the Xeon alternative. It seems that it falls under the Intel EM64T category of processors, even though the string “EM64T” doesn’t show up on the Intel page for that CPU.

Time for me to get more serious about Linux

My group started into bioinformatics thanks to former students Hai Zhu and Leonardo Marino-Ramirez, who set up the first LAMP webserver in the lab on a box they built. Being Mac users, we thought that the Unix roots of OS X would be useful in the transition to doing informatics and web-based resources, so I purchased a G5 XServe, which started a series of machines we named based on protein quaternary structure; that one was called dimer. As the EcoliHub/PortEco/EcoliWiki projects got funded and we got some stimulus money to work on B. subtilis, we gradually added to our collection of machines. trimer and tetramer were Intel XServes. hexamer was the last version of the Intel XServe shipped by Apple. Meanwhile pentamer and heptamer were Linux boxes running Ubuntu. For the most part the heavy work was done on the Macs, and we even moved GONUTS to run on a Mac mini after one of the Intel XServes died.

One of the things I liked about the XServe setup when we first got dimer was the way we could do server administration via the Server Admin and Workgroup Manager apps. But as the machines aged, and the older ones were not supported on newer OSX versions, Apple did something annoying: they made Server Admin incompatible with older OSX releases, even though it was pretty obvious that it was just a pretty front end to send unix commands and show the outputs in the GUI. So I gradually started learning how to do various system admin tasks via the terminal; there are some I’ve never figured out how to do completely without the GUI, though.

The rack-mountable blade servers stopped getting updates with Snow Leopard. We've kept the TAMU IT security people at bay by running MacPorts to replace obsolete packages, but it's gotten to the point where it's time to give up on the Mac servers and migrate everything to Linux. My department prefers Ubuntu, so that's the way I'm going to go.

In the long run I expect we will move from our own hardware to A&M server virtualization or maybe something like Amazon. But for now, I'm not convinced the capability and price beat doing our own hosting, and there are issues with URLs and domains for moving some of our sites off campus.

Rain chains

We had rain chains installed as part of a recent remodel of the front of the house. It rained pretty hard last night and was still raining this morning. This video illustrates some problems with the installation.

It looks like the connection to the gutters isn’t actually feeding the flow onto the chain. This leads to a lot of splashing and erosion around the drain at the bottom.