Category Archives: computer stuff

Adventures in Linux hosting: Multiple MySQL instances

Reading up on MySQL, I realized that rather than using Docker or VMs, the simplest thing to do is run multiple MySQL instances from the default installation. Because the new machines run Ubuntu Linux, this can be controlled from systemd, the daemon that manages services.

There is also documentation about using Docker, but there are some concerns with containers in general and Docker in particular:

Docker containers are in principle ephemeral, and any data or configuration are expected to be lost if the container is deleted or corrupted (see discussions here). Docker volumes, however, provide a mechanism to persist data created inside a Docker container.

and it’s also not clear to me whether there are issues related to security and how the MySQL root password is stored. The Docker images are probably fine for development environments, but we want a more production-like setup for the public wikis.
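For what it’s worth, the volumes mechanism is easy to sketch in a docker-compose file. This is a hypothetical example, not something I’m running; the image tag, volume name, and the plain-text root password are all placeholders, and the password line illustrates the storage concern I mentioned:

```yaml
# Hypothetical docker-compose.yml: the named volume "dbdata" persists
# MySQL's data directory even if the db container is removed and recreated.
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      # The root password sits in plain text here -- part of the security worry.
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
```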

So, let’s try the multiple instances via systemd first and see if that works.

mysql service

On Ubuntu, it seems that the service is mysql (not mysqld), and systemd controls it via

sudo service mysql {start|stop|restart|status}

MySQL configuration

MySQL configuration files are often located in obscure places; when I was installing from a distribution on Macs, the my.cnf files were sometimes hard to find. We can ask the mysql daemon where it looks for configuration files by running mysqld --verbose --help, which reports near the top of its output:

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf 
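Later files in that list override earlier ones. For example (hypothetical option values), if two of these files set the same option, the file read later wins:

```ini
# /etc/mysql/my.cnf (read earlier)
[mysqld]
max_connections = 100

# ~/.my.cnf (read later; this value takes effect)
[mysqld]
max_connections = 200
```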

MySQL reads these files in order, and option values in later files override anything set in earlier ones. Additional option files can be pulled in using include directives:

It is possible to use !include directives in option files to include other option files and !includedir to search specific directories for option files. For example, to include the /home/mydir/myopt.cnf file, use the following directive:

!include /home/mydir/myopt.cnf

To search the /home/mydir directory and read option files found there, use this directive:

!includedir /home/mydir

MySQL makes no guarantee about the order in which option files in the directory will be read.

In the default LAMP installation,

  • there is no /etc/my.cnf.
  • /etc/mysql/my.cnf is a symlink to /etc/alternatives/my.cnf. That file contains only a pair of !includedir directives:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

The two directories contain four .cnf files:

 /etc/mysql/conf.d:
 total 8.0K
 -rw-r--r-- 1 root root    8 Jan 21  2017 mysql.cnf
 -rw-r--r-- 1 root root   55 Jan 21  2017 mysqldump.cnf

 /etc/mysql/mysql.conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 3.0K Feb  3  2017 mysqld.cnf
 -rw-r--r-- 1 root root   21 Feb  3  2017 mysqld_safe_syslog.cnf

The /etc/mysql/conf.d/mysql.cnf file is effectively empty. The /etc/mysql/mysql.conf.d/mysqld.cnf file seems to be the one with the relevant options. Following the documentation, I added stanzas for replica servers on different ports to the latter and restarted the mysql service… and nothing happened, as far as I can tell. systemd does not recognize the replica instances at all.
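For reference, here is a sketch of the kind of stanza the MySQL multi-instance documentation describes for extra instances (the suffix, paths, and port here are hypothetical):

```ini
# Hypothetical extra-instance stanza added to /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld@replica1]
datadir = /var/lib/mysql-replica1
socket  = /var/lib/mysql-replica1/mysqld.sock
port    = 3307
```

As far as I can tell, systemd only picks these [mysqld@suffix] groups up when a templated unit file (mysqld@.service) is installed, so that systemctl start mysqld@replica1 works. Oracle’s own MySQL packages ship that template, but the stock Ubuntu mysql-server package apparently does not, which may be why systemd never sees the replicas.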

After lots of googling, I’m still not sure what is missing, but others seem to have the same problem.

Adventures in Linux hosting: Learning to use Docker

In the past, I’ve tried running multiple wikis from a single LAMP stack just using different databases for each one. The problem I ran into was that anything that caused one wiki to lock up mysql caused all of the wikis to lock up at the same time. This could happen when some bot was ignoring robots.txt or when one of our extensions was making inefficient database calls. For example, the CACAO scoreboard can be expensive to regenerate, and classes using GONUTS could bring down a different wiki.

The solution a decade ago was to buy a different server for each wiki. This is still theoretically a solution, but it’s not practical without more funding than I have. Even when we had more money, we were splitting the wikis up between machines, where each machine would run one (relatively) high-traffic wiki plus other low-traffic wikis.

An alternative solution is to create multiple virtual machines and treat them as different servers. This can be done, but it has efficiency costs, which leads to the solution that’s come up in recent years: multiple containers using Docker. This may not be the way I go in the end, but I thought it would be interesting to learn more about how containers work.

Install Docker Community Edition

First, I need to install Docker for Ubuntu. I’m going to try to install from the Docker repositories using apt following the steps at Docker docs.

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

I think this first step is needed to validate the source of the Docker repo via certificates.

All were already the latest version. Add Docker as a source of apt repos.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"

Now we can see that the Docker repo has been added:

sudo apt-get update
Hit:1 xenial InRelease
Get:2 xenial-updates InRelease [102 kB]
Get:3 xenial-security InRelease [102 kB]        
Get:4 xenial InRelease [49.8 kB]          
Get:5 xenial-backports InRelease [102 kB]
Get:6 xenial/stable amd64 Packages [3,150 B]
Fetched 359 kB in 0s (490 kB/s)                                                     
Reading package lists... Done
sudo apt-get install docker-ce

Check that it worked

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete 
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

Make sure Docker launches on reboot

sudo systemctl enable docker

Restart the server to test this. Seems to work. The Docker docs recommend preventing Docker from updating itself in production environments, but I’m going to defer that.

The docker daemon is owned by root, so to run docker without sudo I need to add myself to the docker group (the change takes effect after logging out and back in):

sudo usermod -aG docker $USER

Running the tutorial

Part 2 of the Docker tutorial makes a small Python app. I ran into the dreaded incorrect-indentation problem with Python and realized that one thing about Docker is that changing the source has no effect unless you rebuild the image and recreate the container.


Returning to the tutorial after spending some time trying unsuccessfully to get mysql instances working with systemd.

After debugging the Python indentation, I was able to get the tutorial app running. The docker command lets you manage all containers or specific ones. From the experimentation I had done four days ago, there were a bunch of stopped containers cluttering things up. These could be viewed with

docker ps -a

Containers can be removed individually using docker rm, but bulk cleanup takes a shell pipeline, for example:

docker ps --filter "status=exited" | grep 'days ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
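To check what that pipeline actually selects without touching the Docker daemon, the grep/awk stages can be run against captured sample output (the container IDs and images below are made up):

```shell
# Sample output shaped like `docker ps --filter "status=exited"`
# (hypothetical container IDs and images).
sample='CONTAINER ID   IMAGE         STATUS
1a2b3c4d5e6f   hello-world   Exited (0) 5 days ago
9f8e7d6c5b4a   ubuntu        Exited (0) 3 hours ago'

# grep keeps only containers that exited days ago; awk prints the
# first column (the container ID) that gets fed to docker rm.
echo "$sample" | grep 'days ago' | awk '{print $1}'
# prints: 1a2b3c4d5e6f
```

Recent Docker versions also offer docker container prune for bulk removal of stopped containers; the pipeline above just gives finer control over which ones go.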

The full set of Docker commands is listed in the Docker CLI reference.

Adventures in Linux hosting: Misc. setup with apt

After setting up the new home Linux box on Christmas Eve, I’ve been gradually building it into a test bed for things I do at work. The apt package manager has made this pretty easy so far, compared to what I’ve done in the past.


emacs

Sorry vim folks, I prefer emacs for shell-based editing (actually, I prefer BBEdit, which I can use via SFTP, but there are times when a local editor makes more sense).

sudo apt-get install emacs


vpnc

This is a Cisco-compatible VPN client, which I’ll need if I want to connect through the campus firewall.

sudo apt-get install vpnc

It installs, but I can’t seem to get a connection to TAMU’s VPN server. This may not be needed, though, since the TAMU enterprise GitHub doesn’t require VPN.


LAMP server

I could have done this in the software selection step of the installer, but let’s go ahead and set up a basic LAMP webserver:

sudo apt-get install lamp-server^

The ^ character is required; it tells apt to install the lamp-server task rather than a package of that name. This installs MySQL, Apache, and PHP 7.0. The HTML root appears to be /var/www/html.


phpMyAdmin

sudo apt-get install phpmyadmin php-mbstring php-gettext


gbrowse

Yes, JBrowse is the new thing, but there are still some things I want to migrate from GBrowse. And there’s an apt package!

sudo apt-get install gbrowse

Important note: the URL is for gbrowse2! In my case, it’s http://<IP>/gbrowse2/

To do updates

I thought I had set it up to do updates automatically, but it isn’t doing them. So…

sudo apt update
sudo apt upgrade
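To actually turn automatic updates on, the unattended-upgrades package handles it; running sudo dpkg-reconfigure --priority=low unattended-upgrades enables it by writing a file along these lines (a sketch; the exact contents may differ by release):

```ini
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```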

Adventures in Linux hosting: Getting Ubuntu onto my home Dell T30

My MacBook is between the new Dell T30 and my monitor. You can see the USB stick attached to the MacBook where I burned the Ubuntu ISO.

Thanks to the USPS for delivering on a Sunday. The new T30 arrived today at around 10 AM. As I suspected, the contents are just the main box and a power cord. I went to Best Buy and picked up the cheapest USB 2 keyboard and mouse I could find (there were probably some of these lying around the lab, but I bought new). I borrowed the HDMI cable we were using for the Apple TV and hooked the machine up to a monitor.

Ctrl-Alt-Delete reboots. Holding down F2 brings up the Dell system management software.

Set up a USB stick for installation

I found the basic installation documentation kind of confusing. So I started with the steps in this tutorial: Create a bootable USB stick, on macOS.

  • Download Ubuntu Server 16.04.3 LTS.
  • Download Etcher, which is recommended for burning the image onto the USB stick. When Etcher is done, the MacBook complains that the inserted media is not readable. Just eject it.

Putting the stick in the USB port didn’t let the machine boot from it. But a clue comes from the last screenshot of the tutorial, where the stick shows up as an EFI boot volume. I changed the Dell to look for UEFI boot, and now I get an installation option for Ubuntu when I reboot.

  • Used defaults except:
    • unmount before writing partitions
    • automatically install security updates
  • Network setup failed until I plugged an ethernet cable into the back, connected to the AirPort Time Capsule.
  • Software selection
    • standard system utilities
    • OpenSSH server

After doing this and rebooting, I had to go back into the Dell system config to switch from UEFI back to legacy boot. But once I had done that, the machine boots into Ubuntu and I can ssh in using the local IP address.

Shut it down to move it to a better location in the living room instead of the dining room table. Now it’s running headless (no keyboard, mouse, or monitor) and I can ssh into it from my MacBook.

Update: The Dell documentation talks about using their Lifecycle Controller to install an OS, but that doesn’t seem to come on the T30.

Adventures in Linux hosting: Development setup at home

One option for my self-designed Linux education would be to do everything in VMware. But while I’ve done that in the past, it hasn’t taught me what’s going on when we’ve had problems with our existing boxes, like when the IT people tell me they had to do something with the kernel, or when I’ve had to reboot and watch the stream of warnings (or worse) on the local monitor before Ubuntu even loads. I suspect some of that has to do with a long-gone IT employee botching the initial installation, but I’d like to understand it better.

So, with computers being reasonably cheap, I decided it would be fun to set up a version of our websites on our home wifi network, not accessible to the outside world. Yesterday I ordered a Dell PowerEdge T30 Business Mini Tower Server System, and was pleasantly surprised that it had free shipping to arrive on Christmas Eve. The system comes with 8 GB of RAM and a 1 TB hard drive. Eventually I will probably upgrade both if I can (I think it’s possible with this model), but even the base configuration is comparable to what we are currently running on our Macs.

Debby has gone to visit family, leaving me to look after the cats in Texas. This will be an opportunity for a nerdy Xmas break! Off to read Ubuntu manuals.


The first question, even before the thing arrives, is about the appropriate installation. The Pentium G4400 dual-core processor on the box I ordered is cheaper than the Xeon alternative. It seems to fall under Intel’s EM64T category of processors, even though the string “EM64T” doesn’t show up on the Intel page for that CPU.

Time for me to get more serious about Linux

My group got its start in bioinformatics thanks to former students Hai Zhu and Leonardo Marino-Ramirez, who set up the first LAMP webserver in the lab on a box they built themselves. Being Mac users, we thought the Unix roots of OS X would be useful in the transition to doing informatics and web-based resources, so I purchased a G5 Xserve, which started a series of machines we named after protein quaternary structure; it was called dimer. As the EcoliHub/PortEco/EcoliWiki projects got funded and we got some stimulus money to work on B. subtilis, we gradually added to our collection of machines. trimer and tetramer were Intel Xserves. hexamer was the last Intel Xserve Apple shipped. Meanwhile, pentamer and heptamer were Linux boxes running Ubuntu. For the most part the heavy work was done on the Macs, and we even moved GONUTS to run on a Mac mini after one of the Intel Xserves died.

One of the things I liked about the Xserve setup when we first got dimer was being able to do server administration via the Server Admin and Workgroup Manager apps. But as the machines aged and the older ones were not supported on newer OS X versions, Apple did something annoying: they made Server Admin incompatible with older OS X releases, even though it was pretty obviously just a nice front end that sent Unix commands and showed their output in a GUI. So I gradually learned how to do various system admin tasks in the terminal, though there are some I’ve never figured out how to do entirely without the GUI.

The rack-mountable servers stopped getting updates with Snow Leopard. We’ve kept the TAMU IT security people at bay by using MacPorts to replace obsolete packages, but it’s gotten to the point where it’s time to give up on the Mac servers and migrate everything to Linux. My department prefers Ubuntu, so that’s the way I’m going to go.

In the long run I expect we will move from our own hardware to A&M’s server virtualization, or maybe something like Amazon. But for now I’m not convinced the capability and price beat running our own hosting, and there are issues with URLs and domains involved in moving some of our sites off campus.

MacBook Pro connection confusion

Migrating from my old MacBook Air to my new 2016 MacBook Pro has involved some confusion about adapters and accessories. Overall, I like my new MacBook, but there have been a number of annoying things. The biggest is still Apple’s decision to kill the MagSafe power connector. More minor annoyances:

  • Out of the box, the power brick used to come with a 3-prong extension cable in addition to the stubby 2-prong plug. Now it’s extra. The longer cable on the brick end is really valuable when a bunch of people (e.g., at a conference, students in a class, or even an airport waiting area) are sharing a wall plug or power strip. Fortunately, I can recycle a bunch of these from my old power bricks that no longer work with the new MacBook.
  • If you buy just the power brick, you now have to buy a separate USB-C charge cable. As far as I can tell, a generic USB-C cable should work. What was annoying here was buying a power supply at an Apple Store and not having the Apple employee ask whether I needed the cable too.
  • The original Thunderbolt was a superset of Mini DisplayPort, and Thunderbolt 1 and 2 used Mini DisplayPort connectors. There is a Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter, but even though Thunderbolt 2 is a superset of Mini DisplayPort, the adapter works only for Thunderbolt connections, not for Mini DisplayPort-based adapters. In other words, plugging in a Thunderbolt Display (discontinued in 2016, but some of us still have various versions) works. What doesn’t work is MacBook to Thunderbolt adapter to Mini DisplayPort-to-VGA (or HDMI) adapter to monitor.
  • If you want to sync an iPhone or an iPad to your MacBook Pro, you will need either a USB-C to Lightning cable or a USB-C to USB-A adapter. This means that if you buy a brand new MacBook Pro and a brand new iPad, you can’t connect them right out of the box. A USB-C to Lightning cable will let you use the MacBook power brick to charge an iPhone or iPad, with or without the MacBook in the middle, so that could reduce the number of things to carry.


RIP MagSafe power

I just got a new Space Gray 13″ MacBook Pro with the Touch Bar. Overall I’m sure I’m going to like it just fine, but I have to say this:

Dear Tim Cook,

The MagSafe power connector was probably one of the best things you ever did for laptops… and it’s really annoying that you’ve killed it in the new MacBook Pro.


Slightly less annoying: the USB-C to Thunderbolt 2 adapter fortunately works for mounting the old laptop in Target Disk Mode, but it doesn’t seem to work as a Mini DisplayPort adapter.