Colombian Empanadas

Ever since I was introduced to the Colombian version of empanadas, I’ve been meaning to try making my own. For some reason this urge struck me again last month and I decided to go for it. This is actually the result from the second time, served with cole slaw and Aji (a sauce made with chilis, cilantro, vinegar and other stuff). The basic recipe for both the empanadas and the Aji are from My Colombian Kitchen.

I had been worried about the difficulties of working with the fragile masa-based dough. The key trick (which I’m sure is not needed by the experts who make them more often) is to roll the dough between two sheets of plastic. Here, I’ve cut the edges off a freezer bag, which won’t wrinkle like saran wrap. I coated the surfaces with a thin film of pork fat to help the dough release from the plastic.

While the circle of dough is still supported by the plastic, add a dollop of filling. I used all ground beef vs. the beef and pork mix in the recipe. The filling is basically meat cooked in a sofrito mixed with mashed potatoes (without the milk or butter). I did the sofrito from scratch, but I bet it would work with the pre-made sofrito that comes in jars.

I used more meat than the recipe called for because it was a 1 lb tube of frozen ground beef from Rosenthal, and who wants to deal with half a pound of leftover previously frozen raw ground beef? Now I have leftover empanada filling in the fridge, but it’s cooked.

Then fold the whole thing and crimp the edges while it’s still on the plastic!

Released from the plastic and staged on waxed paper. Not that pretty but not bad for just the second time.

Deep fried for a few minutes at 350F. I used our wok to minimize the amount of oil needed to submerge them when cooking 3-4 at a time. This ended up being just a couple of cups.

WP Pubmed Reflist tests

Got some bug reports this week and am procrastinating other stuff by looking at them.

The content of this post will change as I use it to reply to this

Update: Dec 30, 8:26 PM

2018

[pmid-refs key=redenti_18 wrap=ol showlink=false]

2017

[pmid-refs key=redenti_17 wrap=ol showlink=false]

2016

[pmid-refs key=redenti_16 wrap=ol showlink=false]

other

[pmid-refs key=27590350 wrap=ul]

[pmid-refs key=27879490 wrap=ul]

GoDaddy weirdness

This site was offline for a week or so and I couldn’t figure out what was going on. It looks like what happened was that somehow the IP address in the A record was changed to 173.212.241.155, which is somewhere in Europe, instead of the correct address listed for the basic web hosting at GoDaddy. Once I changed it back, it was just a matter of letting caches clear up and now I’m back.

Waiting for customer support chat to see if they can explain it, but I might give up on the wait…

The previous support person suggested that I needed to activate SSH access on the account, which I did… but despite waiting more than the recommended 72 hours, I was not able to log in via SSH, although I was able to do an SFTP connection via Filezilla (but not with BBEdit).

phpmyadmin hybrid install (apt + manual) on Ubuntu

While migrating our websites to the new Ubuntu box, I set up phpmyadmin using apt-get. But that package is 4.5.x while the latest version is 4.8.x. Not surprisingly, the Nessus security scan (required to get the HTTP and HTTPS ports opened through the TAMU firewall) detected vulnerabilities that I hope have been fixed in the newer versions.

One option would be to just install phpmyadmin directly in /var/www/html. But I wanted to keep all the customization I had already done, and I liked having the config files in locations outside the web root so they can’t be viewed as easily by intruders.

There is probably documentation about the package somewhere, but I don’t know how to find it, so I just dug around and tried stuff to get it to work. Here’s what I figured out and what I did. The phpmyadmin apt package installs files in at least 3 places:

  • /usr/share/phpmyadmin – this is the bulk of the codebase
  • /etc/phpmyadmin – conf files for apache and for phpmyadmin
  • /var/lib/phpmyadmin – misc stuff, including the blowfish secret setting and a tmp directory accessible to apache

To get it to work (a rough sketch of the shell commands follows this list):

  • Download the latest version using wget. I did this in a user directory, but it could be anywhere.
  • Rename the directory /usr/share/phpmyadmin to /usr/share/phpmyadmin.apt as a backup. Maybe if the phpmyadmin apt package ever catches up we can reactivate it.
  • Symlink the new version as /usr/share/phpmyadmin.
  • Write a tiny config.inc.php file that includes /etc/phpmyadmin/config.inc.php (this may not pick up all the config files, but it seems to work).
  • Edit /var/lib/phpmyadmin/blowfish_secret.inc.php to make the string longer.
  • Edit /usr/share/phpmyadmin/libraries/vendor_config.php to set the tmp dir to /var/lib/phpmyadmin/tmp.
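Here is roughly what those steps look like as shell commands (a sketch, not a transcript; the phpMyAdmin version and download URL are just examples, and sudo is assumed):

cd ~
wget https://files.phpmyadmin.net/phpMyAdmin/4.8.4/phpMyAdmin-4.8.4-all-languages.tar.gz
tar -xzf phpMyAdmin-4.8.4-all-languages.tar.gz
# keep the apt-installed codebase around in case we ever switch back
sudo mv /usr/share/phpmyadmin /usr/share/phpmyadmin.apt
sudo ln -s ~/phpMyAdmin-4.8.4-all-languages /usr/share/phpmyadmin
# tiny config.inc.php that just pulls in the apt package's config
echo "<?php require '/etc/phpmyadmin/config.inc.php';" | sudo tee /usr/share/phpmyadmin/config.inc.php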

Multiple Docker mysql instances on different ports

Found a tutorial that gives the following example:

$ docker run -d --name=new-mysql -p 6604:3306 -v /storage/docker/mysql-datadir:/var/lib/mysql mysql

Let’s dissect what we’re passing to docker:

  • -d is short for --detach. This runs the container as a daemon in the background.
  • --name is the name of the container.
  • -p is short for --publish. This is a port mapping where the standard mysql port for the container (3306) appears as a different port (6604) on the local machine.
  • -v is short for --volume. This tells the container to use a path outside the container for storing the mysql stuff, including data. This is needed so that the data persists if the container goes away.

Tweaking this for my setup exited with an error because the MySQL root password was not set. After struggling with an error that came from invisible crud in a copy-paste (sigh), I got this to work, where mypassword is replaced by a real password.

docker run -d --name=mysql-rep01 -p 3307:3306 --env "MYSQL_ROOT_PASSWORD=mypassword" -v /var/lib/mysql-replica01:/var/lib/mysql mysql

To connect to this MySQL instance, I need to specify the host with -h and the port to use with -P.

mysql -u root -p -P 3307 -h 127.0.0.1

This doesn’t work if I use localhost instead of 127.0.0.1: with localhost, the mysql client tries to connect over the local Unix socket instead of TCP, so it never reaches the published port.

Connect it to a copy of phpMyAdmin

The Ubuntu apt installation arranges things a little differently from a manual installation. The configuration is inside /etc/phpmyadmin. In addition to the usual config.inc.php, there’s a directory called conf.d; anything there with a .php extension is included. This allows you to create separate files for each database container. I made one called config3307.inc.php. I took out a bunch of statements that rely on the global variables $dbname and $dbserver. I think this is similar to the stuff that’s commented out in config.inc.php, but note that the port has to be set.

<?php
/**
 * Alternate configuration file
 * This file copies a central block from config.inc.php to manage connection 
 * to an alternative db server in a Docker container publishing on local port 3307
 */

/* Authentication type */
 $cfg['Servers'][$i]['auth_type'] = 'cookie';
 $cfg['Servers'][$i]['host'] = '127.0.0.1';

 $cfg['Servers'][$i]['connect_type'] = 'tcp';
 $cfg['Servers'][$i]['port'] = '3307';
 //$cfg['Servers'][$i]['compress'] = false;
 /* Select mysqli if your server has it */
 $cfg['Servers'][$i]['extension'] = 'mysqli';
 /* Optional: User for advanced features */
 $cfg['Servers'][$i]['controluser'] = $dbuser;
 $cfg['Servers'][$i]['controlpass'] = $dbpass;
 /* Optional: Advanced phpMyAdmin features */
 $cfg['Servers'][$i]['pmadb'] = $dbname;
 $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
 $cfg['Servers'][$i]['relation'] = 'pma__relation';
 $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
 $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
 $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
 $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
 $cfg['Servers'][$i]['history'] = 'pma__history';
 $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
 $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
 $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
 $cfg['Servers'][$i]['recent'] = 'pma__recent';
 $cfg['Servers'][$i]['favorite'] = 'pma__favorite';
 $cfg['Servers'][$i]['users'] = 'pma__users';
 $cfg['Servers'][$i]['usergroups'] = 'pma__usergroups';
 $cfg['Servers'][$i]['navigationhiding'] = 'pma__navigationhiding';
 $cfg['Servers'][$i]['savedsearches'] = 'pma__savedsearches';
 $cfg['Servers'][$i]['central_columns'] = 'pma__central_columns';
 $cfg['Servers'][$i]['designer_settings'] = 'pma__designer_settings';
 $cfg['Servers'][$i]['export_templates'] = 'pma__export_templates';

/* Uncomment the following to enable logging in to passwordless accounts,
 * after taking note of the associated security risks. */
 // $cfg['Servers'][$i]['AllowNoPassword'] = TRUE;

/* Advance to next server for rest of config */
 $i++;

Note the pulldown to switch between the two servers.

Then I had to create the phpmyadmin user and then create the tables for the advanced phpmyadmin features. I just imported the sql/create_tables.sql file that ships with phpmyadmin. Now it looks like this:
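For reference, creating the user and importing the tables boils down to something like this (the control user name, password, and host pattern here are placeholders, not necessarily what I used, and the commands assume the port-3307 container from above):

# create the pma__ tables (this also creates the phpmyadmin database)
mysql -u root -p -h 127.0.0.1 -P 3307 < /usr/share/phpmyadmin/sql/create_tables.sql
# create a control user and give it access to that database
mysql -u root -p -h 127.0.0.1 -P 3307 -e "CREATE USER 'pma'@'%' IDENTIFIED BY 'pmapass'; GRANT ALL PRIVILEGES ON phpmyadmin.* TO 'pma'@'%';"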

Connect it to a copy of MediaWiki

Created another MW directory on the host to use the containerized database. I just did the web install but used 127.0.0.1:3307 as the database location. The installer works and the correct tables are created. Copied LocalSettings to the correct location and it seems to work as a blank wiki. In LocalSettings, we see:

$wgDBserver = "127.0.0.1:3307";

This reflects the port publishing done by the Docker container.

Make it restart automatically

Add the --restart unless-stopped flag somewhere before the image name.
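For example, the run command from above becomes (same placeholder password and paths as before):

docker run -d --name=mysql-rep01 --restart unless-stopped -p 3307:3306 --env "MYSQL_ROOT_PASSWORD=mypassword" -v /var/lib/mysql-replica01:/var/lib/mysql mysql

For a container that already exists, docker update --restart unless-stopped mysql-rep01 should apply the same policy without recreating it.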

Make it a service?

So far I’ve made a container that I can launch from the command line. The Docker docs talk about how:

Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Note that the rest of that tutorial goes beyond running a container in production. It sets up multiple instances with load balancing between them. That would be nice, but I’m going to save it for another time.

Data persistence!

 

Adventures in Linux hosting: Multiple MySQL instances

Reading up on MySQL, I realized that rather than using Docker or VMs, the simplest thing to do is to run multiple MySQL instances from the default installation. Because the new machines are Ubuntu Linux, this can be controlled from systemd, the daemon that controls services.

There is also documentation on mysql.com about using Docker and there are some concerns with containers in general and Docker in particular:

Docker containers are in principle ephemeral, and any data or configuration are expected to be lost if the container is deleted or corrupted (see discussions here). Docker volumes, however, provides a mechanism to persist data created inside a Docker container.

and it’s also not clear to me whether there are issues related to security and MySQL root password storage. Containers are probably fine for development environments, but we want a more production-like setup for the public wikis.

So, let’s try the multiple instances via systemd first and see if that works.

mysql service

On Ubuntu, it seems that the service is mysql (not mysqld), and systemd controls it via

sudo service mysql stop|start|etc

mySQL configuration

MySQL configuration files are often located in obscure places, and when I was installing from a distribution on Macs, the my.cnf files were sometimes hard to find. We can ask the mysql daemon where it’s looking for configuration files.
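One way to ask (standard mysqld behavior, though not necessarily the exact command I ran) is:

mysqld --verbose --help 2>/dev/null | head -n 20

which prints, near the top: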

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf 

MySQL will look for these in order, and option values in later files override anything set in earlier ones. Additional option files can be added using include directives:

It is possible to use !include directives in option files to include other option files and !includedir to search specific directories for option files. For example, to include the /home/mydir/myopt.cnf file, use the following directive:

!include /home/mydir/myopt.cnf

To search the /home/mydir directory and read option files found there, use this directive:

!includedir /home/mydir

MySQL makes no guarantee about the order in which option files in the directory will be read.

In the default LAMP installation,

  • there is no /etc/my.cnf.
  • /etc/mysql/my.cnf is a symlink to /etc/alternatives/my.cnf. That file only has a pair of include directives:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/

The 2 directories contain 4 .cnf files:

conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 8 Jan 21 2017 mysql.cnf
 -rw-r--r-- 1 root root 55 Jan 21 2017 mysqldump.cnf

mysql.conf.d:
 total 8.0K
 -rw-r--r-- 1 root root 3.0K Feb 3 2017 mysqld.cnf
 -rw-r--r-- 1 root root 21 Feb 3 2017 mysqld_safe_syslog.cnf

The /etc/mysql/conf.d/mysql.cnf file is empty. The /etc/mysql/mysql.conf.d/mysqld.cnf file seems to be the one with the relevant options. Following the documentation, I added stanzas for replica servers on different ports to the latter and restarted the mysql service… and nothing happened, as far as I can tell. systemd does not recognize the replica services at all.
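The kind of stanza I mean looks roughly like this (the group name and paths are examples; it’s the multi-instance option-group layout the MySQL documentation describes, which the Ubuntu packaging apparently doesn’t wire up to systemd):

# append an extra server stanza to the packaged config, then restart
sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf > /dev/null <<'EOF'

[mysqld@replica01]
datadir   = /var/lib/mysql-replica01
socket    = /var/lib/mysql-replica01/mysqld.sock
port      = 3307
log-error = /var/log/mysql/replica01.err
EOF
sudo service mysql restart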

Doing lots of googling, I’m not sure what is missing, but others seem to have the same problem.

Adventures in Linux hosting: Learning to use Docker

In the past, I’ve tried running multiple wikis from a single LAMP stack just using different databases for each one. The problem I ran into was that anything that caused one wiki to lock up mysql caused all of the wikis to lock up at the same time. This could happen when some bot was ignoring robots.txt, or when one of our extensions was using inefficient database calls. For example, the CACAO scoreboard can be expensive to regenerate, and classes using GONUTS could bring down a different wiki.

The solution a decade ago was to buy a different server for each wiki. This is still theoretically a solution, but it’s not practical without more funding than I have. Even when we had more money, we were splitting the wikis up between machines, where each machine would run one relatively high-traffic wiki plus other low-traffic wikis.

An alternative solution is to create multiple virtual machines and treat them as different servers. This can be done, but it has efficiency issues. That leads to the solution that’s come up in recent years: multiple containers using Docker. This may not be the way I go in the end, but I thought it would be interesting to learn more about how containers work.

Install Docker Community Edition

First, I need to install Docker for Ubuntu. I’m going to try to install from the Docker repositories using apt following the steps at Docker docs.

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

I think the first step is needed to validate the source of the docker repo via certificates.

All were already the latest version. Add Docker as a source of apt repos.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"

Now we can see the docker repo has been added

sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]        
Get:4 https://download.docker.com/linux/ubuntu xenial InRelease [49.8 kB]          
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:6 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages [3,150 B]
Fetched 359 kB in 0s (490 kB/s)                                                     
Reading package lists... Done
sudo apt-get install docker-ce

Check that it worked

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete 
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:

 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Make sure Docker launches on reboot

sudo systemctl enable docker

Restart the server to test this. Seems to work. They recommend preventing Docker from updating itself in production environments, but I’m going to defer that.
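A quicker check than rebooting is to ask systemd directly:

systemctl is-enabled docker

which should print enabled.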

The Docker daemon runs as root, so to run docker commands without sudo I need to add myself to the docker group (the change takes effect after logging out and back in).

sudo usermod -aG docker $USER

Running the tutorial

Part 2 of the Docker tutorial makes a small Python app. I ran into the dreaded incorrect indentation problem with Python and realized that one thing with Docker is that changing the source has no effect unless you rebuild the image and re-run the container.
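The edit/rebuild/re-run loop looks something like this (friendlyhello and the 4000:80 port mapping are the tutorial’s defaults, if I remember right):

docker build -t friendlyhello .
docker run -d -p 4000:80 friendlyhello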

update:

Returning to the tutorial after spending some time trying unsuccessfully to get mysql instances working with systemd.

After debugging the Python indentation, I was able to get the tutorial app running. To do stuff with containers, the docker command lets you manage all the containers or specific ones. From the experimentation I had done 4 days ago, there were a bunch of stopped containers cluttering things up. These could be viewed with

docker ps -a

Containers can be removed individually using docker rm, but a shell pipeline is needed for bulk cleanup, for example:

docker ps --filter "status=exited" | grep 'days ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm

Docker commands are listed here.

Adventures in Linux hosting: Basic Mediawiki

The main purpose of the home setup is for testing our mediawiki setup. Our old sites are running old versions of Mediawiki on php5.6. Our custom extensions need to be updated to work with the latest MW and php7 in order to satisfy IT security, and on general principles.

Documentation on MediaWiki.org has some tips for running on Debian or Ubuntu. This is mostly not based on using the apt package manager, although some prereqs will be done with apt. In particular, it looks like I need the following (an apt install sketch follows the list):

  • memcached
  • imagemagick
  • php-apcu
  • elasticsearch (the Ubuntu default package is 1.7.3, so I will need to do this differently later)
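The apt-installable ones amount to this (leaving elasticsearch out, since the packaged version is too old):

sudo apt-get install memcached imagemagick php-apcu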

I downloaded the latest release (1.30.0) to my home directory

wget https://releases.wikimedia.org/mediawiki/1.30/mediawiki-1.30.0.tar.gz

Extracted this with tar -xzf and then used cp -R to make separate wikis inside the /var/www/html directory (a rough sketch of the commands follows the list):

  • wiki – for general testing
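Roughly (the exact paths may have differed):

tar -xzf mediawiki-1.30.0.tar.gz
sudo cp -R mediawiki-1.30.0 /var/www/html/wiki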

Set these up as blank wikis first, so that I will get a clean copy of LocalSettings. The installer complained about not finding APCu, XCache, or WinCache. APCu appears to be the preferred choice, so I added it to the dependency list above. To get the installer to see it, apache had to be restarted. This is different from using MacPorts on a Mac, of course.

sudo service apache2 restart

I now just get this warning, which I will ignore for the moment.

Warning: The intl PECL extension is not available to handle Unicode normalization, falling back to slow pure-PHP implementation.

Did the generic test wiki first.

  • Kept the default db name my_wiki for this one.
  • Storage engine InnoDB (default)
  • Database character set UTF-8 (default is binary, but this makes it hard to see things in phpmyadmin)
  • Pre-installed extensions
    • ImageMap
    • Interwiki
    • ParserFunctions
    • SyntaxHighlight_GeSHi
    • WikiEditor
  • Enable image uploads
  • PHP object caching. Our current wikis use memcached, which is a different option. This was set up a long time ago, and caching definitely affects performance, but APCu was not compared at the time.

Using the web-based installer, I had to download LocalSettings.php to my laptop and then scp it over to the server. This isn’t a big deal, but I suspect there’s a way to do the whole thing from the terminal.
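There is a command-line installer in the maintenance directory that should avoid that round trip; something like this, I think (options abbreviated, and the names, URL, and passwords are placeholders):

php maintenance/install.php --dbname my_wiki --dbuser wikiuser --dbpass dbpassword --server "http://192.168.1.10" --scriptpath /wiki --pass adminpassword "Test Wiki" admin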

This seems to work. The next step, however, is to figure out how to do it all again using Docker so I can have multiple containers running different wikis. Until then I’ll solve some NYT crossword puzzles.

Adventures in Linux hosting: Misc. setup with apt

After setting up the new home linux box on Xmas eve, I’ve been gradually building it up to be a test bed for things I do at work. The apt package manager makes this pretty easy so far, compared to what I’ve done in the past.

emacs

Sorry vim folks, I prefer emacs for shell-based editing (actually, I prefer BBEdit, which I can use over SFTP, but there are times when a local editor makes more sense).

sudo apt-get install emacs

vpnc

This is a Cisco-compliant VPN client, which I’ll need if I want to connect through the firewall.

sudo apt-get install vpnc

This installs fine, but I can’t seem to get a connection to TAMU’s VPN server. This may not be needed, though, since the TAMU enterprise GitHub doesn’t require VPN.

LAMP

I could have done this during the initial software selection, but let’s go ahead and set up a basic LAMP web server:

sudo apt-get install lamp-server^

The ^ character is required; it tells apt to install the lamp-server task (a tasksel task) rather than a package of that name. This installs mysql, apache, and php7.0. The html root appears to be in /var/www/html.
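A quick way to confirm what got installed (version numbers will vary):

apache2 -v
mysql --version
php -v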

phpmyadmin

sudo apt-get install phpmyadmin php-mbstring php-gettext

gbrowse

Yes, jbrowse is the new thing, but there are still some things I want to migrate from gbrowse. And there’s an apt package!

sudo apt-get install gbrowse

Important note: the URL is for gbrowse2! In my case, it’s http://<IP>/gbrowse2/

To do updates

I thought I had set it up to do updates automatically, but it isn’t doing them. So…

sudo apt update
sudo apt upgrade