Remarkable 2 - First Impressions

A new writing and reading experience

After all these years of working with computers professionally, I am still a fond lover of handwriting. I consider writing by hand a cultural technique worth preserving, and I admire people who have a beautiful style of writing. Unfortunately my handwriting is sloppy and hard to read for others, and sometimes even for myself. Nevertheless, investing the time to write a thought down manually allows me to retain the information longer. Although I use a simple Markdown-based text file for keeping notes at work, I also use a classic notebook and Tombow calligraphy pens for jotting down ideas and tasks. Obviously, making little sketches and drawings is also more convenient with a regular pen-and-paper approach.

A friend of mine uses a reMarkable 1 and is quite happy with it. It seemed lightweight and I liked the idea of having digitized notes with a proper look and feel of paper. I have also seen friends using a tablet like the iPad for taking handwritten notes with a special pen. Using a tablet with its glossy surface did not seem too appealing to me, although an Android tablet would of course offer a completely different set of possibilities. However, I noticed that writing on a tablet seems to involve a lot of navigating and selecting functions before the actual writing takes place. It is also a permanent distraction, just as a regular laptop would be in a meeting. It was the simplicity and the dedicated purpose of the reMarkable 2 that I found intriguing. When the reMarkable 2 was announced, I decided to give it a try. By the time it arrived I had enough distance to reconsider my consumerist decision, and I thought about selling it right away. My decision to keep it, or at least give it a try, was made during a call with my friend Norbert, who has owned a reMarkable 2 for a few weeks and was positive about it. I will now try it for the 30-day no-questions-asked period and see how it works for me.

The order process


  • 1x Marker Plus: €40
  • 1x Book Folio – Polymer weave – Gray: €20
  • 1x reMarkable 2: €399

Including taxes and shipping, I ended up paying €459 upfront. The delivery date was moved twice, but I finally received the device on the 19th of November. The reMarkable web site still offers the same pre-sale option, although the marker is more expensive now. I think the price might change again when the product is rolled out globally.

Screenshot from the reMarkable web site, taken on 19th Nov 2020


Unboxing

The product is very nicely packaged. I received the reMarkable 2 device, the marker and the protective cover (book folio) in separate cardboard boxes. A USB Type C charging cable and some extra pen tips are included.

The little boxes.


The tablet itself is super thin and looks very well made. It is also light, but it does not feel fragile.

Setup

The setup is simple. One needs to add the Wi-Fi details on the device and register at my.remarkable.com. After entering a one-time code the registration is complete. The settings are straightforward and clear. Basically, one only needs to select whether one is left- or right-handed and choose the language for the text recognition.

No overwhelming settings.


Usage

There is only one hardware button to turn the device on and off. You can set a PIN code to protect the device and its content from others. The display is very sharp and the reading experience is exceptional. Writing is also very nice; the response time is super fast, almost without delay. One can select different pens and markers and alter the colors between black, grey and white. My pen also came with an eraser function at its end. Below is an example of what the exported PDF looks like.

My beautiful handwriting.


The text recognition works well, though one has to concentrate on writing more neatly; the text above would come out as gibberish. Also, it does not detect the language automatically; you have to select it as a global setting. The text conversion functionality is rather primitive, but should be sufficient to make the text searchable.

There is also a plugin for the Chrome browser, which allows you to send any web site via the reMarkable cloud to your device. This works extremely well and makes up for the fact that there is no browser on the device itself. It of course comes with a big privacy downside, because reMarkable will know very well what you are reading. There is also an app for Android, which allows you to view and upload documents from the phone. Sharing articles does not seem to be supported.

Here is an example of a "quality journalism" article that worked well (from www.derstandard.at).

Nicely rendered web page


Compared to other eBook readers it offers more reading real estate and it is much thinner.

From left to right: Tolino Shine, Amazon Kindle, reMarkable2


Pros and Cons

I will evaluate the device over the next weeks and update the list of pros and cons as I use it. It seems to be a great device, and the list below should not leave the impression that it is not, just because the list of negatives is longer. The last update was on 20 November 2020.

Pros

  • Very good look and feel
  • The writing process is great
  • It does feel almost like paper
  • Someone more artistic might even be able to draw on it
  • Super lightweight

Cons

The folio is not for left handers

Yes, we are a minority, but while the device supports left-handed folks, the protective sleeve does not. I am used to this problem and would turn a classic notepad around if needed, but this does not work with the folio. You can only mount it on the left side, and it gets in my way. I love its hipsteresque appeal, but I probably will not be able to use it.

Lefties will relate


No backlight

I love reading in bed, next to my special someone who requires more sleep than I do. The reMarkable 2 has no backlight at all. It really is like a book: you cannot read in the dark without a light source. This seems to be the trade-off for the slim form factor. Other devices like the Amazon Kindle or the Tolino Shine do have backlights, but are much thicker and heavier.

Private cloud

The device only supports reMarkable's own cloud. You cannot sync your documents from other clouds.

No browser

Even a simple browser for selecting articles would be nice.

Sharing options

The Chrome desktop plugin is great, but the Android app does not allow sharing articles. I do not understand this.

Organization

Shared web sites show up under ebooks. You can create folders, but not in the web interface. In the Android app you can only rename files, not move them. This means you have to fiddle with each and every file on the device itself.

Rendering

When sending a web page to the device via the Chrome plugin, the rendering can be weird. This can also happen with rather simple pages; below is an example from the Hibernate documentation. I would have expected such a simple page to render without problems.

Not a nicely rendered web page


Not detected as MTP device under Linux

This could have been simpler: why can't I just drag and drop my ebooks over? The reMarkable registers as a new networking device when you enable the web interface, so at least some non-cloud-based access is possible.

Nov 20 08:45:32 xps kernel: [ 5662.967832] usb 1-6.1.1.2: new high-speed USB device number 25 using xhci_hcd
Nov 20 08:45:32 xps kernel: [ 5663.085530] usb 1-6.1.1.2: New USB device found, idVendor=04b3, idProduct=4010, bcdDevice= 4.14
Nov 20 08:45:32 xps kernel: [ 5663.085537] usb 1-6.1.1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Nov 20 08:45:32 xps kernel: [ 5663.085540] usb 1-6.1.1.2: Product: RNDIS/Ethernet Gadget
Nov 20 08:45:32 xps kernel: [ 5663.085544] usb 1-6.1.1.2: Manufacturer: Linux 4.14.78 with 30b20000.usb
Nov 20 08:45:32 xps kernel: [ 5663.089960] cdc_ether 1-6.1.1.2:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-6.1.1.2, CDC Ethernet Device, d2:dc:08:36:a6:a9
Nov 20 08:45:32 xps mtp-probe: checking bus 1, device 25: "/sys/devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6.1/1-6.1.1/1-6.1.1.2"
Nov 20 08:45:32 xps mtp-probe: bus: 1, device: 25 was not an MTP device

When you enable the web interface, browsing to http://10.11.99.1/ shows a primitive tool where you can upload and download documents. You cannot create folders for organizing the books and papers to your liking using this interface. There is no authentication, so better not to activate this on an untrusted Wi-Fi. The web interface only works while the USB cable is connected; maybe this is the security concept.

Using the Papercolor Theme in Vim and Zsh

I still spend a large amount of time in the terminal, where I mainly work with Zsh, Terminator and Vim. Having a uniform look makes switching between the environments smoother. The theme can be easily installed for Vim as described here, using your favorite plugin manager.

Plugin 'NLKNguyen/papercolor-theme'
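
After installing the plugin, activating the theme in ~/.vimrc might look like this minimal sketch (assuming a 256-color terminal; the background can be set to light or dark):

" Activate the PaperColor color scheme
set t_Co=256
set background=light
colorscheme PaperColor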

How does it look in Vim?

Like this!

Vim with the Papercolor theme

Terminator

In Terminator, you can simply create a new profile sporting the Papercolor theme colors by adding the following snippet to the configuration file ~/.config/terminator/config. The essential part is the palette with the different colors.

[[Papercolor]]
    background_color = "#eeeeee"
    cursor_color = "#aaaaaa"
    foreground_color = "#4d4d4c"
    palette = "#ededed:#d7005f:#718c00:#d75f00:#4271ae:#8959a8:#3e999f:#4d4d4c:#969694:#d7005f:#718c00:#d75f00:#4271ae:#8959a8:#34e2e2:#f5f5f5"

This will provide you with a Zsh that looks like this:

Zsh with the Papercolor theme

A small hint for working with Integers in MySQL

Integer types in MySQL

A small hint that is not obvious when working with integers in MySQL: all integer types in MySQL have a fixed size. This size determines the lower and upper bound of the values the column can store. Integers can be signed or unsigned; with unsigned columns the positive range is larger, because no negative values need to be stored. In either case, the number M in a definition such as INT(M) does not limit the stored or storable length at all. It is only used for display purposes and to indicate to developers what length one would expect. It is not a technical constraint; only the type is (TINYINT, SMALLINT, BIGINT, etc.).

An example using INT(1)

All integer types have a defined range; you can find it in the official documentation. The maximum value for signed INT columns is 2147483647 and for TINYINT it is 127. If we define an INT(1), the number in the brackets only indicates how many digits will be displayed when the value is padded with zeros.

DROP TABLE IF EXISTS my_numbers;
CREATE TABLE my_numbers (
    integerNumber INT(1),
    tinyIntegerNumber TINYINT(1)
) ENGINE InnoDB;

# Max value for integers is 2147483647 and for TINYINT it is 127
INSERT INTO my_numbers(integerNumber, tinyIntegerNumber) VALUES (2147483647, 127);
SELECT * FROM my_numbers;
# Query OK, 1 row affected (0.005 sec)
#
# root@local> SELECT * FROM my_numbers;
# +---------------+-------------------+
# | integerNumber | tinyIntegerNumber |
# +---------------+-------------------+
# |    2147483647 |               127 |
# +---------------+-------------------+
# 1 row in set (0.000 sec)
#

As you can see, the table stored both maximum values although we used INT(1). The display width exists only so that the database, knowing how long a number typically is, can pad it with zeros (if it is told to do so).

Padding with zeros

In the following example, we will pad integers with zeros from the left. With ZEROFILL, the setting does have an effect in some clients. The official MySQL and the MariaDB command line clients display the leading zeros; other clients like IntelliJ do not display them.

DROP TABLE IF EXISTS  my_padded_numbers;
CREATE TABLE my_padded_numbers (
    integerNumber INT(64) ZEROFILL,
    tinyIntegerNumber TINYINT(8) ZEROFILL
) ENGINE InnoDB;
# Max value for integers is 2147483647 and for TINYINT it is 127
INSERT INTO my_padded_numbers(integerNumber, tinyIntegerNumber) VALUES (123, 42);
SELECT * FROM my_padded_numbers;
# root@local> INSERT INTO my_padded_numbers(integerNumber, tinyIntegerNumber) VALUES (123, 42);
# Query OK, 1 row affected (0.004 sec)
#
# root@local> SELECT * FROM my_padded_numbers;
# +------------------------------------------------------------------+-------------------+
# | integerNumber                                                    | tinyIntegerNumber |
# +------------------------------------------------------------------+-------------------+
# | 0000000000000000000000000000000000000000000000000000000000000123 |          00000042 |
# +------------------------------------------------------------------+-------------------+
# 1 row in set (0.001 sec)

Conclusion

The M value for declaring integers has nothing to do with the length. This is different when working with VARCHAR, for instance, where the number in the brackets indeed defines the maximum capacity.
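
For contrast, here is a minimal sketch (table and values made up for illustration) showing that the number in a VARCHAR definition is a hard limit; in strict SQL mode the second insert fails, otherwise the value is truncated with a warning:

CREATE TABLE my_strings (
    shortText VARCHAR(5)
) ENGINE InnoDB;

# Works, the value fits into five characters
INSERT INTO my_strings(shortText) VALUES ('12345');
# Fails in strict mode: ERROR 1406 (22001): Data too long for column 'shortText' at row 1
INSERT INTO my_strings(shortText) VALUES ('123456');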

Automated encrypted incremental backups with restic to a Synology mount

Synology

Synology devices are easy to set up, but the bundled backup applications are not that great. In addition, Synology does not offer SSH key based authentication for non-privileged accounts out of the box. After each upgrade the SSH key setup might be gone, and it always feels like a hack.

The following text describes how to set up painless, automated, encrypted, incremental backups via CIFS using restic. The script makes use of passwords stored in the Gnome keyring and thus can mount a CIFS (aka Samba) share without having to store the Synology user password in the /etc/fstab file or in the clear in a local file.

Restic

To quote the restic web site:

restic is a program that does backups right. The design goals are:

  • Easy: Doing backups should be a frictionless process, otherwise you are tempted to skip it. Restic should be easy to configure and use, so that in the unlikely event of a data loss you can just restore it. Likewise, restoring data should not be complicated.
  • Fast: Backing up your data with restic should only be limited by your network or hard disk bandwidth so that you can backup your files every day. Nobody does backups if it takes too much time. Restoring backups should only transfer data that is needed for the files that are to be restored, so that this process is also fast.
  • Verifiable: Much more important than backup is restore, so restic enables you to easily verify that all data can be restored.
  • Secure: Restic uses cryptography to guarantee confidentiality and integrity of your data. The location where the backup data is stored is assumed to be an untrusted environment (e.g. a shared space where others like system administrators are able to access your backups). Restic is built to secure your data against such attackers, by encrypting it with AES-256 in counter mode and authenticating it using Poly1305-AES.
  • Efficient: With the growth of data, additional snapshots should only take the storage of the actual increment. Even more, duplicate data should be de-duplicated before it is actually written to the storage backend to save precious backup space.
  • Free: restic is free software and licensed under the BSD 2-Clause License and actively developed on GitHub.

The prerequisites

We need the following things before we can start the backup:

  • A Synology user account
  • Restic installed on the local machine
  • The username and password stored in the Gnome keyring using the secret-tool
  • An initialized restic repository
  • The repository password stored in the Gnome key ring

The prerequisites are rather easy to set up. We need to add the username and password once to the Gnome keyring. We can use a label and some attributes; the attributes are used for finding the username and password again in the keyring.

secret-tool store --label="Synology username" synology username
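
The remaining secrets follow the same pattern; as a sketch (the labels are free text, while the attribute pairs after them are what secret-tool uses for the lookup):

# Store the remaining secrets in the Gnome keyring
secret-tool store --label="Synology password" synology password
secret-tool store --label="Restic password" restic password
# Retrieve a stored secret again
secret-tool lookup synology username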

Our user should also be able to mount the remote folder. One way to achieve this is by allowing the user to use sudo without a password for the mount and umount commands. This can be done by adding the following line to the sudoers file:

stefan ALL=(ALL) NOPASSWD: /usr/bin/mount, /usr/bin/umount, /usr/bin/mkdir, /usr/bin/chown

The restic commands are pretty self-explanatory.

restic -r ${SYNOLOGY_BACKUP_PATH} backup \
      ~/git \
      ~/.aliases \
      ~/.zshrc \
      ~/.bashrc \
      ~/.vimrc \
      ~/.ssh

In order to be mindful of the storage used, we will prune snapshots after a while.

# Prune older snapshots but keep 2 hourly, 7 daily and 4 weekly snapshots
restic -r ${SYNOLOGY_BACKUP_PATH} \
  forget \
  --prune \
  --keep-hourly 2 \
  --keep-daily 7 \
  --keep-weekly 4

The automation part is done by a classic cron job, which runs for instance every 4 hours.

The prerequisites are described in the comments of the script below.

The backup script

The following script will backup the folders and files listed in the backup command.

#!/usr/bin/env bash
# exit when any command fails
set -e

# The following script assumes that you initialized the directory $SYNOLOGY_BACKUP_PATH
# on the Synology as a new restic repository:
# restic -r ${SYNOLOGY_BACKUP_PATH} init
# Make sure to add the username and the password first to the key ring
# secret-tool store --label="Synology username" synology username
# secret-tool store --label="Synology password" synology password
# Also add the restic password to the keyring
# secret-tool store --label="Restic password" restic password
# Add the mount and umount command without sudo password
# by using sudo visudo and add this line (adapt username)
# sproell ALL=(ALL) NOPASSWD: /usr/bin/mount, /usr/bin/umount, /usr/bin/mkdir, /usr/bin/chown
# Add a cron job which runs every 4 hours (for example)
# MAILTO="stefan.proell@cropster.com"
# 0 */4 * * * DISPLAY=:0 /home/sproell/scripts/automatic-restic-backup.sh

NAS_IP=192.168.1.10

BACKUP_MOUNT_TARGET=/media/${USER}/cropster-synology-backup
SYNOLOGY_PATH=/home
SYNOLOGY_BACKUP_PATH="${BACKUP_MOUNT_TARGET}/2020-restic-backups"
LOCAL_USER_ID=$( id -u )
LOCAL_GROUP_ID=$( id -g )

isPathMounted() { findmnt -rno TARGET "$1" >/dev/null;} 

runResticBackup()
{
  # Store the repository password in the environment variable
  RESTIC_PASSWORD=$( secret-tool lookup restic password )
  export RESTIC_PASSWORD

  # Check if the remote directory exists, otherwise create it
  [ -d ${SYNOLOGY_BACKUP_PATH} ] ||  ( \
    sudo mkdir ${SYNOLOGY_BACKUP_PATH};\
    sudo chown -R ${LOCAL_USER_ID}:${LOCAL_GROUP_ID} \
    ${SYNOLOGY_BACKUP_PATH})

  restic -r ${SYNOLOGY_BACKUP_PATH} snapshots
  restic -r ${SYNOLOGY_BACKUP_PATH} backup \
    ~/git \
    ~/.aliases \
    ~/.zshrc \
    ~/.bashrc \
    ~/.vimrc \
    ~/.ssh
  # Prune older snapshots but keep 2 hourly, 7 daily and 4 weekly snapshots
  restic -r ${SYNOLOGY_BACKUP_PATH} \
    forget \
    --prune \
    --keep-hourly 2 \
    --keep-daily 7 \
    --keep-weekly 4
  sudo umount ${BACKUP_MOUNT_TARGET}
  unset RESTIC_PASSWORD
  unset SYNO_PASS
}

echo "Current date: $( date )"
# Can I ping the nas?
echo "Pinging the NAS 10 times"
ping -c 10 $NAS_IP > /dev/null && echo "up... proceeding" || { echo "NAS down."; exit 1; }


# Get username and password from key chain
SYNO_USER=$( secret-tool lookup synology username )
SYNO_PASS=$( secret-tool lookup synology password )

if isPathMounted "${BACKUP_MOUNT_TARGET}";
  then
    echo "${BACKUP_MOUNT_TARGET} is mounted"
  else
    # Check if the directory exists locally, otherwise create it
    [ -d ${BACKUP_MOUNT_TARGET} ] || sudo mkdir ${BACKUP_MOUNT_TARGET}
    sudo mount -t cifs \
      -o username=${SYNO_USER},password="${SYNO_PASS}",vers=2.0,uid=${LOCAL_USER_ID},gid=${LOCAL_GROUP_ID},noauto,user \
      "//${NAS_IP}${SYNOLOGY_PATH}/" \
      "${BACKUP_MOUNT_TARGET}"
fi

runResticBackup

if isPathMounted "${BACKUP_MOUNT_TARGET}";
  then
    sudo umount ${BACKUP_MOUNT_TARGET}
fi

Connecting to MySQL 5.6 using TLSv1 on Ubuntu 20.04.1

Ubuntu 20.04 updated some security policies, with the effect that some connections might not work anymore. This happened recently, for instance, with connections to AWS Aurora 5.6.

Background

AWS Aurora 5.6 only supports TLSv1 for encrypted connections.

This TLS protocol version has been considered insecure for some time now and has been disabled in Ubuntu 20.04 and MySQL Client 8 respectively. If the MySQL configuration permits TLSv1, TLSv1.1 and TLSv1.2, but your host system configuration permits only connections that use TLSv1.3 or higher, you cannot establish MySQL connections at all, because no protocol permitted by MySQL is permitted by the host system. The problem manifests itself in the following error:

ERROR 2026 (HY000): SSL connection error: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol

You can see the supported versions on the MySQL server using:

SHOW GLOBAL VARIABLES LIKE 'tls_version';
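
On an Aurora 5.6 instance you would expect output along these lines, given that only TLSv1 is supported (a sketch of the expected result, not captured from a live instance):

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tls_version   | TLSv1 |
+---------------+-------+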

Specifying the protocol version explicitly did not help:

mysql --defaults-extra-file=~/.aws_mysql_aurora --tls-version=TLSv1

The [bug report](https://bugs.launchpad.net/ubuntu/+source/mysql-8.0/+bug/1872541/comments/27) is also reflected in the official docs, but simply copying the suggested fix won't do.

Example of the problem

The following Python snippet throws a protocol error on Ubuntu 20.04.

import mysql.connector
from mysql.connector import ClientFlag

config = {
    'user': 'me',
    'password': 'secret',
    'host': '127.0.0.1',
    'port': '3306',
    'database': 'sakila',
    'raise_on_warnings': True,
    'client_flags': [ClientFlag.SSL]
}

cnx = mysql.connector.connect(**config)
cur = cnx.cursor(buffered=True)
cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")
print(cur.fetchone())
cur.close()
cnx.close()

The error thrown by Python is

mysql.connector.errors.InterfaceError: 2055: Lost connection to MySQL server at '127.0.0.1:8001', system error: 1 [SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1108)

The fix

It is possible to lower the TLS version requirements in the OpenSSL config of Ubuntu. In order for this to work with Aurora 5.6, you need to lower the TLS version to TLSv1. This can be achieved by adapting the OpenSSL settings in /etc/ssl/openssl.cnf.

First add a default section on top of the file:

openssl_conf = default_conf

and then at the end of the file add:

[ default_conf ]

ssl_conf = ssl_sect

[ssl_sect]

system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1
MaxProtocol = None
CipherString = DEFAULT:@SECLEVEL=1

This lowers the allowed TLS version to TLSv1 again. Now the Python script from above can be executed.
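
To check the handshake without the MySQL client, you can let OpenSSL negotiate TLS inside the MySQL protocol (assuming OpenSSL 1.1.1 as shipped with Ubuntu 20.04, which supports -starttls mysql; host and port are placeholders):

# Test whether a TLSv1 handshake with the MySQL server now succeeds
openssl s_client -connect 127.0.0.1:3306 -starttls mysql -tls1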

Workaround

If the hack above does not work, downgrading the MySQL client to version 5.7 will help. I downloaded the bundle from here and unpacked it. Then I installed the following packages:

sudo apt-get install libtinfo5 libaio1
sudo dpkg -i mysql-common_5.7.31-1ubuntu18.04_amd64.deb
sudo dpkg -i mysql-community-client_5.7.31-1ubuntu18.04_amd64.deb

Then I could connect again without any extra settings and flags.

Update 2020-10-14

The workaround stopped working for some reason. I then found the trick described here, which offers a temporary fix. It uses a local configuration file for OpenSSL, which can then be applied to single commands by prefixing them with a variable. Save the configuration below in a file, for instance ~/.openssl_allow_tls1.0.cnf.

openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
CipherString = DEFAULT@SECLEVEL=1

and then export it in a terminal: export OPENSSL_CONF=~/.openssl_allow_tls1.0.cnf

Update 2020-10-19

If you need to update your system, make sure to hold the installed MySQL 5.7 version of the client. You can see which version would be installed if you upgraded now:

$ apt-cache policy mysql-community-client mysql-common

mysql-community-client:
  Installed: 5.7.31-1ubuntu18.04
  Candidate: 8.0.22-1ubuntu20.04
  Version table:
     8.0.22-1ubuntu20.04 500
        500 http://repo.mysql.com/apt/ubuntu focal/mysql-8.0 amd64 Packages
 *** 5.7.31-1ubuntu18.04 100
        100 /var/lib/dpkg/status
mysql-common:
  Installed: 5.7.31-1ubuntu18.04
  Candidate: 8.0.22-1ubuntu20.04
  Version table:
     8.0.22-1ubuntu20.04 500
        500 http://repo.mysql.com/apt/ubuntu focal/mysql-8.0 amd64 Packages
     5.8+1.0.5ubuntu2 500
        500 http://at.archive.ubuntu.com/ubuntu focal/main amd64 Packages
        500 http://at.archive.ubuntu.com/ubuntu focal/main i386 Packages
 *** 5.7.31-1ubuntu18.04 100
        100 /var/lib/dpkg/status

Then you can pin the version to 5.7 using a wildcard by adding the packages to /etc/apt/preferences.

Package: mysql-common
Pin: version 5.7.*
Pin-Priority: 1001

Package: mysql-community-client
Pin: version 5.7.*
Pin-Priority: 1001

Package: libmysqlclient21
Pin: version 5.7.*
Pin-Priority: 1001
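
If you prefer apt-mark over pinning, putting the packages on hold achieves the same effect (a sketch; both approaches prevent the upgrade):

sudo apt-mark hold mysql-common mysql-community-client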

Use an SSH tunnel ending at the host also within a Docker container

There are many use cases for SSH tunnels, as they allow accessing resources behind firewalls and other obstacles over a secure channel. Since more and more services are containerized, it makes sense to use SSH tunnels also within the context of containers, especially for testing. Using SSH tunnels within Docker containers would require installing an SSH client and mounting keys. In many cases this is not possible without building a new Docker image which includes the client. As this is a cumbersome approach, an easy but insecure solution exists, which is recommended in many tutorials and posts on StackOverflow. This fix makes use of the --net=host flag, which allows accessing all ports of the host, including open SSH tunnels. For obvious reasons, this is dangerous.

A better approach is to bind the SSH tunnel to the bridge network of the Docker service. This bridge is available to all the containers connected to the particular network and thus can also forward specific ports. This technique gives much more fine-grained control over which containers may access the tunnel.

You can list the bridges with ifconfig.

br-b273916af970: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether dd:bb:aa:cc:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 205  bytes 22043 (22.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-c92ab5650a7a: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        ether aa:bb:cc  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether bb:aa:cc:aa:bb:  txqueuelen 0  (Ethernet)
        RX packets 3919  bytes 227485 (227.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3205  bytes 8586636 (8.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

You can find out which bridge a container uses with docker inspect «container». The default bridge is called docker0. You then need to enable packet forwarding to this bridge in iptables. Note that the change below is not persisted; you need to repeat it after a reboot or add it permanently.

sudo iptables -I INPUT 3 -i docker0 -j ACCEPT
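
The rule can be persisted across reboots, for instance with the iptables-persistent package on Debian/Ubuntu (a sketch, assuming you want to store all currently active rules):

# Install the persistence helper and save the active rule set
sudo apt-get install iptables-persistent
sudo netfilter-persistent save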

After this step you can open an SSH tunnel on the host and also use it inside your container. This way you do not have to install SSH clients, keys etc. The trick is to bind the SSH connection to the right interface. Below you can see an example command which allows connecting to a MySQL database via a bastion host, a typical scenario for cloud services.

ssh -L 172.17.0.1:7002:mysqlcluster:3306 bastion.example.org

Then you can access the forwarded port within the Docker container on the same IP / interface, e.g. 172.17.0.1:7002. This way you can, for instance, use the Percona PMM container for monitoring your cluster also on your local machine, without having to deploy it and expose it via the Web.
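
As a quick test, assuming the tunnel from above is running, you can connect from a throwaway container through the bridge IP to the forwarded port (image and credentials are placeholders):

# Reach the tunneled MySQL server from inside a container via the docker0 bridge
docker run --rm -it mysql:5.6 mysql -h 172.17.0.1 -P 7002 -u root -p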

Setup an SSH tunnel in IntelliJ

There seems to be a problem with the older MySQL driver and IntelliJ Ultimate. When I tried to connect to a remote MySQL instance through a tunnel, IntelliJ refused to connect and showed an error similar to the following:

Cannot connect to a database. Tried three times.

The linked online help page was not really helpful. I could create the tunnel manually and verify the settings; all seemed fine. As a next step I inspected the MySQL error log on the remote server and noticed that errors related to my connection attempts were showing up. Thus I knew at least that the connection issue was not caused by SSH, but rather seemed to be related to MySQL.

I then upgraded the MySQL driver and the connection worked out of the box. I use an SSH config section similar to this:

Host remote-mysql-database-server
	User bob
	Hostname 10.20.30.40
	IdentityFile ~/.ssh/rsa_id

This can then be used in IntelliJ. For more complex setups, for instance when you need to connect to AWS RDS and have to use a bastion host, I found it easier to set up the connection details in the SSH config file as well and keep the IntelliJ configuration simple.
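
Such a bastion setup in the SSH config might look like the following sketch (host names and the key file are placeholders; ProxyJump routes the connection through the bastion):

Host bastion
	User bob
	Hostname bastion.example.org

Host remote-mysql-database-server
	User bob
	Hostname 10.20.30.40
	ProxyJump bastion
	IdentityFile ~/.ssh/rsa_id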

IntelliJ MySQL settings

MySQL Driver

SSH settings

Demystifying MySQL User Management (Part 1)

Managing user accounts can be a bit confusing when working with MySQL. There are two main commands in that context: CREATE USER and GRANT. Obviously, CREATE USER is for adding a new user, while GRANT is used for assigning permissions.

Prerequisites: Docker Compose

For testing access to a MySQL instance, we can use Docker for running said instance and two different client containers. The docker-compose snippet below orchestrates those three containers and puts them on a shared network.

version: '3'
services:
  mysql-db:
    image: mysql:8
    container_name: mysql-8-server
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: 'world'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'      
    ports:
      - '3308:3306'
    expose:
      - '3306'      
    volumes:
      - my-data-volume:/var/lib/mysql
      - ./demo_data/world.sql:/docker-entrypoint-initdb.d/world.sql:ro
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.2
  mysql-client-1:
    image: alpine-mysql:latest
    container_name: client-1
    restart: unless-stopped
    command: tail -f /dev/null
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.3
  mysql-client-2:
      image: alpine-mysql:latest
      container_name: client-2
      restart: unless-stopped
      command: tail -f /dev/null
      networks:
        mysql-test-net:
          ipv4_address: 172.28.1.4
volumes:
  my-data-volume:
networks:
    mysql-test-net:
        ipam:
            driver: default
            config:
                - subnet: 172.28.0.0/16

The docker-compose script is pretty self-explanatory. We use a MySQL 8 container that allows connecting with the root user and an empty password. We persist the data of the MySQL database in a separate data volume called my-data-volume, and we also load the example database called world, which we downloaded before.

In order to have a nice and simple image for the clients, we can use the Alpine image and pre-install the MySQL client inside it, using the following Dockerfile.

FROM alpine:3.7
RUN apk add --no-cache mysql-client

You need to build it once on your host with the command

docker build --tag alpine-mysql:latest .

After starting docker-compose with docker-compose up, we end up with three containers:

  • mysql-8-server
  • client-1
  • client-2

MySQL User Accounts, Access and Permissions

In this blog post, I would like to highlight some issues that arise when you add accounts for users that can connect from different hosts. Our Docker container accepts connections for root from any host. In the wild, this is of course bad practice, and we are going to change that as soon as we have added a second account.

Connect to the MySQL server by starting the MySQL client inside the container client-1 and pointing it to the right host by providing the hostname of the database server. Docker resolves the container names for us nicely, because all three containers are inside the same network. To improve readability, we also change the prompt so that it is more obvious from which client we are connecting.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >

The Root User

Let's make our example more secure by allowing root to connect only from the machine the database instance is running on. Still being connected from client-1 as root, we can change the permissions. But first, let's have a look at which users exist anyway.

Client 1 >SELECT User, Host, authentication_string  FROM mysql.user\G
*************************** 1. row ***************************
                 User: root
                 Host: %
authentication_string: 
*************************** 2. row ***************************
                 User: mysql.infoschema
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 3. row ***************************
                 User: mysql.session
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 4. row ***************************
                 User: mysql.sys
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 5. row ***************************
                 User: root
                 Host: localhost
authentication_string: 
5 rows in set (0.00 sec)

As we can see, there are five preinstalled accounts. The users mysql.infoschema, mysql.session and mysql.sys are internal accounts that cannot be used for client connections. Do not delete them though; they are needed for internal tasks. Then there are two root accounts: 'root'@'%' and 'root'@'localhost'. The first one is the problematic one, because with the wildcard '%', clients can connect from any remote computer. We definitely do not want that.

So let's drop it:

DROP USER 'root'@'%';

When you now run the SELECT query again, you will notice that it still works, even after you run FLUSH PRIVILEGES. The reason is that the server verifies the connection settings only the next time the client connects. So let's exit the session and try to connect again.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
ERROR 1130 (HY000): Host '172.28.1.3' is not allowed to connect to this MySQL server

As expected, we cannot connect from a remote host anymore. Let's log in from within the MySQL container instead.

docker exec -it mysql-8-server  mysql -uroot -h localhost --prompt "Server >"
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Server >

Note how we are now connecting from within the mysql-8-server container, using localhost as the hostname.

Localhost vs 127.0.0.1

In the example above we connected using localhost as the hostname. When we try the same with the IP address 127.0.0.1 of the loopback interface, which is often seen as synonymous, it does not work anymore.

docker exec -it mysql-8-server  mysql -uroot -h 127.0.0.1 --prompt "Server >"
ERROR 1130 (HY000): Host '127.0.0.1' is not allowed to connect to this MySQL server

This is confusing; after all, we can verify that localhost resolves to 127.0.0.1 (after installing the ping tool inside the MySQL container with apt-get install inetutils-ping).

But there is more to it. First of all, MySQL can use hostname resolution and would make a DNS request to get the IP address for a hostname. As this can be pretty slow, it is turned off by default, as you can verify like this:

Server >show variables like 'skip_name_resolve';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| skip_name_resolve | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

Furthermore, localhost is a special hostname for MySQL. You can connect to MySQL either by using a TCP connection or a file socket. The file socket only works on the same machine the server is running on, and it is the faster method. This is why MySQL treats connections coming from localhost as socket connections by default. If you want to use 127.0.0.1, you can add this as a second host for the existing user using GRANT.

Server >GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1';
ERROR 1410 (42000): You are not allowed to create a user with GRANT

Now that's weird. After all, we are root and should be allowed to do anything, yet we see the error message above. The reason is that no account for 'root'@'127.0.0.1' exists yet.

Server >CREATE USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT ALL PRIVILEGES ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT GRANT OPTION ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.08 sec)

You always need to create a user account first before setting permissions with GRANT.

While we are at it, we can also set a password for root. Note how we set the password for both root accounts; accounts are always identified by 'username'@'hostname'.

Server >ALTER USER 'root'@'localhost' IDENTIFIED BY 'super-secret';
Server >ALTER USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';

Adding Users

Let's log on with our new credentials again and create a new user called bob with some permissions for the world database.

CREATE USER 'bob' IDENTIFIED BY 'secret';

What looks pretty harmless adds an account for the user called bob. Without a host being specified, MySQL simply adds the wildcard for hosts and therefore allows connecting to the MySQL instance from any IP address. You can verify this with the following command, which I am going to use more often in the course of this post.

Server >SHOW GRANTS FOR 'bob';
+---------------------------------+
| Grants for bob@%                |
+---------------------------------+
| GRANT USAGE ON *.* TO `bob`@`%` |
+---------------------------------+
1 row in set (0.00 sec)

That's not what we had in mind. Let's delete bob's account again and create it with a proper host.

DROP USER 'bob';
CREATE USER 'bob'@'172.28.1.3' IDENTIFIED BY 'secret';

Note how we limited the account to allow connections only from Client 1, by providing its IP address. You can't connect from Client 2.

stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-1 mysql -ubob -h mysql-8-server -psecret --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >exit
Bye
stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-2 mysql -ubob -h mysql-8-server -psecret --prompt "Client 2 >"
ERROR 1045 (28000): Access denied for user 'bob'@'172.28.1.4' (using password: YES)
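
So far bob has no permissions on the world database yet. A minimal sketch granting read access only (adjust the privilege list to your use case):

Server >GRANT SELECT ON world.* TO 'bob'@'172.28.1.3';
Server >SHOW GRANTS FOR 'bob'@'172.28.1.3';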

These are the very basics of MySQL account management. In the future I plan to go into more detail.

Percona Xtrabackup – Some General Tips

When you stream Xtrabackup to a gzipped tar file, it also contains the information about the binary log position the server was at when the backup was started. This information is needed to start the slave at the same position again.

time nice -n 15 xtrabackup --backup --slave-info --safe-slave-backup --stream=tar | gzip -c > /var/backup/percona.dump.tar.gz


In order to get that position without extracting the archive file, you can use the following command:

tar -xOf /var/backup/percona.dump.tar.gz xtrabackup_binlog_info

You can list all files in the archive like this:

tar -ztvf percona.dump.tar.gz


Note that both operations can take quite some time and resources.

Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing a database multiple times. Although most developer machines offer an SSD and enough RAM, database initialization can nevertheless consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can utilize this feature to speed up disk-heavy operations such as database imports, by moving the data from the disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 2,22033 s, 484 MB/s


For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs

$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 0,309017 s, 3,5 GB/s



As you can see, writing 1GB to memory is 7x faster. With the following docker run command, you can spin up a default MySQL container where the data directory resides in a tmpfs.

docker run -d \
  --rm \
  --name mysql-56-in-memory \
  -p 3307:3306 \
  --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
  -e MYSQL_DATABASE=dev \
  mysql:5.6



The arguments of docker run mean the following:

  • --rm: delete the container once it is stopped
  • --name: a name for the container
  • -p: map the host's port 3307 to the port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to them from the host via the specified port
  • --tmpfs: mounts the data directory of the container to a RAM disk. It should be writeable (rw). noexec prevents the execution of binaries, nosuid prevents changing the permission flags, and size specifies the space occupied by the tmpfs partition in memory. Adapt this to your use case; the minimum for MySQL is around 200MB, plus the space needed for your data, indices etc.
  • MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  • MYSQL_DATABASE defines the name of a database to be created

If you run this command you can connect to the container like this: mysql -u root -h 127.0.0.1 -P 3307

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk, but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.

You can achieve the same also with Docker Compose if you would like to orchestrate multiple containers.

version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD='true'
      - MYSQL_HOST=''
    volumes:
      - data:/var/cache
    ports:
      - "3307:3306"

volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs

Measure Replication Delay with Percona Toolkit Heartbeat (pt-heartbeat)

Replication lag occurs when the slave cannot keep up with the changes being made on the master instance. The reasons for a lag can be network congestion, poor performance on the slave, or service interruptions of the slave mechanism, leading to a large pile of work the slave needs to catch up on. When you use SHOW SLAVE STATUS you can monitor the current delay, which is indicated in the Seconds_Behind_Master column. Ideally this value is zero, but the smallest granularity you get for this delay counter is one second.

Percona offers a tool that allows measuring the delay more accurately, by writing timestamps into the master database and calculating the difference when they arrive at the slave: [pt-heartbeat][1]

On the Master

Create a dedicated database schema called percona and add the following table.

CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (
    ts                    varchar(26) NOT NULL,
    server_id             int unsigned NOT NULL PRIMARY KEY,
    file                  varchar(255) DEFAULT NULL,    -- SHOW MASTER STATUS
    position              bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS
    relay_master_log_file varchar(255) DEFAULT NULL,    -- SHOW SLAVE STATUS
    exec_master_log_pos   bigint unsigned DEFAULT NULL  -- SHOW SLAVE STATUS
);


Then you can launch the script on the master, where it will write new timestamps periodically as long as the script is running.

pt-heartbeat -D percona --update --utc h=127.0.0.1,u=root,p=SECRET,P=3306

The -D flag specifies the database schema where the table for the heartbeat data is stored. The --update flag is needed on the master for updating the table, and the last part is the DSN, which specifies host address, user name and password. Very important is the --utc flag, which ensures that the timestamps are interpreted as UTC, regardless of the timezone setting.

On the Slave

Create a user for reading the replicated heartbeat table like this:

GRANT REPLICATION SLAVE, PROCESS ON *.* TO `percona_checksum`@'10.20.30.40' IDENTIFIED BY 'SECRET';
GRANT SELECT ON percona.* TO `percona_checksum`@'10.20.30.40';


Then you can run the script and point it to the slave. It will output precise delay counts in fractions of seconds.

pt-heartbeat h=127.0.0.1,u=percona_checksum,p=SECRET,P=3306 -D percona --monitor --utc --master-server-id 1

Notice the different DSN, the --monitor flag and the --master-server-id option, which of course needs to be the server id of your master. You need this because the tool supports hierarchies of masters, and it therefore needs to know which one is to be considered.

Results

The results will look similar to this

0.09s [  0.00s,  0.00s,  0.00s ]
0.02s [  0.20s,  0.00s,  0.00s ]
0.09s [  0.00s,  0.00s,  0.00s ]
0.03s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.10s [  0.01s,  0.00s,  0.00s ]
0.12s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.11s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.03s,  0.01s,  0.00s ]


The output is the current delay followed by moving averages over 1m, 5m and 15m, as you might know them from your favorite OS already.

Have a look at the [official documentation][1]; there are a lot of options available.






 [1]: https://www.percona.com/doc/percona-toolkit/LATEST/pt-heartbeat.html

Setup AWS MySQL 5.6 Aurora as a Slave for an external Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secured connection. For this reason we need to create client certificates to be used by AWS Aurora. The steps below should work for plain RDS as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the source. Also note that you need to have different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First we need to create the certificate authority that can sign the keys:

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN name. It needs to be different for all of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair:

#Generate a server key. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients.

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart.

# SSL Server Certificate
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem

Then create a user for the slave:

CREATE USER 'aws'@'%' IDENTIFIED BY 'SECRET';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'aws'@'%' IDENTIFIED BY 'SECRET' REQUIRE SSL;


Slave Setup

On AWS you do not have SUPER privileges, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, there might be an error.

CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;



Now you need to pass the client certificate data as JSON to AWS Aurora.

CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE----- MIIBAgMBVRpcm9sMRIw… … -----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE----- KAoIBAQCzn28awhyN8V56Z2bskCiMhJt4 … -----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY----- SbeLNsRzrPoCVGGqwqR6gE6AZu … -----END RSA PRIVATE KEY-----"}');



A message that the SSL data was accepted will appear if you pasted the certificate, the key and the CA certificate correctly.
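
Before starting the replication, the external master has to be registered as well. A sketch using the corresponding stored procedure (hostname, credentials and binlog coordinates are placeholders; the last argument enables SSL):

CALL mysql.rds_set_external_master ('master.example.org', 3306, 'aws', 'SECRET', 'mysql-bin.000001', 107, 1);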

Finally, start the replication and check the status:

CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G

Tests and Troubleshooting

On the master, you can check whether the slave even tries to connect, for instance with tcpdump. In the example below, the IP 1.2.3.4 would be the AWS gateway address as seen by your firewall.

sudo tcpdump src host 1.2.3.4 -vv
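
You can also verify that the master accepts SSL connections with the generated client certificate by using a plain MySQL client from any machine that can reach it (the hostname is a placeholder):

mysql -h master.example.org -u aws -p \
  --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem \
  -e "SHOW STATUS LIKE 'Ssl_cipher';"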





Install Innotop from Source

Innotop is a great tool, but it is not included in the current Ubuntu repositories. Here is how you install it manually:

# Install perl database interface
sudo apt-get install libdbi-perl
# Install MySQL and Terminal perl modules
sudo cpan Term::ReadKey DBD::mysql
# Clone innotop
git clone https://github.com/innotop/innotop.git
# Enter directory
cd innotop
# Make
perl Makefile.PL
# Install
sudo make install

Then you can run innotop like this:

innotop --user $ADMIN_USER --password $ADMIN_PASSWORD --host $HOST





Grafana and InfluxDB with SSL inside a Docker Container

Self-signed SSL certificates

On the host, create a directory for storing the self-signed SSL certificates. This directory will be mounted into both the Grafana and the InfluxDB container at /var/ssl. Create the self-signed SSL certificates as follows:

mkdir -p /docker/ssl
cd /docker/ssl/
# Generate a private key
openssl genrsa -des3 -out server.key 1024
# Generate CSR
openssl req -new -key server.key -out server.csr
# Remove password
openssl rsa -in server.key -out server.key
# Generate self signed cert
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Set permissions
sudo chmod 644 server.crt
sudo chmod 600 server.key

Next, create a config directory with mkdir conf and add individual configuration files for Grafana and InfluxDB.

Grafana

In the file ./conf/grafana/defaults.ini set the protocol to https and provide the paths to the mounted ssl directory in the container.

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
...
...
# https certs & key file
cert_file = /var/ssl/server.crt
cert_key = /var/ssl/server.key
...


InfluxDB

The file ./conf/influxdb/influxdb.conf is also pretty simple. Add an [http] section with the following settings:

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"

[http]
  https-enabled = true
  https-certificate = "/var/ssl/server.crt"
  https-private-key = "/var/ssl/server.key"



Environment

You can set environment variables in env files for the services.

env.grafana

GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource



env.influxdb

INFLUXDB_REPORTING_DISABLED=true
INFLUXDB_DB=
INFLUXDB_HTTP_AUTH_ENABLED=true
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=
INFLUXDB_USER=
INFLUXDB_USER_PASSWORD=
INFLUXDB_WRITE_USER=
INFLUXDB_WRITE_USER_PASSWORD=



Docker Compose

Now you can launch the services by running docker-compose up for the following file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /docker/ssl:/var/ssl
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /docker/ssl:/var/ssl
      - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:

Let's Encrypt Setup

If you require valid certificates, you can also use certificates from Let's Encrypt.

First, create the certificates on the host:

certbot certonly --standalone --preferred-challenges http --renew-by-default -d iot.example.org


Then use this docker-compose file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:

Compile Percona Query Playback

Install the prerequisites and clone the repository.

sudo apt-get install libtbb-dev libmysqlclient-dev libboost-program-options-dev libboost-thread-dev libboost-regex-dev libboost-system-dev libboost-chrono-dev pkg-config cmake libssl-dev
git clone https://github.com/Percona-Lab/query-playback.git
cd query-playback/
mkdir build_dir
cd build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..


You might see this error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MYSQL_LIB
    linked by target "mysql_client" in directory /home/sproell/git/query-playback/percona_playback/mysql_client

-- Configuring incomplete, errors occurred!
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeOutput.log".
See also "/home/sproell/git/query-playback/build_dir/CMakeFiles/CMakeError.log".



I found this [issue on GitHub][1], and after editing the file CMakeLists.txt (in the directory ~/git/query-playback/percona_playback/mysql_client/CMakeLists.txt) as suggested, the tool compiles. You need to replace find_library(MYSQL_LIB "mysqlclient_r" PATH_SUFFIXES "mysql") with find_library(MYSQL_LIB "mysqlclient" PATH_SUFFIXES "mysql") (remove the _r suffix).

Then you can compile the project as [documented here][2].

cd ~/git/query-playback/build_dir
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make
sudo make install