Boox Note 2 Extensive Review

Digital Writing Devices - Boox Note 2

Recently I bought a reMarkable 2 and was a bit disappointed. I then bought a Boox Note 2 for a very good price on Willhaben and am very happy with it; it is exactly what I was looking for: a versatile eBook reader with good note taking functionality.

Boox Note 2

What it is

The Boox Note 2 is a 10.3 inch eInk tablet with a capacitive touch screen you can navigate without a pen, and it also supports Wacom stylus pens for precise drawing and writing.

Boox Note 2 booting

The device has a USB-C port and a fingerprint sensor. Obviously there is also a power button on the top of the device. It supports OTG, so you can connect peripherals too. It has wifi, comes with small speakers and a microphone, and offers the other features you would expect from a tablet.

In contrast to the reMarkable, the Boox Note 2 comes with an Android 9 operating system. This means you can install any Android app which is compatible with Android 9. The device has a built-in light, which means you can read in the dark. The light can be adjusted and you can also fine tune its color temperature.

The killer feature for me is that I can install arbitrary apps. For instance, this allows me to use Amazon’s Kindle app and other readers. I can also install Syncthing or share documents with Google Drive. You might not find the desired apps within the integrated app store, which has a quite limited selection. But when you browse Google’s Play Store, you can select the device after registering it with your Google account and then install apps remotely.

Boox Note 2 App overview

You cannot directly annotate documents you read within third-party apps, but it is possible to annotate PDF documents when you use the integrated PDF reader. Out of the box the device supports the following formats: PDF, EPUB, TXT, DJVU, HTML, RTF, FB2, DOC, MOBI, CHM and others.

Boox Note 2 with cover

The Boox Note 2 offers a simple cloud sync service to back up documents and to upload documents to the device. The note taking functionality is great: it offers everything you need and the lag when writing is rather short. There is even a split screen functionality, where you can use the tablet in landscape mode and open a document on one side while scribbling into a notebook on the other side of the screen.

Boox Note 2 note overview

What it is not

The hardware itself does not feel very high quality. The menus are quite simple and the English translations are not the best; some menu items have funny names. The sync function is only very basic. Exporting the notes requires syncing them first so that the device produces PDFs; only then can they be exported manually.

Rename foreign keys in MySQL

Foreign keys are essential for maintaining referential integrity in relational databases. Sometimes their names change, for example when you use Percona’s pt-online-schema-change and suddenly all your foreign keys have an underscore prefix.

MySQL does not provide an option to rename foreign keys on the fly. Typically the foreign key needs to be dropped and re-created. Adding a foreign key can be expensive when the index on the column does not exist yet. In the case of renaming a foreign key, the index on the key column already exists, thus renaming a foreign key is a lot cheaper.
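Since the renamed keys differ from the originals only by one or two leading underscores, computing the desired name back is trivial. A minimal Python sketch (the function name is my own, not part of any tool):

```python
import re

def original_fk_name(name: str) -> str:
    """Strip the one or two leading underscores that
    pt-online-schema-change may have prepended to a foreign key name."""
    return re.sub(r'^_{1,2}', '', name)

print(original_fk_name('_fk_movie_actor'))   # fk_movie_actor
print(original_fk_name('__fk_movie_actor'))  # fk_movie_actor
```

The same logic is what the SQL query further below expresses with REGEXP and SUBSTRING.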

The text snippets below are from the documentation.

MySQL supports adding and dropping a foreign key within a single ALTER TABLE statement. This only works when you use the ALGORITHM=INPLACE option.

Adding and dropping a foreign key in the same ALTER TABLE statement is supported for ALTER TABLE ... ALGORITHM=INPLACE but not for ALTER TABLE ... ALGORITHM=COPY.

INPLACE makes the ALTER statements much faster, but there are additional constraints.

INPLACE: Operations avoid copying table data but may rebuild the table in place. An exclusive metadata lock on the table may be taken briefly during preparation and execution phases of the operation. Typically, concurrent DML is supported.

That is important because

ALTER TABLE operations that use the COPY algorithm wait for other operations that are modifying the table to complete. After alterations are applied to the table copy, data is copied over, the original table is deleted, and the table copy is renamed to the name of the original table. While the ALTER TABLE operation executes, the original table is readable by other sessions (with the exception noted shortly). Updates and writes to the table started after the ALTER TABLE operation begins are stalled until the new table is ready, then are automatically redirected to the new table. The temporary copy of the table is created in the database directory of the original table unless it is a RENAME TO operation that moves the table to a database that resides in a different directory.

The problem is that ALGORITHM=INPLACE only works when foreign_key_checks are disabled. Only then can we use a non-locking online DDL change. The key to this is adding LOCK=NONE as an additional setting. Specifying the lock mode is good practice in general: if an operation cannot be performed without a lock (LOCK=NONE) or with only a shared lock (LOCK=SHARED), the ALTER statement fails with an error instead of silently taking a stronger lock. So you do not risk locking the database when it could have been avoided.

Retrieving odd foreign key names

The following query retrieves all foreign keys with an underscore prefix and produces the ALTER TABLE statements we need for renaming the FKs. It only considers foreign keys from the sakila database.

    SELECT
        CONSTRAINT_NAME AS old_name,
        CASE
            WHEN CONSTRAINT_NAME REGEXP '^_{1}[A-Za-z0-9]{2}.*$' THEN SUBSTRING(CONSTRAINT_NAME, 2)
            WHEN CONSTRAINT_NAME REGEXP '^_{2}[A-Za-z0-9]{2}.*$' THEN SUBSTRING(CONSTRAINT_NAME, 3)
        END AS new_name,
        CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME,
               ' DROP FOREIGN KEY ', CONSTRAINT_NAME,
               ', ALGORITHM=INPLACE, LOCK=NONE;') AS drop_statement,
        CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME,
               ' ADD CONSTRAINT ',
               CASE
                   WHEN CONSTRAINT_NAME REGEXP '^_{1}[A-Za-z0-9]{2}.*$' THEN SUBSTRING(CONSTRAINT_NAME, 2)
                   WHEN CONSTRAINT_NAME REGEXP '^_{2}[A-Za-z0-9]{2}.*$' THEN SUBSTRING(CONSTRAINT_NAME, 3)
               END,
               ' FOREIGN KEY (', COLUMN_NAME, ') REFERENCES ',
               REFERENCED_TABLE_SCHEMA, '.', REFERENCED_TABLE_NAME, '(', REFERENCED_COLUMN_NAME, ')',
               ', DROP FOREIGN KEY ', CONSTRAINT_NAME,
               ', ALGORITHM=INPLACE, LOCK=NONE;') AS alter_statement_for_new_name_combined
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE TABLE_SCHEMA = 'sakila'
      AND REFERENCED_TABLE_NAME IS NOT NULL
      AND CONSTRAINT_NAME LIKE '\_%';

Changing the foreign keys on the fly

When you can ensure that no changes are happening to the database containing the foreign key you want to rename, the operation itself is only a metadata change and very fast. So it is worth shutting down the applications writing data, ensuring that all connections are closed, and then running the ALTER statements with foreign_key_checks disabled.

The proper ALTER statement has the following structure:

SET SESSION foreign_key_checks = 0;
ALTER TABLE sakila.Actor ADD CONSTRAINT fk_movie_actor FOREIGN KEY (movie_id) REFERENCES sakila.Movie(id), DROP FOREIGN KEY _fk_movie_actor, ALGORITHM=INPLACE, LOCK=NONE;
SET SESSION foreign_key_checks = 1;

OpenWrt: Extend an existing wifi with a guest wifi network

OpenWrt on a TP-Link RE450

Recently a Wifi repeater I set up years ago was reset by accident and I lost the configuration which was providing extended wifi coverage and a guest wifi on a TP-Link RE450. Reconfiguring the setup cost me much more time than I am willing to admit, thus I wanted to share the basics in order to be prepared next time. Also, I have a backup now.

The TP-Link RE450 v1.1 (Image from TP-Link)

The TP-Link RE450 is a pretty nice and affordable device with two wifi bands (5 GHz and 2.4 GHz). You can get it for around 50 Euros and it comes with three antennas and an Ethernet port. It can also run LEDE / OpenWrt, an open source firmware for embedded systems such as routers and wifi extenders. The history of LEDE and OpenWrt can be read here; in this article I will call the software OpenWrt although the interface still shows LEDE. I will also refer to the RE450 as the router.

Connecting to the router

Since this router will be embedded into an existing computer network, it is essential to give it a unique IP in a unique subnet. I decided to assign a dedicated IP address and subnet to the router. The existing network is a wireless network called Wifi with its own subnet. Our plan is to add a guest wifi network with its own SSID called Guest and its own password. Clients from the guest network should not be able to access any other devices within the guest network, nor any clients from the existing network.

After installing the firmware, the router will have its default address. In order to avoid a clash with the existing Wifi network, I attached the router to my computer with an Ethernet cable and disabled wifi on the computer during the setup. I then assigned a static IP address to my PC in the same subnet. While I was at it, I created a second Ethernet profile with an address in the desired target subnet, so I could switch over once the router is configured. Now you can easily switch between the subnets.

Installing the firmware

The first task is to get rid of the proprietary firmware and install OpenWrt. There are many instructions out there; it is important to verify the firmware and device version with great attention. Otherwise you might produce a 50 Euro paperweight and waste weekends (been there). In case you have an older version installed, please consider updating.

For this article, I upgraded from LEDE 17.01 to the more recent OpenWrt release 19.07 and downloaded this particular release. For the initial setup, open up a browser and go to the router’s address.

Basic setup of the router

We have now installed OpenWrt on the router and can begin to configure it. You will be greeted by the OpenWrt interface and advised to set a password.

The first login.

We follow the advice and set a good password. Next we set a static IP by going to Network > Interfaces and editing the existing LAN interface. This is the Ethernet connection we are connected through, and we want to make sure we always find this device at a static IP in the standard subnet.

Assign a static IP

Always save and apply changes. After this change you have to switch to the Ethernet profile we created earlier so that you can access the router again. Once we have logged in with our new password, we should be greeted by the OpenWrt LuCI web interface.

Setup the interfaces and wifi networks

The first step is to connect to the existing Wifi network, so that we have a working internet connection on the router for downloading updates. Go to Network > Wireless. You will see the default wireless interfaces called OpenWrt for both devices (2.4 GHz and 5 GHz).

The default Wifi interfaces

Remove them so that you reach a clean state.

No wireless interfaces, only devices

Depending on whether your existing wireless network is on the 5 GHz or the 2.4 GHz band, use the appropriate device, click scan and select the existing network. Obviously you need to be in range to see the network.

Join the existing Wifi network

You will then be prompted with a details page where you enter the passphrase for the existing network and where you can set the name of the new network. This is maybe a bit confusing, because this step actually creates a new interface. Name it wifi. On the firewall settings tab, add this network to the wan zone. This is the crucial step, because the existing wifi will act as the Internet uplink for the guest network.

The joined, existing wifi network

Make sure to save and apply. You should then be able to ping any public web site using Network > Diagnostics.

Testing ping

If this works, it would be a perfect time to make a backup of the configuration.

Setup the Guest Wifi

The guest wifi also needs a new interface, so go to Network > Interfaces and click add new. Select static IP address and assign the name guestwifi. Leave the physical interface unassigned for now.

Add the guest interface

On the next page, define a static address for the guest network in its own subnet with the default netmask. You should also add a public DNS server.

Set a static address.
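For reference, the same configuration can be expressed directly in the UCI config files instead of clicking through LuCI. The sketch below is my own reconstruction; the address and the DHCP pool values are hypothetical placeholders, since your guest subnet will differ:

```
# /etc/config/network (sketch; the address is a hypothetical example)
config interface 'guestwifi'
        option proto 'static'
        option ipaddr ''
        option netmask ''
        list dns ''

# /etc/config/dhcp (sketch; pool values are the usual defaults)
config dhcp 'guestwifi'
        option interface 'guestwifi'
        option start '100'
        option limit '150'
        option leasetime '12h'
```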

Then click on the firewall tab and create a new zone guestwifi.

Add a new firewall zone

Then click on the DHCP server tab and enable the service.

Enable DHCP service.

Review the settings and save the changes.

The DHCP settings can remain as they are.

Every guest will then get an IP address from this subnet. Save and apply.

Then proceed to Network > Wireless again and create a new wireless guest network. I used the second wifi device (the other band) for this. Click on add, pick Access Point as the mode and give it a name, for instance Guest.

Add the guest network.

Then - and this is very important - go to the Wireless Security tab, pick WPA2 as the encryption standard and set a password you can share with your guests.

Set a password and enable encryption.

The last step is to enable client isolation in order to prevent your guests from trying nasty things on each other. You find the setting on the advanced tab.

Enable client isolation
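The resulting wireless configuration can also be inspected in /etc/config/wireless. This is a sketch under assumptions: the radio name (radio1), the section name and the key are placeholders you would adapt to your device:

```
# /etc/config/wireless (sketch; radio1 and the key are placeholders)
config wifi-iface 'guest'
        option device 'radio1'
        option mode 'ap'
        option ssid 'Guest'
        option network 'guestwifi'
        option encryption 'psk2'
        option key 'a-password-to-share'
        option isolate '1'
```

The `isolate '1'` line is the client isolation switch from the advanced tab.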

Now you should be able to connect to the Guest wifi and get an IP address assigned to your client. But it will be pretty boring, because there is no internet yet.

Setup the firewall

The last step involves setting up the firewall. Go to Network > Firewall. First of all we need to ensure that we have four different zones: lan, wan, wifi and guestwifi. The lan and wan zones are created by default; we have already created the other two zones, wifi and guestwifi.

The zone overview should look similar to this. We can see that the guestwifi zone is allowed to forward to the wan zone. Also make sure that masquerading is enabled for the wan zone (it is by default).

Firewall zones

The details of the guestwifi zone settings are shown below. Note the default reject settings for the INPUT and OUTPUT chain and that the only allowed zone to forward traffic to is the wan zone.

Guestwifi zone details

Now we have to set up three traffic rules in order to enable DHCP and DNS for the guests and to prevent them from accessing the other networks and the router web interface.

Enable DNS

Allow port 53 to be used.

Allow DNS

Enable DHCP (ports 67 - 68)

Allow the UDP port range 67 to 68 for DHCP requests.

Guestwifi zone details

Block other networks

In order to separate the guest wifi from our regular wifi and the router, we block their entire subnets.

Guestwifi zone details
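The three traffic rules translate roughly to the following entries in /etc/config/firewall. This is a sketch based on the steps above; the rule names and the blocked subnet 192.168.1.0/24 are hypothetical examples, to be replaced with your own networks:

```
# /etc/config/firewall (sketch; names and the blocked subnet are placeholders)
config rule
        option name 'Guest-Allow-DNS'
        option src 'guestwifi'
        option dest_port '53'
        option proto 'tcpudp'
        option target 'ACCEPT'

config rule
        option name 'Guest-Allow-DHCP'
        option src 'guestwifi'
        option dest_port '67-68'
        option proto 'udp'
        option target 'ACCEPT'

config rule
        option name 'Guest-Block-Other-Nets'
        option src 'guestwifi'
        option dest '*'
        option dest_ip '192.168.1.0/24'
        option proto 'all'
        option target 'REJECT'
```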


OpenWrt works very nicely once the setup is clear. Some of the naming conventions are a bit confusing, but I guess this is normal given that it is a complex matter. This tutorial showed how to create a guest network on a device which is itself a client in an existing wifi network.

Remarkable 2 Extensive Review

A new writing and reading experience - Not there yet

In the following I am going to review the reMarkable 2 e-ink tablet. I bought the device in the pre-sale and tested it extensively. While the writing experience is great, there are quite a few drawbacks. Unfortunately those drawbacks outweigh the advantages by far for me. Although I was very enthusiastic about the reMarkable 2, I decided to send it back. At the end of this article you will find my thoughts.

On writing and thinking

After all these years working with computers professionally, I am still a fond lover of handwriting. I consider writing by hand a cultural technique worth preserving and I admire people who have a beautiful style of writing. Unfortunately my handwriting is sloppy and hard to read for others, and sometimes even for myself. Nevertheless, investing the time to write a thought down manually allows me to retain the information longer. Although I use a simple Markdown based text file for keeping notes at work, I also use a classic notebook and Tombow calligraphy pens for jotting down ideas and tasks. Obviously, making little sketches and drawings is also more convenient with a regular pen and paper approach.

Digitization of an old craft

A friend of mine is using a reMarkable 1 and is quite happy with it. It seemed lightweight and I liked the idea of having digitized notes with the proper look and feel of paper. I have also seen other friends using a tablet like the iPad for taking handwritten notes with a special pen. Using a tablet with its glossy surface did not seem too appealing to me, although an Android tablet would of course offer a completely different set of possibilities. I also noticed that writing on a tablet seems to involve a lot of navigating and selecting functions before the actual writing takes place, and of course it is a permanent distraction, just as a regular laptop would be in a meeting. It was the simplicity and the dedicated purpose of the reMarkable 2 that I found intriguing, so when it was announced, I decided to give it a try. By the time it arrived I had enough distance to reconsider my consumerist decision and thought about selling it right away. My decision to give it a try was made during a call with my friend Norbert, who has owned a reMarkable 2 for a few weeks and was positive about it. I will now try it for the 30 day no questions asked period and see how it works for me.

The order process


  • 1x Marker Plus: €40
  • 1x Book Folio – Polymer weave – Gray: €20
  • 1x reMarkable 2: €399

Including taxes and shipping I ended up paying 459 EUR upfront. The delivery date was moved twice, but I finally received the device on the 19th of November. The reMarkable web site still offers the same pre-sale option, though the marker is more expensive now. However, I think the price might change again when the product is rolled out globally.

Screenshot from the reMarkable web site, taken on 19th Nov 2020


The product is very nicely packaged. I received the reMarkable 2 device, the marker and the protective cover (book folio) in separate cardboard boxes. A USB Type C charging cable and some extra pen tips are included.

The little boxes.

The tablet itself is super thin and looks very well made. It is also light, but it does not feel fragile.


The setup is simple. One needs to add the wifi details on the device and register it online. After entering a one time code the registration is complete. The settings are straightforward and clear. Basically, one only needs to state whether you are left or right handed and select the language you are writing in for the text recognition.

No overwhelming settings.


There is only one hardware button, to turn the device on and off. You can set a pin code to protect the device and its content from others. The display is very sharp and the reading experience is exceptional. Writing is also very nice; the response time is super fast, almost without delay. One can select different pens and markers and alter the colors between black, grey and white. My pen also came with an eraser function at its end. Below is an example of how an exported PDF looks.

My beautiful handwriting.

The text recognition works well, though one has to concentrate on nicer handwriting; the text above would come out as gibberish. Also, it does not detect the language automatically; you have to select it as a global setting. The text conversion functionality is rather primitive, but should be sufficient to make the text searchable.

There is also a plugin for the Chrome browser which allows you to send any web site via the reMarkable cloud to your device. This works extremely well and makes up for the fact that there is no browser on the device itself. It comes, of course, with a big privacy downside, because reMarkable will know very well what you are reading. There is also an app for Android, which allows you to view and upload documents from the phone. Sharing articles does not seem to be supported.

Here is an example of a "quality journalism" article that worked well.

Nicely rendered web page

Compared to other eBook readers it offers more reading real estate and it is much thinner.

From left to right: Tolino Shine, Amazon Kindle, reMarkable2

Conclusion: Sending it back

I have been using the reMarkable2 for almost two weeks every day. The device arrived with great timing, because I was having a lot of meetings recently. In total I have written more than 80 pages and the writing feel is exceptional and fun.

Collection of notes on the reMarkable2

However, there are too many little things that are not there yet. I do not intend this article to play down the device; I do not like slating reviews, especially of products from small companies with a great vision. I repeat that the reMarkable 2 is really great, but it does not fit into my workflow, and there are too many little things which should have been fixed in a second revision.

The most annoying part is the syncing solution. This could be so much easier and I do not understand why the device is so closed down. Of course, you could write that about many other devices too (looking at you, Apple), but in this case the scope of the application is so narrow. You can only use it for writing and reading, so why not make this as frictionless as possible? I would have expected that a single purpose device really does this one thing extremely well. I am not willing to accept many compromises on a 460 EUR jotter.

Yes, there are independent projects on GitHub improving and fixing things, but although I love tinkering, I do not want to fiddle with a writing device that much. I do not want to install scripts to keep some directories in sync (and I say this as a Linux user of 15 years). I just want it to work.

In order to find my notes again after some time, I would need something that goes beyond near-miss OCR. Sure, my writing is not easy to read, but even when I try, there are too many errors. Also, the reMarkable 2 can only recognize one language and very simple formatting styles, such as lists. There is no search function. If you want to write something more than a shopping list, it will not be usable without properly digitizing the document.

Maybe there will be a future iteration of the reMarkable; I would totally give it another shot. But as it looks now, it is not what I expected: a replacement for a Moleskine notebook.

Pros and Cons

I evaluated the device over the following weeks and updated the list of pros and cons as I used it. It seems to be a great device and the list below should not leave the impression that it is not, just because the list of negatives is longer. The last update was on 04.12.2020.


  • Very good look and feel
  • The writing process is great
  • It does feel almost like paper
  • Someone more artistic might even be able to draw on it
  • Super lightweight


The folio is not for left handers

Yes, we are a minority, but while the device supports left handed folks, the protective sleeve does not. I am used to this problem and would turn a classic notepad around if needed, but this does not work with the folio. You can only mount it on the left side and it gets in my way. I love its hipsteresque appeal, but I probably will not be able to use it.

Lefties will relate

No backlight

I love reading in bed, next to my special someone who requires more sleep than I do. The reMarkable 2 has no backlight at all. It really is like a book: you cannot read in the dark without a light source. This seems to be the trade-off for the slim form factor. Other devices like the Amazon Kindle or the Tolino Shine do have backlights, but are much thicker and heavier.

Private cloud

The device only supports reMarkable’s own cloud. You cannot sync your documents from other clouds. This is a big blocker; the benefit would be huge if one could just use Google Drive, Owncloud or whatever.

No browser

Even a simple browser for selecting articles would be nice.

Sharing options

The Chrome extension is great, but the Android app does not allow sharing articles. I do not understand this; it would be extremely beneficial. Getting files onto the device is in general cumbersome. It basically only works via the Chrome extension or, when connected via USB-C, through the integrated web server on the reMarkable 2.


Shared web sites show up under ebooks and there is no way to configure this. If you use the feature extensively, it gets cluttered immediately, since you have no control over the file name or where a file is stored. You can create folders, but not in the web interface. In the Android app you can only rename files, not move them. This means you have to fiddle with each and every file on the device.


When sending a web page to the device via the Chrome plugin, the rendering can be weird. This can also happen with rather simple pages; below is an example from the Hibernate documentation. I would have expected such a simple page to render without problems.

Not a nicely rendered web page

Not detected as MTP device under Linux

This could have been simpler: why can’t I just drag and drop my ebooks over? The reMarkable registers as a new networking device when you enable the web interface, so at least some non-cloud-based access is possible.

Nov 20 08:45:32 xps kernel: [ 5662.967832] usb 1- new high-speed USB device number 25 using xhci_hcd
Nov 20 08:45:32 xps kernel: [ 5663.085530] usb 1- New USB device found, idVendor=04b3, idProduct=4010, bcdDevice= 4.14
Nov 20 08:45:32 xps kernel: [ 5663.085537] usb 1- New USB device strings: Mfr=1, Product=2, SerialNumber=0
Nov 20 08:45:32 xps kernel: [ 5663.085540] usb 1- Product: RNDIS/Ethernet Gadget
Nov 20 08:45:32 xps kernel: [ 5663.085544] usb 1- Manufacturer: Linux 4.14.78 with 30b20000.usb
Nov 20 08:45:32 xps kernel: [ 5663.089960] cdc_ether 1- usb0: register 'cdc_ether' at usb-0000:00:14.0-, CDC Ethernet Device, d2:dc:08:36:a6:a9
Nov 20 08:45:32 xps mtp-probe: checking bus 1, device 25: "/sys/devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6.1/1-6.1.1/1-"
Nov 20 08:45:32 xps mtp-probe: bus: 1, device: 25 was not an MTP device

When you enable the web interface, browsing to the device’s address shows a primitive tool where you can upload and download documents. You cannot create folders for organizing the books and papers to your liking using this interface. There is no authentication, so better not activate this in an untrusted wifi. The web interface only works while the USB cable is connected; maybe this is the security concept.

Inconsistent UX

On the one hand, the reMarkable cannot be used without the pen. Yes, you can navigate and enter the settings, but it is impossible to mark anything or mark something quickly just by using your finger. On the other hand (pun intended) you cannot use hand written text for naming files or any other input. You have to use the virtual keyboard and this again works best with the pen.

The search only works on filenames, regardless of the file type. You cannot even search epub documents. This is very poor, and with handwritten notes you cannot find anything inside the documents. With this limitation, you might as well take a picture of your traditional notebook and send it to yourself.

No tags or bookmarks

There is no way to mark where in a document you stopped reading. The only alternative is a handwritten annotation, which is basically a digital dog-ear. You also cannot mark multiple pages and quickly jump to them. All you can do is mark an entire document as a favorite. Because the file organization is so bad, you will end up with a lot of favorites, rendering the feature useless again.

Adding custom tags to documents would be super useful, yet there is no such option.

Charging cable position

The charging port is at the bottom. I find this not ideal, because when the device is put on a desk like a regular notebook, the cable is in the way.

Send via email

The send via email functionality cannot customize the sender address; notes are always sent from a generic reMarkable address. If you want to send your notes directly to a friend or colleague, they will very likely end up in spam.

Using the Papercolor Theme in Vim and Zsh

I still spend a large amount of time in the terminal, where I mainly work with Zsh, Terminator and Vim. Having a uniform look makes switching between the environments smoother. The theme can easily be installed for Vim as described here, using your favorite plugin manager.

Plugin 'NLKNguyen/papercolor-theme'
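Installing the plugin alone does not switch the colors; the theme still needs to be activated in your ~/.vimrc. A minimal sketch following the plugin’s README:

```vim
set t_Co=256            " enable 256 colors in the terminal
set background=light    " PaperColor also ships a dark variant
colorscheme PaperColor
```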

How does it look in Vim?

Like this!

Vim with the Papercolor theme


In Terminator, you can simply create a new profile sporting the Papercolor theme colors by adding the following snippet to the configuration file ~/.config/terminator/config. The essential part is the palette with the different colors.

    [profiles]
      [[PaperColor]]
        background_color = "#eeeeee"
        cursor_color = "#aaaaaa"
        foreground_color = "#4d4d4c"
        palette = "#ededed:#d7005f:#718c00:#d75f00:#4271ae:#8959a8:#3e999f:#4d4d4c:#969694:#d7005f:#718c00:#d75f00:#4271ae:#8959a8:#34e2e2:#f5f5f5"

This will provide you with a Zsh like this:

Zsh with the Papercolor theme

A small hint for working with Integers in MySQL

Integer types in MySQL

A small hint that is not obvious when working with integers in MySQL: all integer types in MySQL have a fixed size. This size determines the lower and upper bound of the integer the field can store. Integers can be unsigned, which doubles the positive range, or signed, which splits the range between negative and positive values. In either case, the number M in a definition such as var INT(M) does not limit the stored or storable length at all. It is only used for display and to indicate to developers what length one would expect; it is not a technical constraint. Only the type is (TINYINT, SMALLINT, BIGINT, etc.).
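The type-dependent bounds follow directly from the number of bytes each type uses. A quick sanity check in Python (the helper name is mine):

```python
# A signed n-byte integer ranges from -(2**(8n-1)) to 2**(8n-1) - 1.
def signed_max(num_bytes: int) -> int:
    """Maximum value of a signed integer stored in num_bytes bytes."""
    return 2 ** (8 * num_bytes - 1) - 1

print(signed_max(1))  # TINYINT: 127
print(signed_max(4))  # INT:     2147483647
print(signed_max(8))  # BIGINT:  9223372036854775807
```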

An example using INT(1)

All integer types have a defined range, which you can find in the official documentation. The maximum value for a signed INT is 2147483647 and for a signed TINYINT it is 127. If we define an INT(1), the number in brackets only determines how many digits are displayed when the value is padded with zeros.

DROP TABLE my_numbers;
CREATE TABLE my_numbers (
    integerNumber INT(1),
    tinyIntegerNumber TINYINT(1)
);

# Max value for integers is 2147483647 and for TINYINT it is 127
INSERT INTO my_numbers(integerNumber, tinyIntegerNumber) VALUES (2147483647, 127);
SELECT * FROM my_numbers;
# Query OK, 1 row affected (0.005 sec)
# root@local> SELECT * FROM my_numbers;
# +---------------+-------------------+
# | integerNumber | tinyIntegerNumber |
# +---------------+-------------------+
# |    2147483647 |               127 |
# +---------------+-------------------+
# 1 row in set (0.000 sec)

As you can see, the table stored both maximum values although we used INT(1). The display width exists only so that the database, knowing how long a number typically is, can pad it with zeros (if it is told to do so).

Padding with zeros

In the following example, we will pad integers with zeros from the left. With ZEROFILL, the display width does have an effect in some clients. The official MySQL and the MariaDB command line clients display the leading zeros. Other clients like IntelliJ do not display them.

DROP TABLE IF EXISTS my_padded_numbers;
CREATE TABLE my_padded_numbers (
    integerNumber INT(64) ZEROFILL,
    tinyIntegerNumber TINYINT(8) ZEROFILL
);

# Max value for INT is 2147483647 and for TINYINT it is 127
INSERT INTO my_padded_numbers(integerNumber, tinyIntegerNumber) VALUES (123, 42);
SELECT * FROM my_padded_numbers;
# root@local> INSERT INTO my_padded_numbers(integerNumber, tinyIntegerNumber) VALUES (123, 42);
# Query OK, 1 row affected (0.004 sec)
# root@local> SELECT * FROM my_padded_numbers;
# +------------------------------------------------------------------+-------------------+
# | integerNumber                                                    | tinyIntegerNumber |
# +------------------------------------------------------------------+-------------------+
# | 0000000000000000000000000000000000000000000000000000000000000123 |          00000042 |
# +------------------------------------------------------------------+-------------------+
# 1 row in set (0.001 sec)


The M value for declaring integers has nothing to do with the storable length. This is different when working with VARCHAR, for instance, where the number in the brackets indeed defines the maximum capacity.
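Since the padding is pure display formatting, any client can reproduce it; Python's str.zfill, for instance, does exactly the same thing:

```python
# ZEROFILL padding is a display concern; the stored value is unchanged.
# str.zfill mimics what a client does when honoring the display width M.
print(str(42).zfill(8))    # 00000042 - like TINYINT(8) ZEROFILL above
print(str(123).zfill(10))  # 0000000123
```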

Automated encrypted incremental backups with restic to a Synology mount


The Synology devices are easy to set up but the backup applications are not that great. In addition, Synology does not offer SSH key based authentication for non-privileged accounts out of the box. After each upgrade the SSH key setup might be gone, and it always feels like a hack.

The following text describes how to set up painless, automated, encrypted, incremental backups via CIFS using restic. The script makes use of passwords stored in the Gnome keyring and thus can mount a CIFS aka Samba share without having to store the Synology user password in the /etc/fstab file or in the clear in a local file.


To quote the restic website, restic is a program that does backups right. The design goals are:

  • Easy: Doing backups should be a frictionless process, otherwise you are tempted to skip it. Restic should be easy to configure and use, so that in the unlikely event of a data loss you can just restore it. Likewise, restoring data should not be complicated.
  • Fast: Backing up your data with restic should only be limited by your network or hard disk bandwidth so that you can backup your files every day. Nobody does backups if it takes too much time. Restoring backups should only transfer data that is needed for the files that are to be restored, so that this process is also fast.
  • Verifiable: Much more important than backup is restore, so restic enables you to easily verify that all data can be restored.
  • Secure: Restic uses cryptography to guarantee confidentiality and integrity of your data. The location where the backup data is stored is assumed to be an untrusted environment (e.g. a shared space where others like system administrators are able to access your backups). Restic is built to secure your data against such attackers, by encrypting it with AES-256 in counter mode and authenticating it using Poly1305-AES.
  • Efficient: With the growth of data, additional snapshots should only take the storage of the actual increment. Even more, duplicate data should be de-duplicated before it is actually written to the storage backend to save precious backup space.
  • Free: restic is free software and licensed under the BSD 2-Clause License and actively developed on GitHub.

The prerequisites

We need the following things before we can start the backup:

  • A Synology user account
  • Restic installed on the local machine
  • The username and password stored in the Gnome keyring using the secret-tool
  • An initialized restic repository
  • The repository password stored in the Gnome keyring

The prerequisites are rather easy to set up. We need to add the username and password once to the Gnome keyring. We can use a label and some attributes. The attributes are used for finding the username and password again in the keyring.

secret-tool store --label="Synology username" synology username

Our user should also be able to mount the remote folder. One way to achieve this is by allowing the user to use sudo without a password for the mount and umount commands. This can be done by adding the following line to the sudoers file:

stefan ALL=(ALL) NOPASSWD: /usr/bin/mount, /usr/bin/umount, /usr/bin/mkdir, /usr/bin/chown

The restic commands are pretty self-explanatory.

restic -r ${SYNOLOGY_BACKUP_PATH} backup \
      ~/git \
      ~/.aliases \
      ~/.zshrc \
      ~/.bashrc \
      ~/.vimrc

In order to be mindful about the used storage, we will prune snapshots after a while.

# Prune older snapshots but keep 2 hourly, 7 daily and 4 weekly snapshots
restic -r ${SYNOLOGY_BACKUP_PATH} \
  forget \
  --prune \
  --keep-hourly 2 \
  --keep-daily 7 \
  --keep-weekly 4

The automation part is done by a classic cron job which runs for instance every 4 hours.

The prerequisites are described in the comments of the script below.

The backup script

The following script will backup the folders and files listed in the backup command.

#!/usr/bin/env bash
# exit when any command fails
set -e

# The following script assumes that you initialized the directory $SYNOLOGY_BACKUP_PATH
# on the Synology as a new restic repository:
# restic -r ${SYNOLOGY_BACKUP_PATH} init
# Make sure to add the username and the password first to the keyring
# secret-tool store --label="Synology username" synology username
# secret-tool store --label="Synology password" synology password
# Also add the restic password to the keyring
# secret-tool store --label="Restic password" restic password
# Add the mount and umount command without sudo password
# by using sudo visudo and adding this line (adapt the username)
# sproell ALL=(ALL) NOPASSWD: /usr/bin/mount, /usr/bin/umount, /usr/bin/mkdir, /usr/bin/chown
# Add a cron job which runs every 4 hours (for example)
# 0 */4 * * * DISPLAY=:0 /home/sproell/scripts/

# Adapt these variables to your environment
NAS_IP=""                  # IP address of the Synology
SYNOLOGY_PATH=""           # remote share, e.g. ${NAS_IP}/backup
BACKUP_MOUNT_TARGET=""     # local mount point for the CIFS share
SYNOLOGY_BACKUP_PATH=""    # restic repository inside the mounted share

LOCAL_USER_ID=$( id -u )
LOCAL_GROUP_ID=$( id -g )

isPathMounted() { findmnt -rno TARGET "$1" >/dev/null;}

echo "Current date: $( date )"
# Can I ping the NAS?
echo "Pinging the NAS 10 times"
ping -c 10 $NAS_IP > /dev/null && echo "up... proceeding" || ( echo "NAS down."; exit 1 )

# Get username and password from the keyring
SYNO_USER=$( secret-tool lookup synology username )
SYNO_PASS=$( secret-tool lookup synology password )

if isPathMounted "${BACKUP_MOUNT_TARGET}"; then
    echo "${BACKUP_MOUNT_TARGET} is mounted"
else
    # Check if the directory exists locally, otherwise create it
    [ -d ${BACKUP_MOUNT_TARGET} ] || sudo mkdir ${BACKUP_MOUNT_TARGET}
    sudo mount -t cifs \
      -o username=${SYNO_USER},password="${SYNO_PASS}",vers=2.0,uid=${LOCAL_USER_ID},gid=${LOCAL_GROUP_ID},noauto,user \
      "//${SYNOLOGY_PATH}/" \
      "${BACKUP_MOUNT_TARGET}"
fi

# Store the repository password in the environment variable
export RESTIC_PASSWORD=$( secret-tool lookup restic password )

# Check if the remote directory exists, otherwise create it
[ -d ${SYNOLOGY_BACKUP_PATH} ] || ( \
  sudo mkdir ${SYNOLOGY_BACKUP_PATH}; \
  sudo chown -R ${LOCAL_USER_ID}:${LOCAL_GROUP_ID} ${SYNOLOGY_BACKUP_PATH} )

restic -r ${SYNOLOGY_BACKUP_PATH} snapshots
restic -r ${SYNOLOGY_BACKUP_PATH} backup \
  ~/git \
  ~/.aliases \
  ~/.zshrc \
  ~/.bashrc \
  ~/.vimrc

# Prune older snapshots but keep 2 hourly, 7 daily and 4 weekly snapshots
restic -r ${SYNOLOGY_BACKUP_PATH} \
  forget \
  --prune \
  --keep-hourly 2 \
  --keep-daily 7 \
  --keep-weekly 4

if isPathMounted "${BACKUP_MOUNT_TARGET}"; then
    sudo umount ${BACKUP_MOUNT_TARGET}
fi
unset SYNO_PASS

Connecting to MySQL 5.6 using TLSv1 on Ubuntu 20.04.1

Ubuntu 20.04 updated some security policies, which has the effect that some connections might not work anymore. This happened recently, for instance, with connections to AWS Aurora 5.6.


AWS Aurora 5.6 only supports TLSv1 for encrypted connection.

This TLS protocol version has been considered insecure for some time now and has been disabled in Ubuntu 20.04 and MySQL Client 8 respectively. If the MySQL configuration permits TLSv1, TLSv1.1, and TLSv1.2, but your host system configuration permits only connections that use TLSv1.3 or higher, you cannot establish MySQL connections at all, because no protocol permitted by MySQL is permitted by the host system. The problem manifests itself in the following error:

ERROR 2026 (HY000): SSL connection error: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
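As a quick diagnostic (not part of the fix), you can inspect which OpenSSL build and which default minimum TLS version your local stack uses, for instance from Python:

```python
import ssl

# Show the OpenSSL version Python links against and the minimum TLS
# version of a default client context. On Ubuntu 20.04 the system-wide
# OpenSSL policy raises this minimum, which is why TLSv1-only servers
# such as Aurora 5.6 are rejected.
print(ssl.OPENSSL_VERSION)

context = ssl.create_default_context()
print(context.minimum_version)
```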

You can check the supported TLS versions on the MySQL server via the tls_version system variable (SHOW GLOBAL VARIABLES LIKE 'tls_version'; available in MySQL 5.7 and later).
Specifying the protocol version explicitly did not help:

mysql --defaults-extra-file=~/.aws_mysql_aurora --tls-version=TLSv1

The bug report is also reflected in the official docs, but simply copying the suggested fix won't do.

Example of the problem

The following Python snippet throws a protocol error with Ubuntu 20.04.

import mysql.connector
from mysql.connector import ClientFlag

config = {
    'user': 'me',
    'password': 'secret',
    'host': '',
    'port': '3306',
    'database': 'sakila',
    'raise_on_warnings': True,
    'client_flags': [ClientFlag.SSL]
}

cnx = mysql.connector.connect(**config)
cur = cnx.cursor(buffered=True)
cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")

The error thrown by Python is

mysql.connector.errors.InterfaceError: 2055: Lost connection to MySQL server at '', system error: 1 [SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1108)

The quick fix

It is possible to lower the TLS version requirements in the openssl config of Ubuntu. But in order for this to work with Aurora 5.6, you need to lower the TLS version to TLSv1. This can be achieved by adapting the OpenSSL settings in /etc/ssl/openssl.cnf.

First add a default section on top of the file:

openssl_conf = default_conf

and then at the end of the file add:

[ default_conf ]

ssl_conf = ssl_sect

[ ssl_sect ]

system_default = system_default_sect

[ system_default_sect ]

MinProtocol = TLSv1
MaxProtocol = None
CipherString = DEFAULT:@SECLEVEL=1

This lowers the allowed TLS version to TLSv1 again. Now the Python script from above can be executed.

Proper fix

The configuration above can also be applied to the current script only, instead of the whole operating system. This is of course the wiser plan and should therefore be preferred. In order to use TLSv1 with Python you can:

  1. Create a virtual environment with proper versions for the relevant packages
  2. Load the openssl configuration from above as an environment file

Requirements for Python

The following dependencies can be defined in a requirements.txt file.


Virtual Environment

You can also use the following snippet for a Makefile to create the virtual environment. My colleague Jonas suggested the following code:

venv: requirements.txt
	test -d venv || python3 -m venv venv
	venv/bin/pip3 install --upgrade pip setuptools
	venv/bin/pip3 install -Ur requirements.txt
	touch venv/bin/activate

Environment Variables in the Terminal

In order to connect you need to set the following environment variables. Make sure to use a full path for the openssl.cfg file. You can write those variables into a file called .venv and then source it: source .venv. Note that this is obviously sensitive data.

export OPENSSL_CONF=/full/path/to/config/openssl.cfg
export DB_HOST=
export DB_PORT=3306
export DB_USER=alice
export DB_PASSWORD=
export DB_NAME=sakila

Environment Variables in IntelliJ

The same method also works when you set the environment variables in the run / debug configuration of IntelliJ. You need to make sure that you use the right venv as interpreter for the project.

  1. Create a new virtual environment venv using make venv
  2. Set this environment as the interpreter of this project: File –> Project Structure –> Project SDK
  3. Create a new run / debug configuration and add the environment variables from above
  4. Make sure the run configuration uses the SDK

Python Example

Then you can use the following snippet.

import logging
import os

import mysql.connector
import pandas as pd
from mysql.connector.constants import ClientFlag

    format='%(asctime)s %(levelname)-8s %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')

# The SELECT clause of the original query was lost; this is a
# reconstruction against the sakila actor table
sql_query = """
    SELECT first_name, last_name FROM actor
    ORDER BY actor_id DESC
"""

def get_connection_config():
    """Read the connection settings from the environment variables
    :return: db_config_dict
    """
    if os.getenv('DB_PASSWORD') is not None:
        mysql_config = {
            'host': os.getenv('DB_HOST'),
            'port': os.getenv('DB_PORT'),
            'user': os.getenv('DB_USER'),
            'password': os.getenv('DB_PASSWORD'),
            'database': os.getenv('DB_NAME'),
            'client_flags': [ClientFlag.SSL]
        }
        return mysql_config
    else:
        print("You need to set the env variables")

if __name__ == "__main__":
    mysql_config = get_connection_config()

    # Use a cursor object
    # You can retrieve data by using a cursor object and iterate over the results.
    # Close cursors and connections when done.
    mysql_connection = mysql.connector.connect(**mysql_config)
    cursor = mysql_connection.cursor()
    cursor.execute(sql_query)
    for (first_name, last_name) in cursor:"Actor: {}".format(last_name))
    cursor.close()

    # Use Pandas for retrieving data
    # The more convenient way of retrieving data is to use Pandas.
    # It will return a data frame and you can easily paginate large result sets in a loop.
    mysql_connection = mysql.connector.connect(**mysql_config)
    for chunk in pd.read_sql_query(con=mysql_connection, sql=sql_query, chunksize=5):"last_name: {}".format(chunk['last_name']))
    mysql_connection.close()


You can find the code also at my Github repository.


If the hack above does not work, downgrading the MySQL client to version 5.7 will help. I downloaded the bundle from here and unpacked it. Then I installed the following packages:

sudo apt-get install libtinfo5 libaio1
sudo dpkg -i mysql-common_5.7.31-1ubuntu18.04_amd64.deb
sudo dpkg -i mysql-community-client_5.7.31-1ubuntu18.04_amd64.deb

Then I could connect again without any extra settings and flags.

Update 2020-10-14

The workaround stopped working for some reason. I then found the trick described here, which offers a temporary fix. It uses a local configuration file for OpenSSL. This file can then be used for single commands by prefixing the variable. Save the configuration below in a file, for instance ~/.openssl_allow_tls1.0.cnf.

openssl_conf = openssl_init

[ openssl_init ]

ssl_conf = ssl_sect

[ ssl_sect ]

system_default = system_default_sect

[ system_default_sect ]

MinProtocol = TLSv1
CipherString = DEFAULT:@SECLEVEL=1
and then export this in a terminal export OPENSSL_CONF=~/configurations/ssl/openssl_allow_tls1.0.cnf.

Update 2020-10-19

If you need to update your system, make sure to hold the installed MySQL 5.7 version of the client. You can see which version would be installed if you upgraded now:

$ apt-cache policy mysql-community-client mysql-common
mysql-community-client:
  Installed: 5.7.31-1ubuntu18.04
  Candidate: 8.0.22-1ubuntu20.04
  Version table:
     8.0.22-1ubuntu20.04 500
        500 focal/mysql-8.0 amd64 Packages
 *** 5.7.31-1ubuntu18.04 100
        100 /var/lib/dpkg/status
mysql-common:
  Installed: 5.7.31-1ubuntu18.04
  Candidate: 8.0.22-1ubuntu20.04
  Version table:
     8.0.22-1ubuntu20.04 500
        500 focal/mysql-8.0 amd64 Packages
     5.8+1.0.5ubuntu2 500
        500 focal/main amd64 Packages
        500 focal/main i386 Packages
 *** 5.7.31-1ubuntu18.04 100
        100 /var/lib/dpkg/status

Then you can pin the version to 5.7 using a wildcard by adding the packages to /etc/apt/preferences.

Package: mysql-common
Pin: version 5.7.*
Pin-Priority: 1001

Package: mysql-community-client
Pin: version 5.7.*
Pin-Priority: 1001

Package: libmysqlclient21
Pin: version 5.7.*
Pin-Priority: 1001

Use an SSH tunnel ending at the host also within a Docker container

There are many use cases for SSH tunnels as they allow accessing resources behind firewalls and other obstacles over a secure channel. Since more and more services are containerized, it makes sense to use SSH tunnels also within the context of containers, especially for testing. Using SSH tunnels within Docker containers would require installing an SSH client and mounting keys. In many cases this is not possible without building a new Docker image which includes the client. As this is a cumbersome approach, an easy but insecure solution exists, which is recommended in many tutorials or posts on StackOverflow. This fix makes use of the --net=host flag, which allows accessing all ports of the host - also open SSH tunnels. But for obvious reasons, this is dangerous.

A better approach is to bind the SSH tunnel to the bridge network of the Docker service. This bridge is available to all the containers connected to the particular network and thus can also forward specific ports. This technique gives much more fine-grained control over which containers may access the tunnel.

You can list the bridges with ifconfig.

br-b273916af970: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether dd:bb:aa:cc:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 205  bytes 22043 (22.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-c92ab5650a7a: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether aa:bb:cc  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether bb:aa:cc:aa:bb:  txqueuelen 0  (Ethernet)
        RX packets 3919  bytes 227485 (227.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3205  bytes 8586636 (8.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

You can find out the bridge a container uses with docker inspect «container». The default bridge is called docker0. You then need to enable packet forwarding to this bridge for IP tables. Note that the change below is not persisted, you need to do it again after reboot or add it permanently.

sudo iptables -I INPUT 3 -i docker0 -j ACCEPT

After this step you can open an SSH tunnel on the host and also use it inside your container. This way you do not have to install SSH clients, keys etc. in the container. The trick is to bind the SSH connection to the right interface. Below you can see an example command which allows connecting to a MySQL database via a bastion host, a typical scenario for cloud services.

ssh -L

Then you can access the forwarded port within the Docker container on the same IP / interface, e.g. This way you can, for instance, use the Percona PMM container for monitoring your cluster also on your local machine, without having to deploy it and expose it via the web.

Setting up an SSH tunnel in IntelliJ

There seems to be a problem with the older MySQL driver and IntelliJ Ultimate. When I tried to connect to a remote MySQL instance using a tunnel, IntelliJ refused to connect and showed an error similar to the following:

Cannot connect to a database. Tried three times.

The linked online help page was not really helpful. I could create the tunnel manually and verify the settings; all seemed fine. As a next step I inspected the MySQL error log on the remote server and noticed that errors related to my connection attempts were showing up. Thus I knew at least that the connection issue was not caused by SSH, but rather seemed to be related to MySQL.

I then upgraded the MySQL driver and the connection worked out of the box. I use an SSH config section similar to this:

Host remote-mysql-database-server
	User bob
	IdentityFile ~/.ssh/rsa_id

This can then be used in IntelliJ. For more complex setups, for instance when you need to connect to AWS RDS and have to use a bastion host, I found it easier to set up the connection details also in the SSH config file and keep the IntelliJ configuration simple.
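As a sketch of such a bastion setup (all hostnames and the local port here are made-up placeholders, not from the original setup), a config entry can forward a local port to the RDS endpoint through the bastion:

```
Host rds-tunnel
	HostName bastion.example.com
	User bob
	IdentityFile ~/.ssh/rsa_id
	LocalForward 13306 mydb.example.rds.amazonaws.com:3306
```

Running ssh -N rds-tunnel then lets IntelliJ connect to as if it were the database itself.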

IntelliJ MySQL settings

MySQL Driver

SSH settings

Demystifying MySQL User Management (Part 1)

Managing user accounts can be a bit confusing when working with MySQL. The two main commands in that context are CREATE USER and GRANT. Obviously, CREATE USER is for adding a new user while GRANT is used for assigning permissions.

Prerequisites: Docker Compose

For testing access to a MySQL instance, we can use Docker for running said instance and two different client containers. The docker-compose snippet below orchestrates those three containers and puts them on a shared network.

version: '3'
services:
  mysql-8-server:
    image: mysql:8
    container_name: mysql-8-server
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      # allows root to connect with an empty password (see text)
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
      MYSQL_DATABASE: 'world'
    ports:
      - '3308:3306'
    expose:
      - '3306'
    volumes:
      - my-data-volume:/var/lib/mysql
      - ./demo_data/world.sql:/docker-entrypoint-initdb.d/world.sql:ro
  client-1:
    image: alpine-mysql:latest
    container_name: client-1
    restart: unless-stopped
    command: tail -f /dev/null
  client-2:
    image: alpine-mysql:latest
    container_name: client-2
    restart: unless-stopped
    command: tail -f /dev/null
volumes:
  my-data-volume:
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet:

The docker-compose script is pretty self-explanatory. We use a MySQL 8 container that allows connecting with the root user and an empty password. We persist the data of the MySQL database in a separate data volume called my-data-volume and we also load the example database called world, which we downloaded before.

In order to have a nice and simple image for the clients, we can use the Alpine image and pre-install the MySQL client inside it, by using the following Dockerfile.

FROM alpine:3.7
RUN apk add --no-cache mysql-client

You need to build it once on your host with the command

docker build --tag alpine-mysql:latest .

After starting the containers with docker-compose up, we end up with three containers:

  • mysql-8-server
  • client-1
  • client-2

MySQL User Accounts, Access and Permissions

In this blog post, I would like to highlight some issues that arise when you add accounts for users that can connect from different hosts. Our docker container accepts connections for root from any host. In the wild, this is of course bad practice and we are going to change that as soon as we have added a second account.

Connect to the MySQL server by starting the MySQL client inside the container client-1 and point it to the right host by providing the hostname of the database server. Docker resolves the container names for us nicely, because all three containers are inside the same network. In order to improve readability, we also change the prompt so that it is more obvious from which client we are connecting.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >

The Root User

Let's make our example more secure by allowing root to connect only from the machine where the database instance is running. Still connected from client-1 as root, we can change the permissions. But first, let's have a look at which users exist anyway.

Client 1 >SELECT User, Host, authentication_string  FROM mysql.user\G
*************************** 1. row ***************************
                 User: root
                 Host: %
*************************** 2. row ***************************
                 User: mysql.infoschema
                 Host: localhost
*************************** 3. row ***************************
                 User: mysql.session
                 Host: localhost
*************************** 4. row ***************************
                 User: mysql.sys
                 Host: localhost
*************************** 5. row ***************************
                 User: root
                 Host: localhost
5 rows in set (0.00 sec)

As we can see, there are five preinstalled accounts. The users mysql.infoschema, mysql.session and mysql.sys are internal accounts that cannot be used for client connections. Do not delete them though; they are needed for internal tasks. Then there are two root accounts: ‘root’@’%’ and ‘root’@’localhost’. The first one is the problematic one, because with the wildcard ‘%’, clients can connect from any remote computer. We definitely do not want that.

So let's drop it:

DROP USER 'root'@'%';

When you now run the SELECT query again, you will notice that it still works, even after you run FLUSH PRIVILEGES. The reason is that the server verifies the connection settings only the next time the client connects. So let's exit the session and try to connect again.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server

As expected, we cannot connect from a remote server anymore. Let's log in from within the MySQL container instead.

docker exec -it mysql-8-server  mysql -uroot -h localhost --prompt "Server >"
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Server >

Note how we are now connecting to the mysql-8-server and using localhost as the hostname.

Localhost vs

In the example above we connected using localhost as the hostname. When we try the same with the IP address of the loopback interface, which is often seen as synonymous, it does not work anymore.

docker exec -it mysql-8-server  mysql -uroot -h --prompt "Server >"
ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server

This is confusing, after all (after using apt-get install inetutils-ping for installing the tool inside the MySQL container), we can verify that localhost resolves to
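The same resolution can be checked without extra tools, for example with Python's socket module:

```python
import socket

# Ask the system resolver for the addresses behind "localhost".
# On most systems this includes (and possibly ::1 for IPv6).
addresses = {info[4][0] for info in socket.getaddrinfo("localhost", 3306)}
print(addresses)
```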

But there is more to that. First of all, MySQL can use hostname resolution and would make a DNS request to get the IP address from a hostname. As this can be pretty slow, it is turned off by default, as you can verify like this:

Server >show variables like 'skip_name_resolve';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| skip_name_resolve | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

Furthermore, localhost is a special hostname for MySQL. You can connect to MySQL either by using a TCP connection or a file socket. The file socket only works on the same machine the server is running on, and it is the faster method. This is why MySQL treats connections coming from localhost as socket connections by default. If you want to use, you can add this as a second host for the existing user using GRANT.

Server >GRANT ALL PRIVILEGES ON *.* TO 'root'@'';
ERROR 1410 (42000): You are not allowed to create a user with GRANT

Now that's weird. After all, we are root and should be allowed to do anything, but we see the error message above. The reason is that there does not exist an account for ‘root’@’’ yet.

Server >CREATE USER 'root'@'' IDENTIFIED BY 'super-secret';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT ALL PRIVILEGES ON *.* to 'root'@'';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT GRANT OPTION ON *.* to 'root'@'';
Query OK, 0 rows affected (0.08 sec)

You always need to create a user account first before setting permissions with GRANT.

While we are at it, we can also set a password for root. Note how we set the password for both root accounts. Accounts are always identified with ‘username’@’hostname’.

Server >ALTER USER 'root'@'localhost' IDENTIFIED BY 'super-secret';
Server >ALTER USER 'root'@'' IDENTIFIED BY 'super-secret';

Adding Users

Let's log on with our new credentials again, create a new user called bob and add some permissions for the world database.


What looks pretty harmless adds an account for the user called bob. Without specifying a host, MySQL simply adds the wildcard for hosts and therefore allows connecting to the MySQL instance from any IP address. You can verify it with the following command, which I am going to use more often in the course of this post.

Server >SHOW GRANTS FOR 'bob';
+---------------------------------+
| Grants for bob@%                |
+---------------------------------+
| GRANT USAGE ON *.* TO `bob`@`%` |
+---------------------------------+
1 row in set (0.00 sec)

That's not what we had in mind. Let's delete bob's account again and create it with a proper host.

DROP USER 'bob';
CREATE USER 'bob'@'' IDENTIFIED BY 'secret';

Note how we limited the account to allow connections only from Client 1, by providing its IP. You can't connect from Client 2.

stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-1 mysql -ubob -h mysql-8-server -psecret --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >exit
stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-2 mysql -ubob -h mysql-8-server -psecret --prompt "Client 2 >"
ERROR 1045 (28000): Access denied for user 'bob'@'' (using password: YES)

These are the very basics of MySQL account management. In a future post I plan to go into more detail.

Percona Xtrabackup – Some General Tips

When you stream Xtrabackup to a gzipped tar file, it also contains the information about the binary log position the server was at when the backup was started. This information is needed to start the slave at the same position again.

time nice -n 15 xtrabackup --backup --slave-info --safe-slave-backup --stream=tar | gzip -c > /var/backup/percona.dump.tar.gz

In order to get that position without extracting the archive file, you can use the following command

tar -xOf /var/backup/percona.dump.tar.gz xtrabackup_binlog_info

You can list all files in the archive like this:

tar -ztvf percona.dump.tar.gz

Note that both operations can take quite some time and resources.

Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing a database multiple times. Although most developer machines offer SSDs and enough RAM, database initialization can nevertheless consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can utilize this feature to speed up disk-heavy operations such as database imports, by moving the data from the disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.22033 s, 484 MB/s

For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs

$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.309017 s, 3.5 GB/s

As you can see, writing 1GB to memory is 7x faster. With the following docker run command, you can spin up a default MySQL container where the data directory resides in a tmpfs.

```
docker run --rm -d \
  --name mysql-56-in-memory \
  -p 3307:3306 \
  --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
  -e MYSQL_DATABASE=testdata \
  mysql:5.6
```

The arguments of docker run mean the following:

  * --rm: delete the container once it is stopped
  * --name: a name for the container
  * -p: map the host's port 3307 to port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to them from the host via the specified port
  * --tmpfs: mounts the container's data directory onto a RAM disk. It should be writeable (rw). noexec prevents the execution of binaries, nosuid prevents changing the permission flags, and size specifies the amount of memory occupied by the tmpfs partition. Adapt this to your use case; the minimum for MySQL is around 200MB, plus the space needed for your data, indices etc.
  * MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  * MYSQL_DATABASE defines the name of a database to be created

If you run this command you can connect to the container like this: _mysql -u root -h 127.0.0.1 -P 3307_

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk, but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.

You can achieve the same with Docker Compose if you would like to orchestrate multiple containers.

```
version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD='true'
      - MYSQL_HOST=''
    volumes:
      - data:/var/cache
    ports:
      - "3307:3306"

volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs
```

Measure Replication Delay with Percona Toolkit Heartbeat (pt-heartbeat)

Replication lag occurs when the slave cannot keep up with the changes being made on the master instance. The reasons for a lag can be network congestion, poor performance on the slave or service interruptions of the replication mechanism, leaving a large pile of work to be caught up by the slave. When you use SHOW SLAVE STATUS you can monitor the current delay, which is indicated in the Seconds_Behind_Master column. Ideally this value is zero, but the smallest granularity you get for this delay counter is one second.

Percona offers a tool that allows measuring the delay more accurately, by writing timestamps into the master database and calculating the difference when they arrive at the slave: [pt-heartbeat][1]

On the Master

Create a dedicated database schema called percona and add the following table.

```
CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (
    ts                    varchar(26) NOT NULL,
    server_id             int unsigned NOT NULL PRIMARY KEY,
    file                  varchar(255) DEFAULT NULL,    -- SHOW MASTER STATUS
    position              bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS
    relay_master_log_file varchar(255) DEFAULT NULL,    -- SHOW SLAVE STATUS
    exec_master_log_pos   bigint unsigned DEFAULT NULL  -- SHOW SLAVE STATUS
);
```
Then you can launch the script on the master, where it will write new timestamps periodically as long as the script is running.

```
pt-heartbeat -D percona --update --utc h=,u=root,p=SECRET,P=3306
```

The -D flag specifies the database schema where the table for the heartbeat data is stored. The --update flag is needed on the master for updating the table, and the last part is the DSN, which specifies the host address, user name and password. Very important is the --utc flag, which ensures that the timestamps will be interpreted as UTC, regardless of the timezone setting.

On the Slave

Create a user for reading the replicated heartbeat table like this:
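A minimal sketch of such a read-only user, assuming the `percona_checksum` name used in the monitor command below (the host pattern and the password are placeholders):

```sql
-- Placeholder host pattern and password; the user only needs to
-- read the heartbeat table
CREATE USER 'percona_checksum'@'%' IDENTIFIED BY 'SECRET';
GRANT SELECT ON percona.heartbeat TO 'percona_checksum'@'%';
FLUSH PRIVILEGES;
```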


Then you can run the script and point it to the slave. It will output precise delay values in fractions of seconds.

```
pt-heartbeat h=,u=percona_checksum,p=SECRET,P=3306 -D percona --monitor --utc --master-server-id 1
```

Notice the different DSN, the --monitor flag and the --master-server-id, which of course needs to match the server ID of your master. This is required because the tool supports hierarchies of masters and needs to know against which one the delay should be measured.


The results will look similar to this:

```
0.09s [  0.00s,  0.00s,  0.00s ]
0.02s [  0.20s,  0.00s,  0.00s ]
0.09s [  0.00s,  0.00s,  0.00s ]
0.03s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.10s [  0.01s,  0.00s,  0.00s ]
0.12s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.11s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.03s,  0.01s,  0.00s ]
```

The output is the current delay followed by moving averages over 1m, 5m and 15m, as you might know them from your favorite OS already.
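Because the output is plain text, it is easy to feed into simple alerting. A sketch of parsing one captured line; the line and the 0.5 second threshold are made-up examples:

```shell
# One captured line of pt-heartbeat monitor output (example value)
line='0.12s [  0.02s,  0.00s,  0.00s ]'

# Extract the current delay: first column, with the trailing "s" stripped
delay=$(printf '%s\n' "$line" | awk '{ sub(/s$/, "", $1); print $1 }')

# Compare against an example threshold of 0.5 seconds
if awk -v d="$delay" 'BEGIN { exit !(d < 0.5) }'; then
  echo "replication healthy ($delay s)"
else
  echo "replication lagging ($delay s)"
fi
```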

Have a look at the [official documentation][1]; there are a lot of options available.


Setup AWS MySQL 5.6 Aurora as a Slave for an external Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secured connection. For this reason we need to create client certificates to be used by AWS Aurora. The steps below should work for plain RDS as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the source. Also note that you need to have different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First we need to create the certificate authority that can sign the keys:

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN name. It needs to be different for all of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair:

# Generate a server key pair. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients.

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart. The paths below are examples; adapt them to where you store the generated files.

# SSL Server Certificate
ssl-ca=/etc/mysql/certs/ca.pem
ssl-cert=/etc/mysql/certs/server-cert.pem
ssl-key=/etc/mysql/certs/server-key.pem

Then create a user for the slave
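A minimal sketch of such a replication user that requires SSL; the user name, host pattern and password are placeholders:

```sql
-- Placeholder name, host pattern and password for the replication user
CREATE USER 'aurora_repl'@'%' IDENTIFIED BY 'SECRET';
GRANT REPLICATION SLAVE ON *.* TO 'aurora_repl'@'%' REQUIRE SSL;
FLUSH PRIVILEGES;
```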


Slave Setup

On AWS you do not have SUPER privileges, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, these calls might return an error.

```
CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;
```

Now you need to pass the client certificate data as JSON to AWS Aurora.

```
CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE-----\nMIIBAgMBVRpcm9sMRIw…\n-----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----\nKAoIBAQCzn28awhyN8V56Z2bskCiMhJt4…\n-----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----\nSbeLNsRzrPoCVGGqwqR6gE6AZu…\n-----END RSA PRIVATE KEY-----\n"}');
```

A message that the SSL data was accepted will appear if you pasted the certificate, the key and the CA certificate correctly.
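If the slave has not been pointed at the external master yet, Aurora also needs its coordinates. A sketch using the mysql.rds_set_external_master procedure; host, user, password and the binlog coordinates are placeholders that must match your master and the position recorded when the backup was taken, and the final flag enables SSL:

```sql
CALL mysql.rds_set_external_master (
  'master.example.com'   -- placeholder: host of the external master
, 3306                   -- port
, 'aurora_repl'          -- placeholder: replication user
, 'SECRET'               -- placeholder: password
, 'mysql-bin.000042'     -- placeholder: binlog file on the master
, 120                    -- placeholder: binlog position
, 1                      -- 1 = use SSL
);
```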

Finally, start the replication and check the status:

```
CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G
```

Tests and Troubleshooting

On the master, you can check if the slave even tries to connect, for instance with tcpdump. In the example below, the source IP would be the AWS gateway address as seen by your firewall.

```
sudo tcpdump src host -vv
```
