General

Demystifying MySQL User Management (Part 1)

Managing user accounts can be a bit confusing when working with MySQL. The two main commands in that context are CREATE USER and GRANT: CREATE USER adds a new account, while GRANT assigns permissions to it.
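
Both commands will show up throughout this post. As a minimal sketch (the account and schema names here are purely hypothetical placeholders, not part of the setup below), the usual pattern is to create the account first and then grant it privileges:

-- Hypothetical account and schema, for illustration only
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'example-password';
GRANT SELECT ON exampledb.* TO 'appuser'@'localhost';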

Prerequisites: Docker Compose

To test access to a MySQL instance, we can use Docker to run the instance itself plus two different client containers. The docker-compose snippet below orchestrates these three containers and puts them on a shared network.

version: '3'
services:
  mysql-db:
    image: mysql:8
    container_name: mysql-8-server
    restart: unless-stopped
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_DATABASE: 'world'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'      
    ports:
      - '3308:3306'
    expose:
      - '3306'      
    volumes:
      - my-data-volume:/var/lib/mysql
      - ./demo_data/world.sql:/docker-entrypoint-initdb.d/world.sql:ro
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.2
  mysql-client-1:
    image: alpine-mysql:latest
    container_name: client-1
    restart: unless-stopped
    command: tail -f /dev/null
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.3
  mysql-client-2:
    image: alpine-mysql:latest
    container_name: client-2
    restart: unless-stopped
    command: tail -f /dev/null
    networks:
      mysql-test-net:
        ipv4_address: 172.28.1.4
volumes:
  my-data-volume:
networks:
    mysql-test-net:
        ipam:
            driver: default
            config:
                - subnet: 172.28.0.0/16

The docker-compose file is pretty self-explanatory. We use a MySQL 8 container that allows connecting as the root user with an empty password. We persist the data of the MySQL database in a separate data volume called my-data-volume, and we also load the example database called world, which we downloaded before.

In order to have a nice and simple image for the clients, we can use the Alpine image and pre-install the MySQL client inside it, using the following Dockerfile.

FROM alpine:3.7
RUN apk add --no-cache mysql-client

You need to build it once on your host with the command

docker build --tag alpine-mysql:latest .

After starting docker-compose with docker-compose up, we end up with three containers:

  • mysql-8-server
  • client-1
  • client-2

MySQL User Accounts, Access and Permissions

In this blog post, I would like to highlight some issues that arise when you add accounts for users that can connect from different hosts. Our Docker container accepts connections for root from any host. In the wild, this is of course bad practice, and we are going to change that as soon as we have added a second account.

Connect to the MySQL server by starting the MySQL client inside the container client-1 and point it to the right host by providing the hostname of the database server. Docker resolves the container names for us nicely, because all three containers are inside the same network. To improve readability, we also change the prompt so that it is more obvious which client we are connecting from.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >

The Root User

Let's make our example more secure by allowing root to connect only from the machine where the database instance is running. While still connected from client-1 as root, we can change the permissions. But first, let's have a look at which users exist anyway.

Client 1 >SELECT User, Host, authentication_string  FROM mysql.user\G
*************************** 1. row ***************************
                 User: root
                 Host: %
authentication_string: 
*************************** 2. row ***************************
                 User: mysql.infoschema
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 3. row ***************************
                 User: mysql.session
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 4. row ***************************
                 User: mysql.sys
                 Host: localhost
authentication_string: $A$005$THISISACOMBINATIONOFINVALIDSALTANDPASSWORDTHATMUSTNEVERBRBEUSED
*************************** 5. row ***************************
                 User: root
                 Host: localhost
authentication_string: 
5 rows in set (0.00 sec)

As we can see, there are five preinstalled accounts. The users mysql.infoschema, mysql.session and mysql.sys are internal accounts that cannot be used for client connections. Do not delete them though, they are needed for internal tasks. Then there are two root accounts: 'root'@'%' and 'root'@'localhost'. The first one is the problematic one, because with the wildcard '%', clients can connect from any remote computer. We definitely do not want that.

So let's drop it:

DROP USER 'root'@'%';

When you now run the SELECT query again, you will notice that our session still works, even after running FLUSH PRIVILEGES. The reason is that the server verifies the connection settings only the next time a client connects. So let's exit the session and try to connect again.

$ docker exec -it client-1 mysql -uroot -h mysql-8-server --prompt "Client 1 >"
ERROR 1130 (HY000): Host '172.28.1.3' is not allowed to connect to this MySQL server

As expected, we cannot connect from a remote machine anymore. Let's log in from within the MySQL container instead.

docker exec -it mysql-8-server  mysql -uroot -h localhost --prompt "Server >"
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Server >

Note how we are now running the client inside the mysql-8-server container and using localhost as the hostname.

Localhost vs 127.0.0.1

In the example above we connected using localhost as the hostname. When we try the same with the IP address 127.0.0.1 of the loopback interface, which is often seen as synonymous, it does not work anymore.

docker exec -it mysql-8-server  mysql -uroot -h 127.0.0.1 --prompt "Server >"
ERROR 1130 (HY000): Host '127.0.0.1' is not allowed to connect to this MySQL server

This is confusing. After all, we can verify that localhost resolves to 127.0.0.1 (after installing the ping utility inside the MySQL container with apt-get install inetutils-ping).

But there is more to it. First of all, MySQL can use hostname resolution and would make a DNS request to get the IP address for a hostname. As this can be pretty slow, it is turned off by default, as you can verify like this:

Server >show variables like 'skip_name_resolve';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| skip_name_resolve | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

Furthermore, localhost is a special hostname for MySQL. You can connect to MySQL either via a TCP connection or via a file socket. The file socket only works on the same machine the server is running on, and it is the faster method. This is why MySQL treats connections coming from localhost as socket connections by default. If you want to use 127.0.0.1, you can add this as a second host for the existing user using GRANT.

Server >GRANT ALL PRIVILEGES ON *.* TO 'root'@'127.0.0.1';
ERROR 1410 (42000): You are not allowed to create a user with GRANT

Now that's weird. After all, we are root and should be allowed to do anything, but we see the error message above. The reason is that there does not exist an account for 'root'@'127.0.0.1' yet.

Server >CREATE USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT ALL PRIVILEGES ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.07 sec)

Server >GRANT GRANT OPTION ON *.* to 'root'@'127.0.0.1';
Query OK, 0 rows affected (0.08 sec)

You always need to create a user account first before setting permissions with GRANT.

While we are at it, we can also set a password for root. Note how we set the password for both root accounts, since accounts are always identified as 'username'@'hostname'.

Server >ALTER USER 'root'@'localhost' IDENTIFIED BY 'super-secret';
Server >ALTER USER 'root'@'127.0.0.1' IDENTIFIED BY 'super-secret';
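
Since accounts are always identified by the combination of user and host, it can be handy to check which account entry the server actually matched for the current session. This is plain standard MySQL, nothing specific to this setup:

SELECT USER(), CURRENT_USER();

USER() reports the user and host the client claimed when connecting, while CURRENT_USER() shows the account the server actually used for authentication, e.g. 'root'@'localhost' versus 'root'@'127.0.0.1'.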

Adding Users

Let's log on with our new credentials again, create a new user called bob and add some permissions for the world database.

CREATE USER 'bob' IDENTIFIED BY 'secret';

What looks pretty harmless adds an account for the user called bob. Without specifying a host, MySQL simply adds the wildcard for hosts and therefore allows connecting to the MySQL instance from any IP address. You can verify this with the following command, which I am going to use more often in the course of this post.

Server >SHOW GRANTS FOR 'bob';
+---------------------------------+
| Grants for bob@%                |
+---------------------------------+
| GRANT USAGE ON *.* TO `bob`@`%` |
+---------------------------------+
1 row in set (0.00 sec)

That's not what we had in mind. Let's delete bob's account again and create it with a proper host.

DROP USER 'bob';
CREATE USER 'bob'@'172.28.1.3' IDENTIFIED BY 'secret';

Note how we limited the account to allow connections only from Client 1, by providing its IP address. You can't connect from Client 2.

stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-1 mysql -ubob -h mysql-8-server -psecret --prompt "Client 1 >"
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.12 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Client 1 >exit
Bye
stefan@stefan-desktop:~/Docker-Projects/MySQL-Demo$ docker exec -it client-2 mysql -ubob -h mysql-8-server -psecret --prompt "Client 2 >"
ERROR 1045 (28000): Access denied for user 'bob'@'172.28.1.4' (using password: YES)
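
Being able to connect is only half the story: so far bob only has USAGE and no privileges on the world database mentioned earlier. A minimal sketch of what granting him access could look like (the exact privilege list depends on what the account is supposed to do):

-- Executed as root on the server; adjust the privilege list to your needs
GRANT SELECT, INSERT, UPDATE, DELETE ON world.* TO 'bob'@'172.28.1.3';
SHOW GRANTS FOR 'bob'@'172.28.1.3';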

These are the very basics of MySQL account management. In the future, I plan to go into more detail.

Percona Xtrabackup – Some General Tips

When you stream Xtrabackup to a gzipped tar file, the archive also contains the information about the binary log position the server was at when the backup was started. This information is needed to start the slave at the same position again.

time nice -n 15 xtrabackup --backup --slave-info --safe-slave-backup  --stream=tar | gzip -c > /var/backup/percona.dump.tar.gz


In order to get that position without extracting the archive file, you can use the following command

tar -xOf /var/backup/percona.dump.tar.gz xtrabackup_binlog_info
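
The file contains the binary log file name and position at the time of the backup (and, because of --slave-info, the master's coordinates if the backup was taken from a slave). These are the values you would plug into CHANGE MASTER TO when pointing a freshly restored slave at its master; a sketch with placeholder values, not taken from any real output:

-- Placeholder host and coordinates: substitute the values from xtrabackup_binlog_info
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='replication',
  MASTER_PASSWORD='SECRET',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=456789;
START SLAVE;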

You can list all files in the archive like this:

tar -ztvf percona.dump.tar.gz


Note that both operations can take quite some time and resources.

Lightning fast integration tests with Docker, MySQL and tmpfs

Integration tests that involve database operations require tearing down and re-initializing a database multiple times. Although most developer machines offer an SSD and enough RAM, database initialization can nevertheless consume a considerable amount of time.

Docker allows defining volumes that are mounted directly into memory by using tmpfs. We can utilize this feature to speed up disk-heavy operations such as database imports, by moving the data from the disk into memory.

The following example measures the time for writing 1GB worth of data to an SSD:

dd if=/dev/zero of=/tmp/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.22033 s, 484 MB/s


For comparison, the following steps create a RAM disk and write the data to memory.

$ sudo mount -o size=1G -t tmpfs none /tmp/tmpfs

$ dd if=/dev/zero of=/tmp/tmpfs/output bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.309017 s, 3.5 GB/s



As you can see, writing 1GB to memory is about 7x faster. With the following docker run command, you can spin up a default MySQL container whose data directory resides in a tmpfs.

docker run -d \
  --rm \
  --name mysql-56-in-memory \
  -p 3307:3306 \
  --tmpfs /var/lib/mysql:rw,noexec,nosuid,size=1024m \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=TRUE \
  -e MYSQL_DATABASE=dev \
  mysql:5.6



The arguments of Docker run mean the following

  * --rm: delete the container once it is stopped
  * --name: a name for the container
  * -p: map the host's port 3307 to port 3306 inside the container. This allows running multiple MySQL containers in parallel and connecting to them from the host via the specified port
  * --tmpfs: this mounts the data directory of the container into a RAM disk. It should be writable (rw). noexec prevents the execution of binaries, nosuid prevents changing the permission flags, and size specifies the space occupied by the tmpfs partition in memory. Adapt this to your use case. The minimum for MySQL is around 200MB; add the space needed for your data, indices etc.
  * MYSQL_ALLOW_EMPTY_PASSWORD does what it implies
  * MYSQL_DATABASE defines the name of a database to be created

If you run this command you can connect to the container like this: mysql -u root -h 127.0.0.1 -P 3307

The container behaves like a normal MySQL database, except that the data is not persisted on a hard disk, but only stored in ephemeral memory. If you stop the container, it will be removed by Docker, and if you reboot the machine the data will be gone. For obvious reasons this is only a good idea for test data that can be re-created at any time.

You can achieve the same also with Docker Compose if you would like to orchestrate multiple containers.

version: '3'
services:
  mysql-56-integration:
    container_name: mysql-56-in-memory
    restart: unless-stopped
    image: mysql:5.6
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD='true'
      - MYSQL_HOST=''
    volumes:
      - data:/var/cache
    ports:
      - "3307:3306"

volumes:
  data:
    driver_opts:
      type: tmpfs
      device: tmpfs

Measure Replication Delay with Percona Toolkit Heartbeat (pt-heartbeat)

Replication lag occurs when the slave cannot keep up with the changes being made on the master instance. The reasons for a lag can be network congestion, poor performance on the slave or service interruptions of the slave mechanism, leading to a large pile of work that the slave has to catch up on. When you use SHOW SLAVE STATUS, you can monitor the current delay, which is indicated in the Seconds_Behind_Master column. Ideally, this value is zero, but the smallest granularity you get for this delay counter is one second.

Percona offers a tool that allows measuring the delay more accurately, by writing timestamps into the master database and calculating the difference when they arrive at the slave: [pt-heartbeat][1]

On the Master

Create a dedicated database schema called percona and add the following table.
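
If the schema does not exist yet, creating it is a one-liner (assuming the default character set is fine for you):

CREATE DATABASE IF NOT EXISTS percona;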

CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (
    ts                    varchar(26) NOT NULL,
    server_id             int unsigned NOT NULL PRIMARY KEY,
    file                  varchar(255) DEFAULT NULL,    -- SHOW MASTER STATUS
    position              bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS
    relay_master_log_file varchar(255) DEFAULT NULL,    -- SHOW SLAVE STATUS
    exec_master_log_pos   bigint unsigned DEFAULT NULL  -- SHOW SLAVE STATUS
);


Then you can launch the script on the master, where it will write new timestamps periodically as long as the script is running.

pt-heartbeat -D percona --update --utc h=127.0.0.1,u=root,p=SECRET,P=3306

The -D flag specifies the database schema where the table for the heartbeat data is stored. The --update flag is needed on the master for updating the table, and the last part is the DSN, which specifies host address, user name and password. Very important is the --utc flag, which ensures that the timestamps are interpreted as UTC, regardless of the timezone setting.

On the Slave

Create a user for reading the replicated heartbeat table like this:

GRANT REPLICATION SLAVE, PROCESS ON *.* TO 'percona_checksum'@'10.20.30.40' IDENTIFIED BY 'SECRET';
GRANT SELECT ON percona.* TO 'percona_checksum'@'10.20.30.40';
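
Once pt-heartbeat is writing on the master, you can verify on the slave that the heartbeat row actually arrives before starting the monitor; with the table defined above this is a simple query:

-- Run on the slave; the timestamp should advance every time you re-run the query
SELECT ts, server_id FROM percona.heartbeat;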


Then you can run the script and point it to the slave. It will output precise delay counts in fractions of seconds.

pt-heartbeat h=127.0.0.1,u=percona_checksum,p=SECRET,P=3306 -D percona --monitor --utc --master-server-id 1

Notice the different DSN, the --monitor flag and the --master-server-id option, which of course needs to be the id of your master. You need this because the tool supports hierarchies of masters, so it has to know which one to compare against.

Results

The results will look similar to this:

0.09s [  0.00s,  0.00s,  0.00s ]
0.02s [  0.20s,  0.00s,  0.00s ]
0.09s [  0.00s,  0.00s,  0.00s ]
0.03s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.09s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.10s [  0.01s,  0.00s,  0.00s ]
0.12s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.01s,  0.00s,  0.00s ]
0.11s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.09s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.02s,  0.00s,  0.00s ]
0.08s [  0.03s,  0.01s,  0.00s ]


The output is the current delay followed by moving averages over 1m, 5m and 15m, as you might know from your favorite OS already.

Have a look at the [official documentation][1]; there are a lot of options available.





<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Measure+Replication+Delay+with+Percona+Toolkit+Heartbeat+%28pt-heartbeat%29" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/12/06/measure-replication-delay-with-percona-toolkit-heartbeat-pt-heartbeat/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: https://www.percona.com/doc/percona-toolkit/LATEST/pt-heartbeat.html

Setup AWS MySQL 5.6 Aurora as a Slave for an external Master with SSL

Setting up Aurora as a slave for an external MySQL server that acts as the master is a bit tricky. Of course we want a secured connection. For this reason we need to create client certificates to be used by AWS RDS. The steps below should work for RDS as well.

Generate and Sign the Certificates

The process is actually simple, but AWS is picky about how you generate the certificates. I was using a SHA flag that was accepted by a regular MySQL 5.6 instance, but caused a cryptic (pun intended) MySQL 2026 Generic SSL error, and it was quite hard to find the source. Also note that you need to have different common names (CN) for all three certificate pairs. They do not necessarily need to match the actual domain name, but they need to be different.

First we need to create the certificate authority that can sign the keys

# Generate a certificate authority key pair
openssl genrsa 2048 > ca-key.pem
# Notice the CN name. It needs to be different for all of the three key pairs that we create!
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=ca.mysql"

Then create the server key pair

#Generate a server key. Note again the different CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=server.mysql"
# Convert the format
openssl rsa -in server-key.pem -out server-key.pem
# Sign it
openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

Finally, we generate a client certificate and its key. You can repeat these steps to generate multiple certificates for clients.

# Again, note the CN
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem -subj "/C=AT/ST=Tirol/L=Innsbruck/O=The Good Corp/OU=IT Department/CN=client.mysql"
# Convert
openssl rsa -in client-key.pem -out client-key.pem
# Sign
openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem
# Verify
openssl verify -CAfile ca.pem server-cert.pem client-cert.pem

Now we have all the certs we need.

Master Setup

The setup is pretty standard. Add the server certificates to the MySQL configuration of your master and restart.

# SSL Server Certificate
ssl-ca=/etc/mysql/ssl/ca.pem
ssl-cert=/etc/mysql/ssl/server-cert.pem
ssl-key=/etc/mysql/ssl/server-key.pem

Then create a user for the slave

CREATE USER 'aws'@'%' IDENTIFIED BY 'SECRET';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'aws'@'%' IDENTIFIED BY 'SECRET' REQUIRE SSL;
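
Before wiring up Aurora, it does not hurt to double-check that the account really enforces an encrypted connection. In MySQL 5.6 the requirement is stored with the account, so either of the following should reflect it (REQUIRE SSL shows up as ssl_type 'ANY'):

SHOW GRANTS FOR 'aws'@'%';
SELECT user, host, ssl_type FROM mysql.user WHERE user = 'aws';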


Slave Setup

On AWS you do not have the SUPER privilege, but you can use stored procedures provided by Amazon to set up the slave.

Start fresh by removing old records. If there was no previous setup, there might be an error.

CALL mysql.rds_remove_binlog_ssl_material;
CALL mysql.rds_reset_external_master;



Now you need to pass the client certificate data as a JSON to AWS Aurora.

CALL mysql.rds_import_binlog_ssl_material('{"ssl_ca":"-----BEGIN CERTIFICATE----- MIIBAgMBVRpcm9sMRIw... ... -----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE----- KAoIBAQCzn28awhyN8V56Z2bskCiMhJt4 ... -----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY----- SbeLNsRzrPoCVGGqwqR6gE6AZu ... -----END RSA PRIVATE KEY-----"}');



A message that the SSL data was accepted will appear if you pasted the certificate, the key and the CA certificate correctly.

Finally, start the replication and check the status

CALL mysql.rds_start_replication;
SHOW SLAVE STATUS\G

Tests and Troubleshooting

On the master, you can check if the slave even tries to connect for instance with tcpdump. In the example below the IP 1.2.3.4 would be the AWS gateway address as seen by your firewall.

sudo tcpdump src host 1.2.3.4 -vv




<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Setup+AWS+MySQL+5.6+Aurora+as+a+Slave+for+an+external+Master+with+SSL" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/11/30/setup-aws-mysql-5-6-aurora-as-a-slave-for-an-external-master-with-ssl/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

Grafana and InfluxDB with SSL inside a Docker Container

Self-signed SSL certificates

On the host, create a directory for storing the self-signed SSL certificates. This directory will be mounted into the Grafana container as well as into the InfluxDB container at /var/ssl. Create the self-signed SSL certificates as follows:

mkdir -p /docker/ssl
cd /docker/ssl/
# Generate a private key
openssl genrsa -des3 -out server.key 1024
# Generate CSR
openssl req -new -key server.key -out server.csr
# Remove password
openssl rsa -in server.key -out server.key
# Generate self signed cert
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Set permissions
sudo chmod 644 server.crt
sudo chmod 600 server.key

Next, create a config directory and individual configuration files for Grafana and InfluxDB: mkdir conf

Grafana

In the file ./conf/grafana/defaults.ini set the protocol to https and provide the paths to the mounted ssl directory in the container.

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = https
...
...
# https certs & key file
cert_file = /var/ssl/server.crt
cert_key = /var/ssl/server.key
...


InfluxDB

The file ./conf/influxdb/influxdb.conf is also pretty simple. Add a [http] category and add the settings:

[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"

[http]
  https-enabled = true
  https-certificate = "/var/ssl/server.crt"
  https-private-key = "/var/ssl/server.key"



Environment

You can set environment variables in env files for the services.

env.grafana

GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel,natel-plotly-panel,grafana-simple-json-datasource



env.influxdb

INFLUXDB_REPORTING_DISABLED=true
INFLUXDB_DB=
INFLUXDB_HTTP_AUTH_ENABLED=true
INFLUXDB_ADMIN_USER=admin
INFLUXDB_ADMIN_PASSWORD=
INFLUXDB_USER=
INFLUXDB_USER_PASSWORD=
INFLUXDB_WRITE_USER=
INFLUXDB_WRITE_USER_PASSWORD=



Docker Compose

Now you can launch the services by using docker-compose up with the following file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /docker/ssl:/var/ssl
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /docker/ssl:/var/ssl
      - /docker/conf/grafana/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:

Let's Encrypt Setup

If you require valid certificates, you can also use certificates from Let's Encrypt.

First, create the certificates on the host:

certbot certonly --standalone --preferred-challenges http --renew-by-default -d iot.example.org


Then use this docker-compose file.

version: '2'

services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
      - "8090:8090"
    env_file:
      - 'env.influxdb'
    volumes:
      - data-influxdb:/var/lib/influxdb
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    links:
      - influxdb
    env_file:
      - 'env.grafana'
    volumes:
      - data-grafana:/var/lib/grafana
      - /etc/letsencrypt/live/iot.example.org/fullchain.pem:/var/ssl/server.crt
      - /etc/letsencrypt/live/iot.example.org/privkey.pem:/var/ssl/server.key
      - /docker/conf/defaults.ini:/usr/share/grafana/conf/defaults.ini

volumes:
  data-influxdb:
  data-grafana:

Unbrick a Netgear WNR3500L V2

It finally happened... After experimenting with alternative [firmware][1] for my WNR3500L router, I uploaded the wrong version and converted the device into a rather expensive paperweight. After some searching, I found this page, which explains how to revitalize the router by using a USB-TTL converter cable. You can get one (e.g. a [CP2102 USB to TTL converter module][2]) for a few bucks on eBay, but expect a few weeks for the delivery from China. If you follow the tutorial on the OpenRouters page closely, it should work out quite nicely.

Update: 2018-01-21

It happened again. I flashed an unsupported version of LEDE onto the device and it did not want to boot again. I followed the same procedure as described [here][3], but the router was complaining about a version mismatch between the device ID and the image ID. Turns out, I had to explicitly use binary mode for FTP. Here are the steps again.

  1. Connect the serial cables as shown
  2. Make sure router is off
  3. Launch minicom in a terminal and make sure the settings are valid for ttyUSB0
  4. Boot the router and immediately press Ctrl+C in the terminal
  5. Type tftpd in the prompt, the router should start an FTP server on 192.168.1.1
  6. Open a second terminal, cd into the directory where the firmware is
  7. Make sure it's the right version 🙂
  8. Start the FTP client: ftp 192.168.1.1
  9. Change to binary mode (this is very important): mode binary
  10. Move the file: put FIRMWARE.chr
  11. The router should detect the file and process it
  12. Reboot and enjoy

Connecting a Serial TTYUSB Device to the Router


When you see this message below, you need to abort the boot process.

Welcome to minicom 2.7

OPTIONS: I18n
Compiled on Feb  7 2016, 13:37:27.
Port /dev/ttyUSB0, 17:47:29

Press CTRL-A Z for help on special keys
Found a Samsung NAND flash with 2048B pages or 128KB blocks; total size 128MB
Decompressing...done


CFE for WNR3500Lv2 version: v1.0.9
Build Date: Fri May  6 11:54:17 CST 2011 
Init Arena
Init Devs.
Boot partition size = 262144(0x40000)
NFLASH Boot partition size = 524288(0x80000)
et0: Broadcom BCM47XX 10/100/1000 Mbps Ethernet Controller 5.60.136 
CPU type 0x19749: 480MHz
Tot mem: 131072 KBytes

Device eth0:  hwaddr 84-1B-5E-4E-FF-84, ipaddr 192.168.1.1, mask 255.255.255.0
        gateway not set, nameserver not set
Checking crc...done.
Loader:raw Filesys:raw Dev:flash0.os File: Options:(null)
Loading: .. 3848 bytes read
Entry at 0x80001000
Closing network.
Starting program at 0x80001000

When pressing Ctrl+C does not work, make sure your minicom settings look like this:

A - Serial Device            : /dev/ttyUSB0                           |
B - Lockfile Location        : /var/lock                              |
C - Callin Program           :                                        |
D - Callout Program          :                                        |
E - Bps/Par/Bits             : 115200 8N1                             |
F - Hardware Flow Control    : No                                     |
G - Software Flow Control    : No




<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Unbrick+a+Netgear+WNR3500L+V2" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2018/01/21/unbrick-a-netgear-wrt3500l-v2/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: http://tomatousb.org/mods
 [2]: http://rover.ebay.com/rover/0/e11400.m1842.l3160/7?euid=c40a3d49f9c34424a8772822b99d9e6b&loc=http%3A%2F%2Fcgi.ebay.de%2Fws%2FeBayISAPI.dll%3FViewItem%26item%3D280736990675%26ssPageName%3DADME%3AL%3AOC%3ADE%3A3160
 [3]: https://www.myopenrouter.com/article/how-install-third-party-firmware-netgear-wnr3500lv2-serial-cable-linux
 [4]: ./media/2013/04/IMG_20130405_171101.jpg

Flashing a OnePlus One from CM to LineageOS

As Cyanogen Inc. closed down its operations in December 2016, CyanogenMod was affected too and my OnePlus One (OPO) did not receive updates anymore. This is not ideal, as new and old bugs will remain. For instance, my phone often did not reconnect to the 4G network when a Wi-Fi connection was lost, which was very annoying. For this reason, it was about time to upgrade to a new OS: LineageOS.

LineageOS

[LineageOS][1] is a fork of CyanogenMod and continues the quite successful project for our benefit. [LineageOS][2] 14 is compatible with Android 7.1 and very easy to install. These are the steps I had to follow.

How to Upgrade

  • Create a backup with Helium. Use the [Chrome Helium app][3] if the app on your mobile phone refuses to start.
  • Download and install [adb][4]
  • Download the TWRP custom recovery: https://twrp.me/Devices/
  • Download [LineageOS][5]. The codename for the OPO is bacon.
  • Download the [Google apps mini distribution][6] (stock is too large) [here][7].
  • Enable developer tools and connect the phone with a USB cable
  • Reboot the device with adb: ./adb reboot bootloader
  • Check if the device is recognized: ./fastboot devices
  • Enable OEM unlock: fastboot oem unlock
  • Install the custom recovery: fastboot flash recovery twrp-x.x.x-x-bacon.img
  • Reboot into the new recovery: with the device powered down, hold the Volume Down and Power buttons.

Copy the files to the device

# Lineage
adb push ~/Downloads/OPO-Upgrade/lineage-14.1-20170918-nightly-bacon-signed.zip /sdcard/
# Google Apps
adb push ~/Downloads/OPO-Upgrade/open_gapps-arm-7.1-mini-20170924.zip /sdcard/


Install both zip files by selecting first the LineageOS and then the Google Apps zip file.

That's it. Reboot and begin with the setup or restore the backup.

Update 28.09.2017

The problem that the phone would not reconnect to 3G/4G after losing the Wi-Fi signal still persisted with LineageOS. A friend recommended flashing the firmware of the device. Installing the version 2016_1-25_.4.0.1.c7-00011 downloaded from [here][8] solved the issue for now. No more connection problems so far.

<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Flashing+a+OnePlus+One+from+CM+to+LineageOS" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2017/09/24/flashing-a-oneplus-one-from-cm-to-lineageos/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: https://en.wikipedia.org/wiki/LineageOS
 [2]: https://lineageos.org/
 [3]: https://chrome.google.com/webstore/detail/helium-backup/gpglbgbpeobllokpmeagpoagjbfknanl
 [4]: https://wiki.lineageos.org/adb_fastboot_guide.html
 [5]: https://download.lineageos.org/bacon
 [6]: https://wiki.lineageos.org/gapps.html#installation
 [7]: http://opengapps.org/?api=7.1&variant=nano
 [8]: https://forum.xda-developers.com/oneplus-one/general/firmware-modem-oneplus-one-lollipop-t3300907

Verifying Replication Consistency with Percona’s pt-table-checksum

Replication is an important concept for improving database performance and security. In this blog post, I would like to demonstrate how the consistency between a MySQL master and a slave can be verified. We will create two Docker containers, one for the master and one for the slave.

Installing the Percona Toolkit

The Percona Toolkit is a collection of useful utilities, which can be obtained for free from the company's portal. The following commands install the prerequisites, download the package and finally install it.

sudo apt-get install -y wget libdbi-perl libdbd-mysql-perl libterm-readkey-perl libio-socket-ssl-perl
wget https://www.percona.com/downloads/percona-toolkit/3.0.4/binary/debian/xenial/x86_64/\
    percona-toolkit_3.0.4-1.xenial_amd64.deb
sudo dpkg -i percona-toolkit_3.0.4-1.xenial_amd64.deb 

Setting up a Test Environment with Docker

The following command creates and starts a Docker container for the master. Note that these are minimal examples and are not suitable for a serious environment.

docker run --name mysql_master -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --log-bin \
   --binlog-format=ROW --server-id=1


Get the IP address of the master container:

docker inspect mysql_master | grep IPAddress

"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2"

You can connect to this container like this and verify the server id:

stefan@Lenovo ~/Docker-Projects $ mysql -u root -h 172.17.0.2
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.35-log MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+
1 row in set (0,00 sec)

We repeat the command for the slave, but use a different server id and name:

docker run --name mysql_slave -e MYSQL_ALLOW_EMPTY_PASSWORD=true -d mysql:5.6 --server-id=2


For simplicity, we did not use Docker links, but will rather use IP addresses assigned by Docker directly.

Replication Setup

First, we need to setup a user with replication privileges. This user will connect from the slave to the master.

On the host, get the IP address of the slave container:

$ docker inspect mysql_slave | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.3",
"IPAddress": "172.17.0.3",

Then log in to the MySQL console of the master and grant the permissions:

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'172.17.0.3' IDENTIFIED BY 'SLAVE-SECRET';

Get the current binlog position:

mysql> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| mysqld-bin.000002 |      346 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0,00 sec)

Now log into the slave container and add the connection details for the master:

## Connect to the MySQL Slave instance
$ mysql -u root -h 172.17.0.3

### Setup the slave

mysql> CHANGE MASTER TO   
  MASTER_HOST='172.17.0.2',
  MASTER_PORT=3306,
  MASTER_USER='replication', 
  MASTER_PASSWORD='SLAVE-SECRET',
  MASTER_LOG_FILE='mysqld-bin.000002', 
  MASTER_LOG_POS=346;
Query OK, 0 rows affected, 2 warnings (0,05 sec)

### Start and check
mysql>   start slave;
Query OK, 0 rows affected (0,01 sec)

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.17.0.2
                  Master_User: percona
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000002
          Read_Master_Log_Pos: 346
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Now our simple slave setup is running.

Get some test data

Let's download the Sakila test database and import it into the master. It will be replicated immediately.

wget http://downloads.mysql.com/docs/sakila-db.tar.gz
~/Docker-Projects $ tar xvfz sakila-db.tar.gz 

mysql -u root -h 172.17.0.2 < sakila-db/sakila-schema.sql 
mysql -u root -h 172.17.0.2 < sakila-db/sakila-data.sql


Verify that the data is on the slave as well:

mysql -u root -h 172.17.0.3 -e "USE sakila; SHOW TABLES;"
+------------------+
| Tables_in_sakila |
+------------------+
| actor            |
| actor_info       |
| address          |
| category         |
| city             |
| country          |
| customer         |
...
| store            |
+------------------+



After our setup is completed, we can proceed with Percona pt-table-checksum.

Percona pt-table-checksum

The Percona pt-table-checksum tool requires the connection information of the master and the slave in a specific format. This is called the DSN (data source name), which is a comma-separated string. We can store this information in a dedicated database called percona in a table called DSN_Table. We create this table on the master. Note that the data gets replicated to the slave within the blink of an eye.

CREATE DATABASE percona;
USE percona;

CREATE TABLE `DSN_Table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `dsn` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
);



The next step involves creating permissions on the slave and the master!

GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO 'percona'@'172.17.0.1' IDENTIFIED BY 'SECRET';
GRANT ALL PRIVILEGES ON percona.* TO 'percona'@'172.17.0.1';

The percona user is needed to run the script. Note that the IP address this time is that of the (Docker) host, which has the IP 172.17.0.1 by default. In real-world scenarios, this script would be run either on the master or on the slave directly.

Now we need to add the information about the slave to the table we created. The Percona tool could also read this from the process list, but it is more reliable if we add the information ourselves. To do so, we add a record to the table we just created, which describes the slave DSN:

INSERT INTO percona.DSN_Table VALUES (1,'h=172.17.0.3,u=percona,p=SECRET,P=3306');


The pt-table-checksum tool then connects to the master instance and the slave. It computes checksums of all databases and tables and compares the results. You can use the tool like this:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table -h 172.17.0.2 -P 3306 -u percona -pSECRET
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T10:13:11      0      0        0       1       0   0.020 mysql.columns_priv
09-10T10:13:11      0      0        3       1       0   0.016 mysql.db
09-10T10:13:11      0      0        0       1       0   0.024 mysql.event
09-10T10:13:11      0      0        0       1       0   0.014 mysql.func
09-10T10:13:11      0      0       40       1       0   0.026 mysql.help_category
09-10T10:13:11      0      0      614       1       0   0.023 mysql.help_keyword
09-10T10:13:11      0      0     1224       1       0   0.022 mysql.help_relation
09-10T10:13:12      0      0      585       1       0   0.266 mysql.help_topic
09-10T10:13:12      0      0        0       1       0   0.031 mysql.ndb_binlog_index
09-10T10:13:12      0      0        0       1       0   0.024 mysql.plugin
09-10T10:13:12      0      0        6       1       0   0.287 mysql.proc
09-10T10:13:12      0      0        0       1       0   0.031 mysql.procs_priv
09-10T10:13:12      0      1        2       1       0   0.020 mysql.proxies_priv
09-10T10:13:12      0      0        0       1       0   0.024 mysql.servers
09-10T10:13:12      0      0        0       1       0   0.017 mysql.tables_priv
09-10T10:13:12      0      0     1820       1       0   0.019 mysql.time_zone
09-10T10:13:12      0      0        0       1       0   0.015 mysql.time_zone_leap_second
09-10T10:13:12      0      0     1820       1       0   0.267 mysql.time_zone_name
09-10T10:13:13      0      0   122530       1       0   0.326 mysql.time_zone_transition
09-10T10:13:13      0      0     8843       1       0   0.289 mysql.time_zone_transition_type
09-10T10:13:13      0      1        4       1       0   0.031 mysql.user
09-10T10:13:13      0      0        1       1       0   0.018 percona.DSN_Table
09-10T10:13:13      0      0      200       1       0   0.028 sakila.actor
09-10T10:13:13      0      0      603       1       0   0.023 sakila.address
09-10T10:13:13      0      0       16       1       0   0.033 sakila.category
09-10T10:13:13      0      0      600       1       0   0.023 sakila.city
09-10T10:13:13      0      0      109       1       0   0.029 sakila.country
09-10T10:13:14      0      0      599       1       0   0.279 sakila.customer
09-10T10:13:14      0      0     1000       1       0   0.287 sakila.film
09-10T10:13:14      0      0     5462       1       0   0.299 sakila.film_actor
09-10T10:13:14      0      0     1000       1       0   0.027 sakila.film_category
09-10T10:13:14      0      0     1000       1       0   0.032 sakila.film_text
09-10T10:13:14      0      0     4581       1       0   0.276 sakila.inventory
09-10T10:13:15      0      0        6       1       0   0.030 sakila.language
09-10T10:13:15      0      0    16049       1       0   0.303 sakila.payment
09-10T10:13:15      0      0    16044       1       0   0.310 sakila.rental
09-10T10:13:15      0      0        2       1       0   0.029 sakila.staff
09-10T10:13:15      0      0        2       1       0   0.020 sakila.store



The result shows a difference in a MySQL internal table for permissions. This is obviously not what we are interested in, as permissions are individual to a host. So we exclude the MySQL internal database and also the percona database, because neither is what we are interested in. Also, in order to test that the tool works, we delete the last five category assignments from the table with mysql -u root -h 172.17.0.3 -e "DELETE FROM sakila.film_category WHERE film_id > 995;" and update a row in the city table with:

mysql -u root -h 172.17.0.3 -e "UPDATE sakila.city SET city='Innsbruck' WHERE city_id=590;"

Now execute the command again:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona -h 172.17.0.2 -P 3306 -u percona -pSECRET
            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T10:46:33      0      0      200       1       0   0.017 sakila.actor
09-10T10:46:34      0      0      603       1       0   0.282 sakila.address
09-10T10:46:34      0      0       16       1       0   0.034 sakila.category
09-10T10:46:34      0      1      600       1       0   0.269 sakila.city
09-10T10:46:34      0      0      109       1       0   0.028 sakila.country
09-10T10:46:34      0      0      599       1       0   0.285 sakila.customer
09-10T10:46:35      0      0     1000       1       0   0.297 sakila.film
09-10T10:46:35      0      0     5462       1       0   0.294 sakila.film_actor
09-10T10:46:35      0      1     1000       1       0   0.025 sakila.film_category
09-10T10:46:35      0      0     1000       1       0   0.031 sakila.film_text
09-10T10:46:35      0      0     4581       1       0   0.287 sakila.inventory
09-10T10:46:35      0      0        6       1       0   0.035 sakila.language
09-10T10:46:36      0      0    16049       1       0   0.312 sakila.payment
09-10T10:46:36      0      0    16044       1       0   0.320 sakila.rental
09-10T10:46:36      0      0        2       1       0   0.030 sakila.staff
09-10T10:46:36      0      0        2       1       0   0.027 sakila.store

You see that there is a difference in the tables sakila.city and in the table sakila.film_category. The tool does not report the actual number of differences, but rather the number of different chunks. To get the actual differences, we need to use a different tool, which utilises the checksum table that the previous step created.
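
If you want to inspect the differences on the SQL level before syncing anything, you can also query the checksums table that pt-table-checksum maintains; a query along the lines of the one suggested in the Percona documentation:

-- Run this on the slave: the master_* columns are populated there via replication
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
     master_cnt <> this_cnt
  OR master_crc <> this_crc
  OR ISNULL(master_crc) <> ISNULL(this_crc)
)
GROUP BY db, tbl;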

Show the differences with pt-table-sync

The pt-table-sync tool is the counterpart of the pt-table-checksum utility. It can print or even replay the SQL statements that would bring the slave back in sync with the master. We can run a dry run first, as the tool is potentially dangerous.

pt-table-sync --dry-run  --replicate=percona.checksums --sync-to-master h=172.17.0.3 -P 3306 \
   -u percona -pSECRET --ignore-databases mysql,percona
# NOTE: --dry-run does not show if data needs to be synced because it
#       does not access, compare or sync data.  --dry-run only shows
#       the work that would be done.
# Syncing via replication P=3306,h=172.17.0.3,p=...,u=percona in dry-run mode, without accessing or comparing data
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      0      0 Chunk     08:57:51 08:57:51 0    sakila.city
#      0       0      0      0 Nibble    08:57:51 08:57:51 0    sakila.film_category

With --dry-run, you only see affected tables, but not the actual data, because the tool does not really access the tables in question. Use --print additionally or instead of --dry-run to get a list:

pt-table-sync --print --replicate=percona.checksums --sync-to-master h=172.17.0.3 -P 3306 \
  -u percona -pSECRET --ignore-databases mysql,percona
REPLACE INTO `sakila`.`city`(`city_id`, `city`, `country_id`, `last_update`) VALUES \
   ('590', 'Yuncheng', '23', '2006-02-15 04:45:25') 
  \ /*percona-toolkit src_db:sakila src_tbl:city  ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('997',... 
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('998', '11 ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('999', '3', ...
REPLACE INTO `sakila`.`film_category`(`film_id`, `category_id`, `last_update`) VALUES ('1000', '5', ... 

The output shows how we can rename Innsbruck back to Yuncheng again and also provides the REPLACE statements to get the deleted records back. When we replace --print with --execute, the data gets written to the master and replicated to the slave. To allow this, we need to set the permissions on the master:

GRANT INSERT, UPDATE, DELETE ON sakila.* TO 'percona'@'172.17.0.1';
pt-table-sync --execute  --replicate=percona.checksums --check-child-tables \ 
  --sync-to-master h=172.17.0.3 -P 3306 -u percona -pSECRET --ignore-databases mysql,percona
REPLACE statements on sakila.city can adversely affect child table `sakila`.`address` 
   because it has an ON UPDATE CASCADE foreign key constraint. 
   See --[no]check-child-tables in the documentation for more information. 
   --check-child-tables error  while doing sakila.city on 172.17.0.3

This error indicates that updating the city table has consequences, because it is referenced by a foreign key in a child table. In this example, we are bold and ignore this warning. This is absolutely not recommended for real-world scenarios.

pt-table-sync --execute  --replicate=percona.checksums --no-check-child-tables \
   --no-foreign-key-checks --sync-to-master h=172.17.0.3 -P 3306 -u percona -pSECRET \ 
   --ignore-databases mysql,percona

The option --no-check-child-tables ignores child tables and --no-foreign-key-checks ignores foreign keys.

Run the checksum command again to verify that the data has been restored:

pt-table-checksum --replicate=percona.checksums --create-replicate-table --empty-replicate-table \ 
   --recursion-method=dsn=t=percona.DSN_Table --ignore-databases mysql,percona 
   -h 172.17.0.2 -P 3306 -u percona -pSECRET

            TS ERRORS  DIFFS     ROWS  CHUNKS SKIPPED    TIME TABLE
09-10T11:24:42      0      0      200       1       0   0.268 sakila.actor
09-10T11:24:42      0      0      603       1       0   0.033 sakila.address
09-10T11:24:42      0      0       16       1       0   0.029 sakila.category
09-10T11:24:42      0      0      600       1       0   0.275 sakila.city
09-10T11:24:42      0      0      109       1       0   0.023 sakila.country
09-10T11:24:43      0      0      599       1       0   0.282 sakila.customer
09-10T11:24:43      0      0     1000       1       0   0.046 sakila.film
09-10T11:24:43      0      0     5462       1       0   0.284 sakila.film_actor
09-10T11:24:43      0      0     1000       1       0   0.036 sakila.film_category
09-10T11:24:43      0      0     1000       1       0   0.279 sakila.film_text
09-10T11:24:44      0      0     4581       1       0   0.292 sakila.inventory
09-10T11:24:44      0      0        6       1       0   0.031 sakila.language
09-10T11:24:44      0      0    16049       1       0   0.309 sakila.payment
09-10T11:24:44      0      0    16044       1       0   0.325 sakila.rental
09-10T11:24:44      0      0        2       1       0   0.029 sakila.staff
09-10T11:24:44      0      0        2       1       0   0.028 sakila.store

0 DIFFS, we are done!

Fixing Random Freezes with Ubuntu 16.04 LTS, Intel Skylake and an Nvidia GPU

My Lenovo ThinkCentre m900 (10FHCTO1WW) with an Intel i7-6700 showed weird and random freezes from day 1 when trying to install Mint 18 / Ubuntu 16 with any kernel newer than 3.x. After investigating for quite some hours, I gave up and installed Ubuntu 14.04 LTS on it. The device is certified for it, but the old version did not support all features and even some basic things such as audio did not work. At least the random freezes were gone and I could work with that machine. Now that the system will not receive updates for much longer, I gave it another try and set up Mint 18.2 (Sonya). Unfortunately, the Lenovo machine froze again after a few minutes, filling up the log with the following error messages.

Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025547] NMI watchdog: BUG: soft lockup - CPU#6 stuck for 23s! [chrome:13814]
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025549] Modules linked in: bnep ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink ...
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025614] CPU: 6 PID: 13814 Comm: chrome Not tainted 4.8.0-53-generic #56~16.04.1-Ubuntu
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025614] Hardware name: LENOVO 10FHCTO1WW/30BC, BIOS FWKT5FA   11/08/2016
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025615] task: ffff8fd736e12dc0 task.stack: ffff8fd71781c000
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025615] RIP: 0010:[<ffffffff90d0b339>]  [<ffffffff90d0b339>] smp_call_function_many+0x1f9/0x250
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025619] RSP: 0018:ffff8fd71781fc00  EFLAGS: 00000202
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025619] RAX: 0000000000000003 RBX: 0000000000000200 RCX: 0000000000000007
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025620] RDX: ffff8fd85dddd920 RSI: 0000000000000200 RDI: ffff8fd85dd9a288
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025620] RBP: ffff8fd71781fc38 R08: 0000000000000000 R09: 00000000000000bf
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025620] R10: 0000000000000008 R11: ffff8fd85dd9a288 R12: ffff8fd85dd9a288
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025621] R13: ffff8fd85dd9a280 R14: ffffffff90c723c0 R15: 0000000000000000
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025622] FS:  00007f6120196a80(0000) GS:ffff8fd85dd80000(0000) knlGS:0000000000000000
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025622] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025622] CR2: 00003a66b8d61000 CR3: 0000000449abd000 CR4: 00000000003406e0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025623] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025623] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025624] Stack:
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025624]  000000000001a240 0100000000000001 ffff8fd6f7434d80 ffffffff90c723c0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025626]  0000000000000000 ffff8fd71781fd10 ffff8fd71781fc68 ffff8fd71781fc60
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025627]  ffffffff90d0b46d ffff8fd6f7434d80 ffff8fd85ddd4508 ffff8fd71781fd08
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025628] Call Trace:
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025630]  [<ffffffff90c723c0>] ? leave_mm+0xd0/0xd0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025631]  [<ffffffff90d0b46d>] on_each_cpu+0x2d/0x60
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025632]  [<ffffffff90c72c2b>] flush_tlb_kernel_range+0x4b/0x80
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025634]  [<ffffffff90de9f56>] __purge_vmap_area_lazy+0x2d6/0x320
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025635]  [<ffffffff90dea0b7>] vm_unmap_aliases+0x117/0x140
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025636]  [<ffffffff90c6e1ae>] change_page_attr_set_clr+0xee/0x4f0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025638]  [<ffffffff90c6f21f>] set_memory_ro+0x2f/0x40
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025639]  [<ffffffff90d7f11a>] bpf_prog_select_runtime+0x2a/0xd0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025641]  [<ffffffff9139a2af>] bpf_prepare_filter+0x37f/0x3f0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025642]  [<ffffffff9139a47c>] bpf_prog_create_from_user+0xbc/0x120
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025643]  [<ffffffff90d43b30>] ? proc_watchdog_cpumask+0xe0/0xe0
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025644]  [<ffffffff90d4410e>] do_seccomp+0x12e/0x610
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025645]  [<ffffffff90c991c6>] ? SyS_prctl+0x46/0x490
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025646]  [<ffffffff90d446fe>] SyS_seccomp+0xe/0x10
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025648]  [<ffffffff9149a876>] entry_SYSCALL_64_fastpath+0x1e/0xa8
Jul 12 18:49:01 FreezeCentre kernel: [ 4888.025648] Code: 94 33 00 3b 05 ed 3a e5 00 89 c1 0f 8d 99 fe ff ff 48 98 49 8b ...


I started the investigation again and found a different trace, which pointed to the graphics card. The important hint and solution came from [SO][1]. Following a few other forum posts, it became clear that the Nvidia drivers do not play nicely with newer kernels for some specific Nvidia cards. So I followed the proposed steps and disabled the card completely. Just removing the card in the BIOS and uninstalling the drivers was not enough. I also had to blacklist the modules for the nouveau kernel driver.

  1. Disable the Nvidia card in the BIOS and use the Intel onchip GPU
  2. Remove all Nvidia packages:  
    `sudo apt-get remove nvidia* && sudo apt autoremove`
  3. Blacklist the nouveau modules by editing the blacklist file (`sudo vim /etc/modprobe.d/blacklist.conf`) and adding:

    blacklist nouveau
    blacklist lbm-nouveau
    options nouveau modeset=0
    alias nouveau off
    alias lbm-nouveau off

    Then disable kernel mode setting for nouveau and rebuild the initramfs:

    echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
    sudo update-initramfs -u

  4. Reboot

The card is not used any more and the freezes stopped.

The lshw output confirms that the card is no longer claimed by any driver:

*-display UNCLAIMED
     description: VGA compatible controller
     product: GK208 [GeForce GT 720]
     vendor: NVIDIA Corporation
     physical id: 0
     bus info: pci@0000:01:00.0
     version: a1
     width: 64 bits
     clock: 33MHz
     capabilities: pm msi pciexpress vga_controller cap_list
     configuration: latency=0

I hope I do not have to revise this article again and that the system remains as stable as it has been for the past six hours.

Using Hibernate Search with Spring Boot

Spring Boot is a framework that makes it much easier to develop Spring based applications by following a convention over configuration principle (while in contrast, Spring critics claim that the framework's principle is rather configuration over everything). In this article, I am going to provide an example of how to achieve the following:

  • Create a simple Web application based on Spring Boot
  • Persist and access data with Hibernate
  • Make it searchable with Hibernate Search (Lucene)

I use Eclipse with a Gradle plugin for convenience. MySQL will be our back-end for storing the data. The full example can be obtained from my [Github Repository][1].

Bootstrapping: Create a Simple Spring Boot Webapp

The easiest way to start with Spring Boot is heading over to [start.spring.io][2] and create a new project. In this example, I will use Gradle for building the application and handling the dependencies and I add Web and JPA starters.


Download the archive to your local drive and extract it to a folder. I called the project SearchaRoo.

Import the Project with Eclipse

Import it as an existing Gradle Project in Eclipse by using the default settings. You will end up with a nice little project structure as shown below:


We have a central application starter class denoted SearchaRooApplication.java, package definitions, application properties and even test classes. The great thing about Spring Boot is that it is very simple to start and that you can debug it like any other local Java application. There is no need for remote debugging or complex application server setups.
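
For reference, the starter class generated by start.spring.io is a plain Spring Boot entry point. A minimal sketch (the exact package name is an assumption) could look like this:

package at.stefanproell;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Entry point of the application; @SpringBootApplication enables component scanning and auto-configuration
@SpringBootApplication
public class SearchaRooApplication {

	public static void main(String[] args) {
		SpringApplication.run(SearchaRooApplication.class, args);
	}
}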

Prepare the Database

We need a few permissions on our MySQL instance before we can start.

CREATE DATABASE spring_employees;
CREATE USER 'dev'@'localhost' IDENTIFIED BY 'sEcReT';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, FILE, REFERENCES, INDEX,
    ALTER, SHOW DATABASES, SUPER, LOCK TABLES, CREATE VIEW, SHOW VIEW 
    on spring_employees.* TO 'dev'@'localhost';
GRANT RELOAD on *.* TO 'dev'@'localhost';
FLUSH PRIVILEGES;


We can then add the connection details into the application.properties file. We will edit this file several times when the complexity of this project increases.

# ===============================
# = JPA / HIBERNATE
# ===============================

# Specify the DBMS
spring.jpa.database = MYSQL

# Show or not log for each sql query
spring.jpa.show-sql = true
spring.datasource.url=jdbc:mysql://127.0.0.1/employees?createDatabaseIfNotExist=true
spring.datasource.username=dev
spring.datasource.password=sEcReT
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

Now the basic database setup is done. We can then start adding model classes.

Getting some Employees on Board

MySQL offers a rather small but well documented sample database called employees, which is hosted on Github. Obtain and import the data as follows:

git clone https://github.com/datacharmer/test_db.git
cd test_db
mysql -u dev -psEcReT < employees.sql

The script creates a new schema called employees and you will end up with a schema like this:


In the course of this article, we are going to model this schema with Java POJOs by annotating the entities and the appropriate fields with JPA.

Dependencies

Before we can start modelling the entities in Java, have a look at the Gradle build file. We include additional dependencies for the MySQL connector and Apache commons.

buildscript {
	ext {
		springBootVersion = '1.5.1.RELEASE'
	}
	repositories {
		mavenCentral()
	}
	dependencies {
		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
	}
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'

jar {
	baseName = 'SearchaRoo'
	version = '0.0.1-SNAPSHOT'
}

sourceCompatibility = 1.8

repositories {
	mavenCentral()
}


dependencies {
	compile('org.springframework.boot:spring-boot-starter-data-jpa')
	compile('org.springframework.boot:spring-boot-starter-web')
	testCompile('org.springframework.boot:spring-boot-starter-test')
	compile("mysql:mysql-connector-java")
	compile('org.apache.commons:commons-lang3:3.5')
}

Modelling Reality

The next step covers modelling the data which we imported with Java POJOs. Obviously this is not the most natural order, because in general you would create the model first and then add data to it, but as we already have the data, we go in this direction. In the application.properties file, set the database to the imported employees database and set the Hibernate ddl-auto property to validate. With this setting, we can confirm that we modelled the Java classes in accordance with the database model defined by the MySQL employees database.

An example of such a class is shown below, the other classes can be found in the Github repository.

package at.stefanproell.model;

import java.util.Date;
import java.util.List;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity(name="employees")
public class Employee {
	@Id
	@Column(name="emp_no")
	private int employee_id;

	@Column(name="birth_date")
	private Date birthdate;

	@Column(name="first_name")
	private String firstname;

	@Column(name="last_name")
	private String lastname;

	@Column(name="gender",columnDefinition = "ENUM('M', 'F', 'UNKNOWN') DEFAULT 'UNKNOWN'")
	@Enumerated(EnumType.STRING)
	private Gender gender;

	@Column(name="hire_date")
	private Date hireDate;

	@OneToMany(mappedBy="employee")
	List<Title> titles;

	@OneToMany(mappedBy="employee")
	List<Department_Employee> employee_department;

	// Setters and getters

}
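
The Gender enum referenced above is not listed in this post; a minimal sketch that is consistent with the ENUM('M', 'F', 'UNKNOWN') column definition could look like this:

package at.stefanproell.model;

// Values mirror the ENUM('M', 'F', 'UNKNOWN') column definition of the gender column
public enum Gender {
	M,
	F,
	UNKNOWN
}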


Now that we have prepared the data model, our schema is fixed and does not change any more. We can deactivate the Hibernate based dynamic generation of the database tables and use the Spring database initialization instead. To verify that we modelled the data correctly, we take the MySQL employee data dump we obtained before and import it into our newly created schema, which maps the Java POJOs.

Importing the Initial Data

In the next step, we import the data from the MySQL employee database into our schema spring_employees. This schema contains the tables that Hibernate created for us. The following script copies the data between the two schemata. If you see an error, then there is an issue with your model.

<pre class="theme:github lang:mysql decode:true">-- The original data is stored in the database called employees
-- Spring created the new schema called spring_employees
USE `spring_employees`;

-- Departments

INSERT INTO `spring_employees`.`departments`
(`dept_no`,
`dept_name`)
SELECT `departments`.`dept_no`,
    `departments`.`dept_name`
FROM `employees`.`departments`;

-- Employees

INSERT INTO `spring_employees`.`employees`
(`emp_no`,
`birth_date`,
`first_name`,
`gender`,
`hire_date`,
`last_name`)
SELECT `employees`.`emp_no`,
    `employees`.`birth_date`,
    `employees`.`first_name`,
    `employees`.`gender`,
    `employees`.`hire_date`,
    `employees`.`last_name`
FROM `employees`.`employees`;

-- Join table 
INSERT INTO `spring_employees`.`dept_emp`
(`emp_no`,
`dept_no`,
`from_date`,
`to_date`)
SELECT 
`dept_emp`.`emp_no`,
    `dept_emp`.`dept_no`,
    `dept_emp`.`from_date`,
    `dept_emp`.`to_date`
FROM `employees`.`dept_emp`;

-- Join table 

INSERT INTO `spring_employees`.`dept_manager`
(
`emp_no`,
`dept_no`,
`from_date`,
`to_date`)
SELECT `dept_manager`.`emp_no`,
    `dept_manager`.`dept_no`,
    `dept_manager`.`from_date`,
    `dept_manager`.`to_date`
FROM `employees`.`dept_manager`;

-- Titles

INSERT INTO `spring_employees`.`titles`
(`emp_no`,
`title`,
`from_date`,
`to_date`)
SELECT `titles`.`emp_no`,
    `titles`.`title`,
    `titles`.`from_date`,
    `titles`.`to_date`
FROM `employees`.`titles`;

-- Salaries

INSERT INTO `spring_employees`.`salaries`
(`emp_no`,
`salary`,
`from_date`,
`to_date`)
SELECT `salaries`.`emp_no`,
    `salaries`.`salary`,
    `salaries`.`from_date`,
    `salaries`.`to_date`
FROM `employees`.`salaries`;


We have now imported the data into the database schema that we defined for our project. Spring can load the schema and initial data during start-up, so we provide two files: one containing the schema and the other one containing the data. To create them, we take two dumps of the database: one containing the schema only, the other one containing the data only.

mysqldump -u dev -psEcReT --no-data --databases spring_employees > src/main/resources/schema.sql
mysqldump -u dev -psEcReT --no-create-info --databases spring_employees > src/main/resources/data.sql

By deactivating the Hibernate data generation and activating the Spring way, the database gets initialized every time the application starts. Change and edit the following lines in the application.properties

spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialize=true
spring.datasource.schema=classpath:/schema.sql
spring.datasource.data=classpath:/data.sql


Before we can import the data with these scripts, make sure to drop the schema and disable foreign key checks at the beginning of the schema file and enable them again at the end. Spring's script runner ignores the conditional MySQL comments (/*! ... */) that mysqldump emits, which is why the foreign key checks have to be disabled with plain statements. So your schema file should contain this:

<pre class="theme:github lang:default decode:true ">DROP DATABASE IF EXISTS `spring_employees`;
SET foreign_key_checks = 0;

-- rest of the schema dump

SET foreign_key_checks = 1;


Also insert the same two foreign key statements into the data file. Note that the import can take a while. If you are happy with the initialized data, you can deactivate the initialization by setting the variable to false: spring.datasource.initialize=false

The application.properties file meanwhile looks like this:

<pre class="theme:github lang:default decode:true "># ===============================
# = JPA / HIBERNATE
# ===============================

# Specify the DBMS
spring.jpa.database = MYSQL

# Show or not log for each sql query
spring.jpa.show-sql = true
spring.datasource.url=jdbc:mysql://127.0.0.1/spring_employees?createDatabaseIfNotExist=true
spring.datasource.username=dev
spring.datasource.password=sEcReT
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.connectionProperties=useUnicode=true;characterEncoding=utf-8;
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
# Do not initialize anything
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialize=false
spring.datasource.schema=classpath:/schema.sql
spring.datasource.data=classpath:/data.sql
spring.datasource.platform=mysql


Adding Hibernate Search

Hibernate Search offers full-text search capabilities by using a dedicated index. We need to add the dependencies to the build file.

<pre class="theme:github lang:default decode:true ">dependencies {
	compile('org.springframework.boot:spring-boot-starter-data-jpa')
	compile('org.springframework.boot:spring-boot-starter-web')
	testCompile('org.springframework.boot:spring-boot-starter-test')
	compile("mysql:mysql-connector-java")
	compile('org.apache.commons:commons-lang3:3.5')
	compile("org.hibernate:hibernate-search-orm:5.5.6.Final")
	compile('org.springframework.boot:spring-boot-starter-test')
	compile('org.springframework.boot:spring-boot-starter-logging')
	compile('org.springframework.boot:spring-boot-starter-freemarker')
}


Refresh the Gradle project after including the search dependencies.

Adding Hibernate Search Dependencies

In this step, we annotate the model POJO classes and introduce the full-text search index. Hibernate Search requires just a few basic settings to get started. Add the following variables to the application.properties file.

# ===============================
# = HIBERNATE SEARCH
# ===============================

# Spring Data JPA will take any properties under spring.jpa.properties.* and
# pass them along (with the prefix stripped) once the EntityManagerFactory is
# created.

# Specify the DirectoryProvider to use (the Lucene Directory)
spring.jpa.properties.hibernate.search.default.directory_provider = filesystem

# Using the filesystem DirectoryProvider you also have to specify the default
# base directory for all indexes (make sure that the application has write
# permissions on such a directory)
spring.jpa.properties.hibernate.search.default.indexBase = /tmp/SearchaRoo/

Please note that storing the Lucene index in the tmp directory is not the best idea, but for testing we can live with this rather volatile location. We also use the filesystem to store the index, as this is the simplest approach.

Create a Service

In order to facilitate Hibernate Search on our data, we add a service class, which offers methods for searching. The service uses a configuration, which is injected by Spring during run time. The configuration is very simple.

package at.stefanproell.service;

import javax.persistence.EntityManager;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;



@Configuration
public class HibernateSearchConfiguration {
	private final Logger logger = LoggerFactory.getLogger(HibernateSearchConfiguration.class);

	@Autowired
	private EntityManager entityManager;

	@Bean
	HibernateSearchService hibernateSearchService() {
		HibernateSearchService hibernateSearchService = new HibernateSearchService(entityManager);
		hibernateSearchService.initializeHibernateSearch();
		return hibernateSearchService;
	}
}


The @Configuration is loaded when Spring builds the application context. It provides a bean of our service, which can then be injected into the application. The service itself provides methods for creating and searching the index. In this example, the search method is very simple: it only searches the first and the last name of an employee and it tolerates one typo in the search term (an edit distance of 1).

<pre class="theme:github lang:default decode:true">package at.stefanproell.service;

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.NoResultException;
import javax.persistence.PersistenceContext;

import org.apache.lucene.search.Query;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;
import org.hibernate.search.query.dsl.QueryBuilder;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;


import at.stefanproell.model.Employee;
import ch.qos.logback.classic.Logger;

@Service
public class HibernateSearchService {

	private final Logger logger = (Logger) LoggerFactory.getLogger(HibernateSearchService.class);

	private final EntityManager entityManager;

	@Autowired
	public HibernateSearchService(EntityManager entityManager) {
		super();
		this.entityManager = entityManager;
	}

	public void initializeHibernateSearch() {
		try {
			// Build the Lucene index for all indexed entities on start-up
			FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager);
			fullTextEntityManager.createIndexer().startAndWait();
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}

	@Transactional
	public List<Employee> fuzzySearch(String searchTerm) {

		FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager);
		QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Employee.class).get();
		Query luceneQuery = qb.keyword().fuzzy().withEditDistanceUpTo(1).withPrefixLength(1).onFields("firstname", "lastname")
				.matching(searchTerm).createQuery();

		javax.persistence.Query jpaQuery = fullTextEntityManager.createFullTextQuery(luceneQuery, Employee.class);

		// execute search
		List<Employee> employeeList = null;
		try {
			employeeList = jpaQuery.getResultList();
		} catch (NoResultException nre) {
			logger.warn("No result found");
		}

		return employeeList;
	}
}
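
To see how the service bean can be consumed, here is a minimal, hypothetical REST controller sketch. The controller class, its package and the /search mapping are not part of the example repository; they only illustrate the injection of the HibernateSearchService.

package at.stefanproell.controller;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import at.stefanproell.model.Employee;
import at.stefanproell.service.HibernateSearchService;

// Hypothetical controller exposing the fuzzy search via HTTP, e.g. GET /search?term=Georgi
@RestController
public class SearchController {

	private final HibernateSearchService searchService;

	@Autowired
	public SearchController(HibernateSearchService searchService) {
		this.searchService = searchService;
	}

	@RequestMapping("/search")
	public List<Employee> search(@RequestParam("term") String term) {
		return searchService.fuzzySearch(term);
	}
}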

In addition to the fuzzy search, the service implementation contains an initialization method, which is used for creating the Lucene index on the filesystem. Before we can test the index, we need to have at least one indexed entity. This can be achieved by simply adding the annotation @Indexed to the POJO.
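
A reduced sketch of the annotated entity could look like the snippet below. Only the search-relevant parts are shown; the @Field placement on the first and last name matches the fields queried by the fuzzy search above, and the analyze and store settings are common defaults rather than values taken from the repository.

package at.stefanproell.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.Store;

// @Indexed puts the entity into the Lucene index, @Field marks the searchable columns
@Indexed
@Entity(name = "employees")
public class Employee {

	@Id
	@Column(name = "emp_no")
	private int employee_id;

	@Field(analyze = Analyze.YES, store = Store.NO)
	@Column(name = "first_name")
	private String firstname;

	@Field(analyze = Analyze.YES, store = Store.NO)
	@Column(name = "last_name")
	private String lastname;

	// remaining columns, relations, setters and getters as shown above
}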

Hibernate Search and Spring Boot: Building Bridges

Hibernate Search is a very convenient way of storing database content in a Lucene index and adding full-text search capabilities to data driven projects simply by annotating classes. It can be easily integrated into Spring Boot applications and, as long as only the basic features are used, it works out of the box. The fun starts when the auto-configuration cannot figure out how to configure things properly; then it gets tricky quite quickly. Of course this is natural behaviour, but one gets spoiled rather fast.

Using the latest Features: Hibernate ORM, Hibernate Search and Spring Boot

The current version of Spring Boot is 1.5.2. This version uses Hibernate ORM 5.0. The latest stable Hibernate Search versions are 5.6.1.Final and 5.7.0.Final, which in contrast require Hibernate ORM 5.1 and 5.2 respectively. Also, you need Java 8 now. So if you need the latest Hibernate Search features in combination with Spring Boot, you need to adapt the dependencies as follows:

buildscript {
	ext {
		springBootVersion = '1.5.1.RELEASE'
	}
	repositories {
		mavenCentral()
	}
	dependencies {
		classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
	}
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'

jar {
	baseName = 'SearchaRoo'
	version = '0.0.1-SNAPSHOT'
}

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
	mavenCentral()
}


dependencies {
	compile('org.springframework.boot:spring-boot-starter-data-jpa')
	compile('org.springframework.boot:spring-boot-starter-web')
	testCompile('org.springframework.boot:spring-boot-starter-test')
	compile("mysql:mysql-connector-java")
	compile('org.apache.commons:commons-lang3:3.5')
	compile('org.springframework.boot:spring-boot-starter-test')
	compile('org.springframework.boot:spring-boot-starter-logging')
	compile('org.springframework.boot:spring-boot-starter-freemarker')

	// Hibernate Search
	compile("org.hibernate:hibernate-core:5.2.8.Final")
	compile("org.hibernate:hibernate-search-orm:5.7.0.Final")
}

configurations.all {
	exclude group: "org.hibernate", module: "hibernate-entitymanager"
}


Note that the Hibernate Entity Manager needs to be excluded, because it has been integrated into the core in the new Hibernate version. Details are given in the [Spring Boot documentation][1].

Enforcing the Dependencies to be Loaded in the Correct Sequence

As written earlier, Spring Boot takes care of a lot of configuration for us. Most of the time, this works perfectly and reduces the pain of configuring a new application manually. In some particular cases, however, Spring cannot figure out that there exists a dependency between different beans which needs to be resolved in a specified order. A typical use case is the implementation of FieldBridges for Hibernate Search. FieldBridges translate between the actual object from the Java world and the representation of such an object in the Lucene index. Typically an [EnumBridge][2] is used for indexing enums, which are often used for realizing internationalization (I18n).

When the Lucene index is created, Hibernate Search checks if enum fields need to be indexed and if there exists a bridge that converts between the object and the actual record in the index. The problem here is that Hibernate JPA is loaded at a very early stage of the Spring Boot startup process. The problem only arises if the bridge class utilises @Autowired fields which get injected. Typically, these fields would get injected when the AnnotationBeanConfigurerAspect bean is loaded. Hibernate creates the session with the session factory auto-configuration before the Spring configurer aspect bean is loaded. So the FieldBridge used by Hibernate during the initialization of the index does not have the service injected yet, causing a nasty NullPointerException.

Example EnumBridge

The following EnumBridge example utilises an injected Service, which needs to be available before Hibernate starts. If not taken care of, this causes a Null Pointer Exception.

@Configurable
public class HibernateSearchEnumBridgeExample extends EnumBridge {

	private I18nMessageService i18nMessageService;

	@Autowired
	@Required
	public void setI18nMessageService(I18nMessageService service) {
		this.i18nMessageService = service;
	}

	@Override
	public String objectToString(Object object) {
		return i18nMessageService.getMessageForEnum(object);
	}

	@Override
	public Enum<? extends Enum> stringToObject(String name) {
		// Delegate to the parent EnumBridge, which knows the concrete enum class
		return super.stringToObject(name);
	}

	@Override
	public void setAppliedOnType(Class<?> returnType) {
		super.setAppliedOnType(returnType);
	}
}

Enforce Loading the Aspect Configurer Before the Session Factory

In order to enforce that the AnnotationBeanConfigurerAspect is created before the Hibernate session factory, we simply implement our own HibernateJpaAutoConfiguration by extension and add the AnnotationBeanConfigurerAspect to the constructor. Spring Boot now knows that it needs to instantiate the AnnotationBeanConfigurerAspect before it can instantiate the HibernateJpaAutoConfiguration, and we then have wired beans ready for consumption by the bridge. I found the correct hint [here][3] and [here][4].

@Configuration
public class HibernateSearchConfig extends HibernateJpaAutoConfiguration {

	public HibernateSearchConfig(DataSource dataSource, JpaProperties jpaProperties,
				AnnotationBeanConfigurerAspect beanConfigurerAspect,
				ObjectProvider<JtaTransactionManager> jtaTransactionManager,
				ObjectProvider<TransactionManagerCustomizers> transactionManagerCustomizers) {

			super(dataSource, jpaProperties, jtaTransactionManager, transactionManagerCustomizers);
	}
}


As it turned out, using @DependsOn annotations did not work, and ordering the precedence of the beans with @Order was not sufficient either. With this little hack, we can ensure the correct sequence of initialization.

<div class="twttr_buttons">
  <div class="twttr_twitter">
    <a href="http://twitter.com/share?text=Hibernate+Search+and+Spring+Boot%3A+Building+Bridges" class="twitter-share-button" data-via="" data-hashtags=""  data-size="default" data-url="https://blog.stefanproell.at/2017/03/10/hibernate-search-and-spring-boot/"  data-related="" target="_blank">Tweet</a>
  </div>
  
  <div class="twttr_followme">
    <a href="https://twitter.com/@stefanproell" class="twitter-follow-button" data-show-count="true" data-size="default"  data-show-screen-name="false"  target="_blank">Follow me</a>
  </div>
</div>

 [1]: http://docs.spring.io/spring-boot/docs/1.5.2.RELEASE/reference/htmlsingle/
 [2]: https://docs.jboss.org/hibernate/search/5.7/api/
 [3]: http://blog.piotrturski.net/2014/05/spring-boot-dependson-is-not-enough.html
 [4]: https://hibernate.atlassian.net/browse/HSEARCH-1316

Deploying MySQL in a Local Development Environment

Installing MySQL via apt-get is a simple task, but the migration between different MySQL versions requires planning and testing. Thus, installing one central instance of the database system might not be suitable when the version of MySQL or project specific settings need to be switched quickly without interfering with other applications; a single central instance can quickly become cumbersome. In this article, I will describe how any number of MySQL instances can be stored and executed from within a user's home directory.

Adapting MySQL Data and Log File Locations

Some scenarios might require running several MySQL instances at once; other scenarios cover sensitive data, where we do not want MySQL to write any data to non-encrypted partitions. This is especially true for devices which can easily get stolen, for instance laptops. If you use a laptop for developing your applications from time to time, chances are good that you need to store sensitive data in a database, and that data needs to be protected when at rest.

This can be solved with full disk encryption, but this technique has several disadvantages. First of all, full disk encryption only utilises one password. This entails that several users who share a device need to share one password, which weakens this approach. Also, when the system needs to be rebooted, full disk encryption can become an obstacle, which increases the complexity further.

Much easier to use is transparent home directory encryption, which can be selected out of the box during many modern Linux setup procedures. We will use this encryption type for this article, as it is reasonably secure and easy to set up. Our goal is to store all MySQL related data in the home directory and run MySQL with normal user privileges.

Creating the Directory Structure

The first step is creating a directory structure for storing the data. In this example, the user name is stefan, please adapt to your needs.

Flashing a NanoPc T3 with DietPi

The NanoPc T3 is a 64 bit octa core single board computer, quite similar to the famous Raspberry Pi boards. It is also often referred to as the NanoPi T3.

Hardware Specification

The single board computer has eight cores with up to 1.4GHz and 1 GB of DDR3 RAM. It has a lot of nice interfaces; the specification below is taken from [here][1].

SoC: Samsung S5P6818 Octa-Core Cortex-A53, 400M Hz - 1.4G Hz
    Power Management Unit: AXP228 PMU, it supports software power-off and wake-up.
    System Memory: 1GB/2GB 32bit DDR3 RAM
    Storage: 1 x SD Card Socket
    Ethernet: Gbit Ethernet(RTL8211E)
    WiFi: 802.11b/g/n
    Bluetooth: 4.0 dual mode
    Antenna: Porcelain Antenna IPX Interface
    eMMC: 8GB
    Video Input: DVP Camera/MIPI-CSI (two camera interfaces)
    Video Output: HDMI Type-A / LVDS / Parallel RGB-LCD / MIPI-DSI (four video output interfaces)
    Audio: 3.5 mm audio jack / via HDMI
    Microphone: onboard Microphone
    USB Host: 4 x USB 2.0 Host, two type A ports and two 2.54 mm pitch pin-headers
    MicroUSB: 1 x MicroUSB 2.0 Client, Type A
    LCD Interface: 0.5mm pitch 45 pin FPC seat, full color RGB 8-8-8
    HDMI: 1.4A Type A, 1080P
    DVP Camera: 0.5mm pitch 24 pin FPC seat
    GPIO: 2.54 mm pitch 30 pin-header
    Serial Debug Port: 2.54mm pitch 4-pin-header
    User Key: K1 (power), Reset
    LED: 1 x power LED and 2 x GPIO LED
    Other Resources: CPU’s internal TMU
    RTC Battery: RTC Battery Seat
    Heat Sink: 1 x Heat Sink with mounting holes
    Power: DC 5V/2A
    PCB: Six Layer
    Dimension: 100 mm x 60 mm
    OS/Software: uboot, Android and Debian

Overview

The device offers quite a lot considering its small size. The overview picture below is taken from [here][2].

The device with the heat sink and attached cables is shown below.


Comparison with the Raspberry Pi Model 3B

It costs about twice as much as the Raspberry Pi 3, but comes with eight cores at 1.4GHz instead of four cores at 1.2GHz, Gbit Ethernet instead of just 100 MBit and several additional interfaces. It has a dedicated power switch, supports soft poweroff and provides reset and boot buttons. It comes with an SD card slot instead of micro SD, and has only two standard USB ports but also one micro USB port. This port however is not for powering the device, but only for data.

Some Remarks First

The board can get quite warm, so I would recommend buying the heat sink that fits directly on the board as well. The wifi signal is also rather weak, so I would recommend investing in the external antenna if the device is in an area with low signal reception. Also, it requires an external 5V power source and does not provide a micro USB port for power like similar boards use.

Buying and Additional Information

The board can be obtained for $60 from [here][5] and there also exists a [wiki page][1] dedicated to the T3. The images are stored at a one-click file hoster and the download is very slow. Also, the files are not that well organized and can easily be confused with those for other platforms offered by the same company.

  • Nano PC T3 ($60)
  • Heat sink ($1.99)
  • Power supply ($20)
  • SD card (~ $10)

Additionally, there is shipping ($20 to Europe) and very likely some customs duty to pay.

Initial Setup

The NanoPi T3 has an internal eMMC storage with 8GB capacity. It comes pre-installed with Android, which is not really useful for my applications. Instead, there exist different ISO images which can be obtained here. The wiki page documents how to create bootable SD cards with Windows and Linux and there are also scripts offered which automate the process. Unfortunately, the scripts are not well documented and some of the links are already broken, which reduces the usability of the provided information. Also, as the images have to be downloaded from a share hoster, there is no way of verifying what kind of image you actually obtained. This is a security risk and not acceptable in many scenarios. Fortunately, there also exist alternative images which are more transparent to use.

By default, the device boots from the eMMC flash storage. By pressing the boot button in the lower right corner, we can also boot from the SD card. This is a nice feature, but if we want to reboot the system unattended, we need to replace the default operating system. In the course of this article, we are going to write an alternative Debian image to the flash memory and boot this OS automatically.

DietPi

[DietPi][6] is a Debian based distribution, which claims to be an optimized and lightweight alternative for single board PCs. The number of supported devices is impressive and luckily, the NanoPC T3 is also in the list. It also comes with a list of nice features for the configuration and the backup of the system. DietPi can be downloaded [here][7] and the documentation is available [here][8].

The following steps are required:

  1. Download the DietPi Image
  2. Write the image to the SD card
  3. Mount the SD card on your desktop and copy the DietPi image to the card
  4. Boot the NanoPC T3 from the card
  5. Flash the DietPi image to the eMMC
  6. Reboot
  7. Configure

Creating a Bootable SD Card

The first step involves creating a bootable SD card by writing the DietPi image with dd to the card. To do so, download the DietPi image to your local desktop and then write the file with dd. The process does not differ from other single board machines and is described [here][9]. The next step might seem a bit odd: after you have finished writing the SD card, mount it on your local desktop and copy the DietPi image to the tmp directory of the SD card. The reason we do this is that we need a running Linux system so that we can flash the integrated eMMC of the T3. We then use the DietPi Linux to actually flash the eMMC of the T3, also with the DietPi image. By copying the image we save some download time and have the image available right away in the next step.

Boot the SD Card

Make sure the T3 is powered off and insert the SD card into the board. Press and hold the boot button and flip the power switch. The T3 should then boot into the DietPi system. It is easier if you attach a monitor and a keyboard to the system for the further configuration. Alternatively, you can also configure the networking settings in advance by mounting the SD card on the desktop and editing the configuration files there, but as we only use this system for installing the actual operating system, this might be too much effort. Press CTRL+ALT+F2 to switch to a new TTY and log in. The standard login for the DietPi system is the user root with the password dietpi.

First, create a backup of the original eMMC content, just in case anything goes wrong. Use fdisk to see the available drives.

Getting Familiar with Eclipse Again: Git Integration in Comparison with IntelliJ IDEA

Eclipse and IntelliJ are both great Java IDEs, each with their own communities, advantages and disadvantages. After having spent a few years with JetBrains IntelliJ Community Edition, I got accustomed to the tight and clean Git integration in the user interface. Now that I consider switching back to Eclipse, I stumbled over a few things that I try to describe in this post.

IntelliJ and Eclipse Handle Project Structures Differently

Eclipse utilises a workspace concept, which allows working on several projects at the same time. IntelliJ in contrast allows only one open project and organizes substructures in modules. A comparison of these concepts can be found here. These two different viewpoints have effects on the way Git is integrated into the workflow.

Sharing Projects

The different views on project structures of both IDEs imply that Git repositories are also treated differently. While IntelliJ utilises the root of a repository directly, Eclipse introduces a subfolder for the project. This leads to the interesting observation that importing a project from Git back into an Eclipse workspace requires a small adaptation in order to let Eclipse recognize the structure again.

A Small Workflow

In order to get familiar again with Eclipse, I created a small test project, which I then shared by pushing it to a Git repository. I then deleted the project and tried to re-import it.

Step 1

Create a new test project, in this case a Spring Boot project which works with Maven. Note that the new project is stored in the Eclipse workspace.

Step 2

As the second step, we create a new repository. Log into Github or your Gitlab instance and create a new project. Initialize it so that you have a master branch ready and copy the URL of the repository. We then add this repository by opening the Git Repository perspective in Eclipse and adding the repository there. You can provide a default location for your local repositories in the Eclipse -> Team -> Git preferences. In the Git Repository perspective, you can then see the path of the local storage location and some information about the repository, for instance that the local and the remote master branch are identical (they have the same commit hash). Note that the Git path is different from your workspace project path.

Step 3

We now have a fresh Java Maven based project in our Eclipse workspace and an empty Git repository in the default Git location, which is a different directory. What we need to do next is to share the source code by moving it into the Git storage location and adding it to the Git index. Eclipse can help us with that, by using the Team -> Share menu from the Project Explorer view when right clicking on the project.

Step 4

In the next step, we need to specify the Git repository we want to push our code to. This is an easy step: simply select the repository we just created from the drop down menu. In the dialogue you can see that the content of the current project location on the left side will be moved to the right side. Eclipse creates a new subfolder within the repository for our project. This is something that IntelliJ would not do.

In this step, Eclipse separates the local and custom project metadata from the actual source code, which is a good thing.

Step 5

In the fifth step, we simply apply some changes and commit and push them to the remote repository using the Git staging window.

After this step, the changes are visible in the remote repository as well and available to collaborators. In order to simulate someone who is going to check out our little project from Gitlab, we delete the project locally and also remove the Git repository.

Step 6

Now we start off with a clean workspace and clone the project from Gitlab. In the git repositories window, we can select clone project and provide the same URL again.

Step 7

In the next screen, we select the local destination for the cloned project. This could be for instance the default directory for Git projects or any other location on your disk. Confirm this dialogue.

Step 8

Now comes the tricky part, which did not work as expected in Eclipse Neon 4.6.1. Usually, one would tell Eclipse that the cloned project is a Maven project, and it should detect the pom.xml file and download the dependencies. To do so, we would select Import -> Git -> Projects from Git and clone the repository from this dialogue. Then, as a next step, we would select the Configure -> Convert to Maven Project option, but Eclipse does not seem to recognize the Maven structure. It would only show the files and directories, but not consider the Maven dependencies specified in the pom.xml file.

What happens is that Eclipse tries to add a new pom.xml file and ignores the actual one.

Of course this is a problem and does not work.

Step 9 – Solution

Instead of using the method above, just clone the repository from the Git Repository perspective and then go back to the Project Explorer. Now, instead of importing the project via the Git menu, choose the existing Maven project import and select the path of the Git repository we cloned before.

And in the next dialogue, specify the path:

As you can see, now Eclipse found the correct pom.xml file and provides the correct dependencies and structure!

Conclusion

Which IDE you prefer is a matter of taste and habit. Both environments provide a lot of features for developers and differ in how these features are implemented. With this short article, we tried to understand some basic implications of the two philosophies of how Eclipse and IntelliJ handle project structures. Once we anticipate these differences, it becomes easy to work with both of them.