Sollsteinhaus

The Sollsteinhaus sits above Hochzirl and is very easy to reach by public transport. The S5 takes you from Innsbruck's main station or the Westbahnhof to the Hochzirl station in a good quarter of an hour. Coming from Innsbruck, you can walk straight through the gate at the (train's) end of the northern (mountain-side) platform and find yourself already on the well signposted trail 213 to the Sollsteinhaus. At first the path leads gently through the forest, and after a few minutes you reach a forest road, which turns out to be rather steep and tiring. You follow this forest road for about 1.5 hours along the very good signage until you reach the material ropeway of the Sollsteinhaus. After another half hour you reach the private Solnalm, which should not be mistaken for the destination. From there the route continues along narrow paths, through a stream bed and up a few switchbacks to the Sollsteinhaus at 1805 m above sea level. Unfortunately, the Sollsteinhaus has been under renovation since 25.09.2016 (as of 27.09.2016) and is therefore closed. The hike is also well described on the Almenrausch portal. The total walking time to the Sollsteinhaus was about 2.5 hours.

Create a Category Page for one Specific Category and Exclude this Category from the Main Page in WordPress

This blog has served as my digital notebook for more than eight years and I use it to collect all sorts of things that I think are worth storing and sharing. Mainly, I blog about tiny technical bits, but recently I also started to write about my life here in Innsbruck, where I try to discover what this small city and its surroundings have to offer. The technical articles are written in English, as naturally the majority of visitors understand this language. The local posts are in German for the same reason. My intention was to separate these two topics in the blog and not let the posts create any clutter between the languages.

Child Themes

When tinkering with the code of your WordPress blog, it is strongly recommended to deploy and use a child theme. This allows you to reverse changes easily and, more importantly, to update the theme without having to re-implement your adaptations after each update. Creating a child theme is very easy and described here. In addition, I would recommend using some sort of version control tool, such as Git.
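As a minimal sketch of what such a child theme looks like on disk (assuming the parent theme is dazzling, as used in the page template later in this post), it is essentially a folder containing a style.css header that names the parent theme via the Template field, plus the functions.php shown in the next section:

# Hypothetical scaffold for a child theme of the "dazzling" parent theme
cd wp-content/themes
mkdir dazzling-child
cat > dazzling-child/style.css <<'EOF'
/*
 Theme Name: Dazzling Child
 Template:   dazzling
*/
EOF
# The child theme can then be activated in the dashboard under Appearance > Themes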

Excluding a Category from the Main Page

WordPress offers user defined categories out of the box and category pages for each category. This model does not fit well for my blog, where I have static pages and a time series of blog posts on the main page. In order to prevent the posts about Innsbruck from showing up on the main page, the category ‘Innsbruck’ needs to be excluded. We can create or modify the file functions.php in the child theme folder and add the following code.

<?php
add_action( 'wp_enqueue_scripts', 'theme_enqueue_styles' );
function theme_enqueue_styles() {
        wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );

}

// Exclude Innsbruck Category from Main Page
function exclude_category($query) {
     if ( $query->is_home() ) {
         // Get the category ID of the category Innsbruck
         $innsbruckCategory = get_cat_ID( 'Innsbruck' );
         // Add a minus in front of the string
         $query->set('cat','-' . $innsbruckCategory);

      }
     return $query;
}
add_filter('pre_get_posts', 'exclude_category');

?>

This adds a filter which gets executed before the posts are collected. We omit all posts of the category Innsbruck by adding a minus as prefix of the category ID. Of course, you could also look up the category ID in the administration dashboard by hovering with your mouse over the category name, and thereby save one database query.

A Custom Page Specific for one Category

In the second step, create a new page in the dashboard. This page will contain all posts of the Innsbruck category that we will publish. Create the file page.php in your child theme folder and use the following code:

<?php
/**
 * The template for displaying all pages.
 *
 * This is the template that displays all pages by default.
 * Please note that this is the WordPress construct of pages
 * and that other 'pages' on your WordPress site will use a
 * different template.
 *
 * @package dazzling
 */

    get_header();
?>
    <div id="primary" class="content-area col-sm-12 col-md-8">
        <main id="main" class="site-main" role="main">

<?php
     // Specify the arguments for the post query
     $args = array(
        'cat' => '91', // Innsbruck category id
        'post_type' => 'post',
        'posts_per_page' => 5,
        'paged' => ( get_query_var('paged') ? get_query_var('paged') : 1),
    );

    if( is_page( 'innsbruck' )) {
        query_posts($args);
    }
?>

<?php while ( have_posts() ) : the_post(); ?>
    <?php get_template_part( 'content', 'post' ); ?>
    <?php
        // If comments are open or we have at least one comment, load up the comment template
        if ( comments_open() || '0' != get_comments_number() ) :
            comments_template();
        endif;
    ?>
<?php endwhile; // end of the loop. ?>


<div class="navigation">
    <div class="alignleft">&lt;?php next_posts_link('&laquo; Ältere Beiträge') ?&gt;&lt;/div&gt;
    <div class="alignright">&lt;?php previous_posts_link('Neuere Beiträge &raquo;') ?&gt;&lt;/div&gt;
</div>


    </main><!-- #main -->
</div><!-- #primary -->
<?php get_sidebar(); ?>
<?php get_footer(); ?>

In this code snippet, we define a set of arguments which are used for filtering the posts of the desired category. In this example, I used the ID of the Innsbruck category (91) directly. We define that we want to display posts only, 5 per page. An important aspect is the pagination. When we only display posts of one category, we need to make sure that WordPress counts the pages correctly. Otherwise the page would always display the same posts, regardless of how often the user clicks on the next page button. The reason is that this button uses the global paged variable, which is set correctly in the example above.

The if conditional makes sure that the post query is only modified when the Innsbruck page is displayed. The while loop then iterates over all posts and displays them. At the bottom we can see the navigation links for older and newer posts of the Innsbruck category.

Persistent Data in a MySQL Docker Container

Running MySQL in Docker

In a recent article on Docker in this blog, we presented some basics for dealing with data in containers. This article will present another popular application for Docker: MySQL containers. Running MySQL instances in Docker allows isolating database infrastructure with ease.

Connecting to the Standard MySQL Container

The description of the MySQL Docker image provides a lot of useful information on how to launch and connect to a MySQL container. The first step is to create a standard MySQL container from the latest available image.

sudo docker run \
   --name=mysql-instance \
   -e MYSQL_ROOT_PASSWORD=secret \
   -p 3307:3306 \
   -d \
   mysql:latest

This creates a MySQL container where the root password is set to secret. As the host is already running its own MySQL instance (which has nothing to do with this Docker example), the standard port 3306 is already taken. Thus we publish port 3307 on the host system and forward it to the standard port 3306 inside the container.
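To verify that the mapping is in place, Docker can list the published ports of the container:

# Show the port mappings of the mysql-instance container
sudo docker port mysql-instance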

Connect from the Host

We can then connect from the command line like this:

mysql -uroot -psecret -h 127.0.0.1 -P3307

We could also provide the hostname localhost for connecting to the container, but as the MySQL client by default assumes that a connection to localhost goes through a socket, this would not work. Thus when using the hostname localhost, we need to specify the protocol TCP, so that the client connects via the network interface.

mysql -uroot -psecret -h localhost --protocol TCP -P3307

Connect from other Containers

Connecting from a different container to the MySQL container is pretty straightforward. Docker allows you to link two containers and then use the exposed ports between them. The following command creates a new Ubuntu container and links it to the MySQL container.

sudo docker run -it --name ubuntu-container --link mysql-instance:mysql-link ubuntu:16.10 bash

After this command, you are in the terminal of the Ubuntu container. We then need to install the MySQL client for testing:

# Fetch the package list
root@7a44b3e7b088:/# apt-get update
# Install the client
root@7a44b3e7b088:/# apt-get install mysql-client
# Show environment variables
root@7a44b3e7b088:/# env

The last command gives you a list of environment variables, among which is the IP address and port of the MySQL container.

MYSQL_LINK_NAME=/ubuntu-container/mysql-link
HOSTNAME=7a44b3e7b088
TERM=xterm
MYSQL_LINK_ENV_MYSQL_VERSION=5.7.14-1debian8
MYSQL_LINK_PORT=tcp://172.17.0.2:3306
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYSQL_LINK_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_LINK_PORT_3306_TCP=tcp://172.17.0.2:3306
PWD=/
MYSQL_LINK_PORT_3306_TCP_PORT=3306
SHLVL=1
HOME=/root
MYSQL_LINK_ENV_MYSQL_MAJOR=5.7
MYSQL_LINK_PORT_3306_TCP_PROTO=tcp
MYSQL_LINK_ENV_GOSU_VERSION=1.7
MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD=secret
_=/usr/bin/env

You can then connect either manually or by providing the environment variables:

mysql -uroot -psecret -h 172.17.0.2
mysql -uroot -p$MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD -h $MYSQL_LINK_PORT_3306_TCP_ADDR -P $MYSQL_LINK_PORT_3306_TCP_PORT

If you only require a MySQL client inside a container, simply use the MySQL image from Docker Hub, as sketched below. Batteries included!
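This reuses the link alias mysql-link and the environment variables shown above; it mirrors the client-only pattern documented for the official image:

# Throw-away container that only runs the mysql client and connects to the linked instance
sudo docker run -it --rm --link mysql-instance:mysql-link mysql:latest \
   sh -c 'exec mysql -h"$MYSQL_LINK_PORT_3306_TCP_ADDR" -P"$MYSQL_LINK_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD"'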

Persistent Docker Containers

Docker Fundamentals

Docker has become a very popular tool for orchestrating services. Docker is much more lightweight than virtual machines; for instance, containers do not require a boot process. Docker follows the philosophy that one container serves only one process. So in contrast to virtual machines, which often bundle several services together, Docker is built for running a single service per container. If you come from the world of virtualised machines, Docker can be a bit confusing in the beginning, because it uses its own terminology. A good starting point is, as always, the documentation, and there are plenty of great tutorials out there.

Images and Containers

Docker images serve as templates for the containers. As images and containers both have hexadecimal ids, they are very easy to confuse. The following example shows step by step how to create a new container based on the Debian image and how to open shell access.

# Create a new docker container based on the debian image
sudo docker create -t --name debian-test debian:stable 
# Start the container
sudo docker start  debian-test
# Check if the container is running
sudo docker ps -a
# Execute bash to get an interactive shell
sudo docker exec -i -t debian-test bash

A shorter variant of creating and launching a new container is listed below. The run command creates a new container and starts it automatically. The command run is in particular tricky, as you would expect it to only run (i.e. launch) an existing container; in fact, it creates a new one every time and starts it. Assigning a container name therefore helps with not confusing the image with the container.

sudo docker run -it --name debian-test debian:stable bash

Important Commands

The following listing shows the most important commands:

# Show container status
sudo docker ps -a
# List available images
sudo docker images 
# Start or stop a container
sudo docker start CONTAINERNAME
sudo docker stop CONTAINERNAME
# Delete a container
sudo docker rm CONTAINERNAME

You can of course create your own images, which will not be discussed in this blog post. It is just important to know that you can’t move containers from your host to some other machine directly. You would need to commit the changes made to the image and create a new container based on that image. Please be aware that this does not include the actual data stored in that container! You need to manually export any data and files from the original container and import them into the new container again. This is another trap worth noting. You can, however, also mount data in the image, if the data is available at the host at the time of image creation. Details on data in containers can be found here.
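As a rough sketch (the file paths and the image name are only placeholders), committing an image and copying data out manually could look like this:

# Persist the container's filesystem changes as a new image (data in mounted volumes is not included)
sudo docker commit debian-test my-debian-snapshot
# Copy a file out of the old container to the host, and into another container again
sudo docker cp debian-test:/root/important.conf ./important.conf
sudo docker cp ./important.conf another-container:/root/important.conf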

Persisting Data Across Containers

The way Docker persists data takes some getting used to in the beginning, especially as it is easy to confuse images with containers. Remember that Docker images serve only as templates. So when you issue the command sudo docker run …  this actually creates a container from an image first and then starts it. So whenever you issue this command again, you will end up with a new container which does not share any data with the previously created container.

Docker 1.9 introduced named data volumes, which can be created independently of containers and mounted into several containers. Such volumes can be used for persisting data. The following listing shows how to create a data volume and mount it in a container.

# Create a data volume
sudo docker volume create --name data-volume-test
# List all volumes
sudo docker volume ls
# Delete the container
sudo docker rm debian-test
# Create a new container, now with the data volume 
sudo docker create -v data-volume-test:/test-data -t --name debian-test debian:stable
# Start the container
sudo docker start debian-test
# Get the shell
sudo docker exec -i -t debian-test bash

After logging into the shell, we can see the data volume mounted at the directory test-data:

root@d4ac8c89437f:/# ls -la
total 76
drwxr-xr-x  28 root root 4096 Aug  3 13:11 .
drwxr-xr-x  28 root root 4096 Aug  3 13:11 ..
-rwxr-xr-x   1 root root    0 Aug  3 13:10 .dockerenv
drwxr-xr-x   2 root root 4096 Jul 27 20:03 bin
drwxr-xr-x   2 root root 4096 May 30 04:18 boot
drwxr-xr-x   5 root root  380 Aug  3 13:11 dev
drwxr-xr-x  41 root root 4096 Aug  3 13:10 etc
drwxr-xr-x   2 root root 4096 May 30 04:18 home
drwxr-xr-x   9 root root 4096 Nov 27  2014 lib
drwxr-xr-x   2 root root 4096 Jul 27 20:02 lib64
drwxr-xr-x   2 root root 4096 Jul 27 20:02 media
drwxr-xr-x   2 root root 4096 Jul 27 20:02 mnt
drwxr-xr-x   2 root root 4096 Jul 27 20:02 opt
dr-xr-xr-x 267 root root    0 Aug  3 13:11 proc
drwx------   2 root root 4096 Jul 27 20:02 root
drwxr-xr-x   3 root root 4096 Jul 27 20:02 run
drwxr-xr-x   2 root root 4096 Jul 27 20:03 sbin
drwxr-xr-x   2 root root 4096 Jul 27 20:02 srv
dr-xr-xr-x  13 root root    0 Aug  3 13:11 sys
drwxr-xr-x   2 root root 4096 Aug  3 08:26 test-data
drwxrwxrwt   2 root root 4096 Jul 27 20:03 tmp
drwxr-xr-x  10 root root 4096 Jul 27 20:02 usr
drwxr-xr-x  11 root root 4096 Jul 27 20:02 var


We can navigate into that folder and create a 100 MB file with random data.

root@d4ac8c89437f:~# cd /test-data/
root@d4ac8c89437f:/test-data# dd if=/dev/urandom of=100M.dat bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 6.69175 s, 15.7 MB/s
root@d4ac8c89437f:/test-data# du -h .
101M    .



When we exit the container, we can see the file in the host file system here:

stefan@stefan-desktop:~$ sudo ls -l /var/lib/docker/volumes/data-volume-test/_data
insgesamt 102400
-rw-r--r-- 1 root root 104857600 Aug  3 15:17 100M.dat

We can use this volume transparently in the container, but it does not depend on the container itself. So whenever we have to delete the container or want to use the data with a different container, this solution works perfectly. The following command shows how we mount the same volume in an Ubuntu container and execute the ls command to show the content of the directory.

stefan@stefan-desktop:~$ sudo docker run -it -v data-volume-test:/test-data-from-debian --name ubuntu-test ubuntu:16.10 ls -l /test-data-from-debian
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat

You can display a lot of useful information about a container with the inspect command. It also shows the data volume and where it is mounted.

sudo docker inspect ubuntu-test

...
        "Mounts": [
            {
                "Name": "data-volume-test",
                "Source": "/var/lib/docker/volumes/data-volume-test/_data",
                "Destination": "/test-data-from-debian",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
...


We delete the ubuntu container and create a new one. We then start the container, open a bash session and write some test data into the directory.

stefan@stefan-desktop:~$ sudo docker create -v data-volume-test:/test-data-ubuntu -t --name ubuntu-test ubuntu:16.10
f3893d368e11a32fee9b20079c64494603fc532128179f0c08d10321c8c7a166
stefan@stefan-desktop:~$ sudo docker start ubuntu-test
ubuntu-test
stefan@stefan-desktop:~$ sudo docker exec -it ubuntu-test bash
root@f3893d368e11:/# cd /test-data-ubuntu/
root@f3893d368e11:/test-data-ubuntu# ls
100M.dat
root@f3893d368e11:/test-data-ubuntu# touch ubuntu-writes-a-file.txt



When we check the Debian container, we can immediately see the written file, as the volume is transparently mounted.

stefan@stefan-desktop:~$ sudo docker exec -i -t debian-test ls -l /test-data
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat
-rw-r--r-- 1 root root         0 Aug  3 13:42 ubuntu-writes-a-file.txt

Please be aware that the Docker volume is just a regular folder on the file system. Writing to the same file from both containers can lead to data corruption. Also remember that you can read and write the volume files directly from the host system.

Backups and Migration

Backing up data is also an important aspect when you use named data volumes as shown above. Currently, there is no way of moving Docker containers or volumes natively to a different host. The intention of Docker is to make the creation and destruction of containers very cheap and easy. So you should not get too attached to your containers, because you can re-create them very fast. This of course is not true for the data stored in volumes. So you need to take care of your data yourself, for instance by creating automated backups like sudo tar cvfz Backup-data-volume-test.tar.gz /var/lib/docker/volumes/data-volume-test and restoring the data into a new volume when needed. How to backup volumes using a container is described here.
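A minimal sketch of the container-based variant (the helper image and archive name are just placeholders) could look like this:

# Mount the volume and a host directory into a throw-away container and archive the volume there
sudo docker run --rm -v data-volume-test:/data -v $(pwd):/backup debian:stable \
    tar cvfz /backup/data-volume-test.tar.gz -C /data .
# Restoring works the other way around, by extracting the archive into a (possibly new) volume
sudo docker run --rm -v data-volume-test:/data -v $(pwd):/backup debian:stable \
    tar xvfz /backup/data-volume-test.tar.gz -C /data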

Plotting Colourful Graphs with R, RStudio and Ggplot2

The Aesthetics of Data Science

Data visualization is a powerful tool for communicating results and has recently received more and more attention due to the hype around data science. Integrating a meaningful graph into a paper or your thesis can improve readability and understandability more than any formulas or extended textual descriptions can. There exists a variety of different approaches for visualising data. Recently a lot of new Javascript based frameworks have gained quite some momentum, which can be used in Web applications and apps. A more classical workhorse for data science is the R project and its plotting engine ggplot2. The reason why I decided to stick with R is its popularity and flexibility, which is still impressive. Also, with RStudio there exists a convenient IDE which provides useful features for data scientists.

Plotting Graphs

In this blog post, I demonstrate how to plot time series data and use colours to highlight a specific aspect of the data. Like almost all techniques, R and ggplot2 require practice and training, which I realised again today when I spent quite a bit of time struggling to get a simple plot right.

Currently I am evaluating two systems I developed and I needed to visualize their storage and execution time demands in comparison. My goal was to create a plot for each non-functional property, the execution time and the storage demand, while each plot should depict both systems’ performance. Each system runs a set of operations, think of create, read, update and delete operations (CRUD). For visualizing which of these operations has the most effect on the system, I needed to colourise each operation within one graph. This is the easy part. What was more tricky is to provide a defined set of colours for each graph, which can be mapped to each instance of the variable. Things which have the same meaning in both graphs should be visualized in the same way, which requires a little hack.

Prerequisites

Install the following packages via apt

sudo apt-get install r-base r-recommended r-cran-ggplot2

and RStudio by downloading the deb file from the project homepage.

Evaluation Data

As an example, we plan to evaluate the storage demand of two different systems and compare the results. Consider the following sample data.

# Set seed to get the same random numbers for this example
set.seed(42);
# Generate 200 random data records
N <- 200
# Generate a random, increasing sequence of integers that we assume is the storage demand in some unit
storage1 =sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
storage2 = sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
# Define the operations available and draw a random sample
operationTypes = c('CREATE','READ','UPDATE','DELETE')
operations = sample(operationTypes,N,replace=TRUE)
# Create the dataframe
df <- data.frame(id=1:N, storage1=storage1, storage2=storage2, operations=operations)
df
     id storage1 storage2 operations
1     1       24      238     CREATE
2     2      139     1755     UPDATE
3     3      158     1869     UPDATE
4     4      228     2146       READ
5     5      395     2967     DELETE
6     6      734     3252     CREATE
7     7      789     4049     DELETE
8     8     2909     4109       READ
9     9     3744     4835     CREATE
10   10     3894     4990       READ

....

We created a random data set simulating the characteristics of system measurement data. As you can see, we have a list of operations of the four types CREATE, READ, UPDATE and DELETE and a measurement value for the storage demand in both systems.

The Simple Plot

Plotting two graphs of the columns storage1 and storage2 is straightforward.

# Simple plot
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color="Storage 1")) +
  geom_point(aes(x=id,y=storage2,color="Storage 2")) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  scale_color_manual(values=c("Storage 1"="forestgreen", "Storage 2"="aquamarine"), 
                     name="Measurements", labels=c("System 1", "System 2"))

print(p1)

We assign a colour to each point plot. Note that the colour name "Storage 1", for instance, of course does not denote an actual colour, but assigns a level to all points of the graph. This level can be thought of as a category, which ensures that all points belonging to the same category have the same colour. As you can see in the definition of the colour scale, we assign the actual colour to this level there. This is the result:

Plotting Levels

A common task is to visualise categories or levels of measurement data. In this example, there are four different levels we could observe: CREATE, READ, UPDATE and DELETE.

# Plot with levels
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operations)) +
  geom_point(aes(x=id,y=storage2,color=operations)) +
  ggtitle("Overview of Measurements") +
  labs(color="Measurements") +
  scale_color_manual(values=c("CREATE"="darkgreen", 
                              "READ"="darkolivegreen", 
                              "UPDATE"="forestgreen", 
                              "DELETE"="yellowgreen"))
print(p1)

Instead of assigning two colours, one for each graph, we can also assign colours to the operations. As you can see in the definition of the graphs and the colour scale, we map the colours to the variable operations instead. As a result we get differently coloured points per operation, but we get these of course for both graphs in an identical fashion as the categories are the same for both measurements. The result looks like this:

Now this is obviously not what we want to achieve as we cannot differentiate between the two graphs any more.

Plotting the same Levels for both Graphs in Different Colours

This last part is a bit tricky, as ggplot2 does not allow assigning different colour schemes within one plot. There do exist some hacks for this, but the solution does not improve the readability of the code in my opinion. In order to apply different colour schemes for the two graphs while still using the categories, I appended two extra columns to the data set. If we append some differentiation between the two graphs and basically double the categories from four to eight, where each graph now uses its own four categories, we can also assign distinct colours to them.

df$operationsStorage1 <- paste(df$operations,"-Storage1", sep = '')
df$operationsStorage2 <- paste(df$operations,"-Storage2", sep = '')

p3 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operationsStorage1)) +
  geom_point(aes(x=id,y=storage2,color=operationsStorage2)) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  labs(color="Operations") +
  scale_color_manual(values=c("CREATE-Storage1"="darkgreen", 
                              "READ-Storage1"="darkolivegreen", 
                              "UPDATE-Storage1"="forestgreen", 
                              "DELETE-Storage1"="yellowgreen",
                              "CREATE-Storage2"="aquamarine", 
                              "READ-Storage2"="dodgerblue",
                              "UPDATE-Storage2"="royalblue",
                              "DELETE-Storage2"="turquoise"))
print(p3)

We then assign the new column for each system individually as colour value. This ensures that each graph only considers the categories that we assigned in this step. Thus we can assign a different colour scheme for each graph and print the corresponding colours in the label (legend) next to the chart. This is the result:

Now we can see which operation was used at every measurement and still be able to distinguish between the two systems.

From the Hungerburg to the Umbrüggler Alm, the Höttinger Alm and the Seegrube

Starting from the hiking hub Hungerburg, the signposted trails to the Umbrüggler Alm and the Arzler Alm start right behind the valley station.

If you keep to the left, you reach the Umbrüggler Alm after only 30 minutes (description here). Following the signs further, you reach the Höttinger Alm (1487 m) after another hour via forest roads, a short stretch of woods and the ski slope. The final ascent, at a calm pace past the cattle up to the hut, turns out to be almost a little malicious: the last metres of altitude drag on, since the hut is already clearly in view.

After a refreshment, the route continues along the forest road towards the northeast. You follow the signs to the Bodensteinalm and traverse the slope almost without any climbing until you reach a fork. Keeping uphill, you reach the Bodensteinalm (1661 m) along a forest road after about 200 metres of altitude gain. From there you can either follow the forest road up to the Seegrube, or struggle up the steep trail below the Seegrubenbahn to its top station. From the Höttinger Alm to the top station of the Seegrubenbahn (1905 m) you need about one and a half hours. You are rewarded with a fantastic view over Innsbruck and the Inn valley, as well as of the countless tourists in flip-flops and silk blouses. Join them and glide comfortably back down into the valley. A further description of the tour can be found here.

Timelapse Photography with the Camera Module V2 and a Raspberry Pi Model B

Recently, I bought a camera module for the Raspberry Pi and experimented a little bit with the possibilities a scriptable camera provides. The new Camera Module V2 offers 8.08 MP from a Sony sensor and can be controlled with a well documented Python library. It allows taking HD videos and shooting still images. Assembly is easy, but the camera is attached with a rather short ribbon cable, which renders the handling a bit cumbersome. For the moment, a modified extra hand from my soldering kit acts as a makeshift stand.

Initial Setup

The initial setup is easy and just requires a few steps, which is not surprising because most of the documentation is targeted at kids in order to encourage their inner nerd. Still works for me as well 🙂

Attach the cable to the Raspberry Pi as described here. You can also buy an adapter for the Pi Zero. Once the camera is wired to the board, activate the module with the tool raspi-config.

Then you are ready to install the Python library with sudo apt-get install python3-picamera, add your user to the video group with usermod -a -G video USERNAME and then reboot the Raspberry. After you have logged in again, you can start taking still images with the simple command raspistill -o output.jpg. You can find some more documentation and usage examples here.
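Recording a short HD video clip works in a very similar way from the command line; the parameters below are just an example:

# Record ten seconds (10000 ms) of 720p video with the raspivid tool
raspivid -o test.h264 -t 10000 -w 1280 -h 720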

Timelapse Photography

What I really enjoy is making timelapse videos with the Raspberry Pi, which gives a nice effect for everyday phenomena and allows observing processes which are usually too slow to follow. The following Gif shows a melting ice cube. I took one picture every five seconds.

A Small Python Script

The following script creates a series of pictures with a defined interval and stores all images with a filename indicating the time of shooting in a folder. It is rather self-explanatory. The camera needs a little bit of time to adjust, so we set the adjustTime variable to 5 seconds. We then take a picture every 300 seconds; each image has a resolution of 1024×768 pixels.

import os
import time
import picamera
from datetime import datetime

# Grab the current datetime which will be used to generate dynamic folder names
d = datetime.now()
initYear = "%04d" % (d.year)
initMonth = "%02d" % (d.month)
initDate = "%02d" % (d.day)
initHour = "%02d" % (d.hour)
initMins = "%02d" % (d.minute)
initSecs = "%02d" % (d.second)

folderToSave = "timelapse_" + str(initYear) + str(initMonth) + str(initDate) +"_"+ str(initHour) + str(initMins)
os.mkdir(folderToSave)

# Set the initial serial for saved images to 1
fileSerial = 1

# Create and configure the camera
adjustTime=5
pauseBetweenShots=300

# Create and configure the camera
with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    #camera.exposure_compensation = 5

    # Start the preview and give the camera a couple of seconds to adjust
    camera.start_preview()
    try:
        time.sleep(adjustTime)

        start = time.time()
        while True:
            d = datetime.now()
            # Set FileSerialNumber to 000X using four digits
            fileSerialNumber = "%04d" % (fileSerial)

            # Capture the CURRENT time (not start time as set above) to insert into each capture image filename
            hour = "%02d" % (d.hour)
            mins = "%02d" % (d.minute)
            secs = "%02d" % (d.second)
            camera.capture(str(folderToSave) + "/" + str(fileSerialNumber) + "_" + str(hour) + str(mins) + str(secs) + ".jpg")

            # Increment the fileSerial
            fileSerial += 1
            time.sleep(pauseBetweenShots)

    except KeyboardInterrupt:
        print ('interrupted!')
        # Stop the preview and close the camera
        camera.stop_preview()

finish = time.time()
print("Captured %d images in %d seconds" % (fileSerial,finish - start))

This script then can run unattended and it creates a batch of images on the Raspberry Pi.
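To keep the capture running after closing an SSH session, the script (here assumed to be saved as timelapse.py) can for instance be started in the background:

# Run the capture script detached from the terminal and log its output
nohup python3 timelapse.py > timelapse.log 2>&1 &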

Image Metadata

The file name preserves the time of the shot, so that we can see later when a picture was taken. But the camera also stores EXIF metadata, which can be used for processing. You can view the data with exiftool.

ExifTool Version Number         : 9.46
File Name                       : 1052.jpg
Directory                       : .
File Size                       : 483 kB
File Modification Date/Time     : 2016:07:08 08:49:52+02:00
File Access Date/Time           : 2016:07:08 09:19:14+02:00
File Inode Change Date/Time     : 2016:07:08 09:17:52+02:00
File Permissions                : rw-r--r--
File Type                       : JPEG
MIME Type                       : image/jpeg
Exif Byte Order                 : Big-endian (Motorola, MM)
Make                            : RaspberryPi
Camera Model Name               : RP_b'imx219'
X Resolution                    : 72
Y Resolution                    : 72
Resolution Unit                 : inches
Modify Date                     : 2016:07:05 08:37:33
Y Cb Cr Positioning             : Centered
Exposure Time                   : 1/772
F Number                        : 2.0
Exposure Program                : Aperture-priority AE
ISO                             : 50
Exif Version                    : 0220
Date/Time Original              : 2016:07:05 08:37:33
Create Date                     : 2016:07:05 08:37:33
Components Configuration        : Y, Cb, Cr, -
Shutter Speed Value             : 1/772
Aperture Value                  : 2.0
Brightness Value                : 2.99
Max Aperture Value              : 2.0
Metering Mode                   : Center-weighted average
Flash                           : No Flash
Focal Length                    : 3.0 mm
Maker Note Unknown Text         : (Binary data 332 bytes, use -b option to extract)
Flashpix Version                : 0100
Color Space                     : sRGB
Exif Image Width                : 1024
Exif Image Height               : 768
Interoperability Index          : R98 - DCF basic file (sRGB)
Exposure Mode                   : Auto
White Balance                   : Auto
Compression                     : JPEG (old-style)
Thumbnail Offset                : 1054
Thumbnail Length                : 24576
Image Width                     : 1024
Image Height                    : 768
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Aperture                        : 2.0
Image Size                      : 1024x768
Shutter Speed                   : 1/772
Thumbnail Image                 : (Binary data 24576 bytes, use -b option to extract)
Focal Length                    : 3.0 mm
Light Value                     : 12.6

Processing Images

The Raspberry Pi would need a lot of time to create an animated Gif or a video from these images. This is why I decided to add new images automatically to a Git repository on Github and fetch the results on my Desktop PC. I created a new Git repository and adapted the script shown above to store the images within the folder of the repository. I then use the following script to add and push the images to Github using a cronjob.

#!/bin/bash
cd /home/stefan/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo $now
git pull
git add *.jpg
git commit -am "New pictures added $now"
git push

You can add this to your user’s cron table with crontab -e and the following line, which runs the script every 5 minutes:

*/5	*	*	*	*	/home/stefan/Github/Timelapses/addToGit.sh

On a more potent machine, you can clone the repository and pull the new images like this:

cd /home/stefan-srv/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo "$now"
git pull --rebase

The file names are convenient for reading off the date when a picture was taken, but most of the Linux tools require the files to be named in a sequence. The following code snippet renames the files into a sequence with four digits, padded with zeros.

a=1
for i in *.jpg; do
  new=$(printf "%04d.jpg" "$a") #04 pad to length of 4
  mv -- "$i" "$new"
  let a=a+1
done

Animated Gifs

ImageMagick offers a set of great tools for image processing. With its tool convert, you can create animated Gifs from a series of images like this:

convert -delay 10 -loop 0 *.jpg Output.gif

This adds a delay after each image and loops the Gif infinitely. ImageMagick requires a lot of RAM for larger Gif images and does not handle memory allocation well, but the results are still nice. Note that the files get very large, so a smaller resolution might be more practical.
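If file size or memory becomes an issue, one option is to downscale the stills in place before assembling the Gif; the target size below is arbitrary:

# Downscale all stills in place, then build a smaller Gif
mogrify -resize 640x480 *.jpg
convert -delay 10 -loop 0 *.jpg Output-small.gif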

Still Images to Videos

The still images can also be converted into videos. Use the following command to create a video with 10 frames per second:

avconv -framerate 10 -f image2 -i %04d.jpg -c:v h264 -crf 1 out.mov

Example: Nordkette at Innsbruck, Tirol

This timelapse video of the Inn Valley Range in the north of the city of Innsbruck has been created by taking a picture with a Raspberry Pi Camera Module V2 every 5 minutes. This video consists of 1066 still images.

IntelliJ IDEA and the ClassNotFoundException

When compiling nested Maven projects in IntelliJ IDEA, sometimes the compiler complains about a missing class file.

This occurs on several occasions, depending on which part of a project is compiled and which dependencies have been considered. If the project is large, this can easily happen when a specific class is compiled without having the complete context available. Besides tips such as invalidating caches and the ones I found here and here, editing the build configuration of the project helps. Add a “Make Project” task and the correct class files should be compiled and available.

Hikari Connection Pooling with a MySQL Backend, Hibernate and Maven

Connection Pooling?

JDBC connection pooling is a great concept which improves the performance of database driven applications by reusing connections. The benefit of connection pools is that the cost of creating and closing connections is avoided, by reusing connections from a pool of available connections. Database systems such as MySQL also ration database resources by limiting the number of simultaneous connections. This is another reason why connection pools are beneficial in contrast to opening and closing individual connections.

Dipping into Pools

There exists a selection of different JDBC compatible connection pools which can be used more or less interchangeably. The most widely used pools are:

Most of these pools work in a very similar way. In the following tutorial, we are going to take HikariCP for a spin. It is simple to use and claims to be very fast. In the following, we are going to set up a small project using these technologies:

  • Java 8
  • Tomcat 8
  • MySQL 5.7
  • Maven 3
  • Hibernate 5

and of course an IDE of your choice (I have become quite fond of IntelliJ IDEA Community Edition).

Project Overview

In this small demo project, we are going to write a minimalistic Web application, which simply computes a new random number for each request and stores the result in a database table. We use Java and store the data by using the Hibernate ORM framework. We also assume that you have a running Apache Tomcat Servlet Container and a running MySQL instance available.

In the first step, I created a basic Web project by selecting the Maven Webapp archetype, which then creates a basic structure we can work with.
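The exact invocation depends on your environment, but with the group and artifact ids used in the pom.xml below, the scaffolding step could look roughly like this:

mvn archetype:generate -DgroupId=at.stefanproell -DartifactId=HibernateHikari \
    -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false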

Adding the Required Libraries

After we created the initial project, we need to add the required libraries. We can achieve this easily with Maven, by adding the dependency definitions to our pom.xml file. You can find these definitions on Maven Central. The build block contains the plugin for deploying the application to the Tomcat server.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>at.stefanproell</groupId>
  <artifactId>HibernateHikari</artifactId>
  <packaging>war</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>HibernateHikari Maven Webapp</name>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
      <dependency>
          <groupId>org.apache.tomcat</groupId>
          <artifactId>tomcat-servlet-api</artifactId>
          <version>7.0.50</version>
      </dependency>
      <dependency>
          <groupId>mysql</groupId>
          <artifactId>mysql-connector-java</artifactId>
          <version>5.1.39</version>
      </dependency>
      <dependency>
          <groupId>org.hibernate</groupId>
          <artifactId>hibernate-core</artifactId>
          <version>5.2.0.Final</version>
      </dependency>
      <dependency>
          <groupId>com.zaxxer</groupId>
          <artifactId>HikariCP</artifactId>
          <version>2.4.6</version>
      </dependency>
  </dependencies>
    
  <build>
    <finalName>HibernateHikari</finalName>
      <plugins>
          <plugin>
              <groupId>org.apache.tomcat.maven</groupId>
              <artifactId>tomcat7-maven-plugin</artifactId>
              <version>2.0</version>
              <configuration>
                  <path>/testapp</path>
                  <update>true</update>

                  <url>http://localhost:8080/manager/text</url>
                  <username>admin</username>
                  <password>admin</password>

              </configuration>

          </plugin>
          <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-war-plugin</artifactId>
              <version>2.4</version>

          </plugin>
      </plugins>
  </build>
</project>

Now we have all the libraries we need available and we can begin with implementing the functionality.

The Database Table

As we want to persist random numbers, we need to have a database table, which will store the data. Create the following table in MySQL and ensure that you have a test user available:

CREATE TABLE `TestDB`.`RandomNumberTable` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `randomNumber` INT NOT NULL,
  PRIMARY KEY (`id`));
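The test user itself is not part of the listing above; assuming the credentials used later in hibernate.cfg.xml (testuser / sEcRet), it could be created from the shell like this:

# Create the MySQL user referenced in the Hibernate configuration (credentials are example values)
mysql -uroot -p -e "CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'sEcRet'; GRANT ALL PRIVILEGES ON TestDB.* TO 'testuser'@'localhost'; FLUSH PRIVILEGES;"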


POJO Mojo: The Java Class to be Persisted

Hibernate allows us to persist Java objects in the database, by annotating the Java source code. The following Java class is used to store the random numbers that we generate.

import javax.persistence.*;

@Entity
@Table(name="RandomNumberTable", uniqueConstraints={@UniqueConstraint(columnNames={"id"})})
public class RandomNumberPOJO {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name="id", nullable=false, unique=true, length=11)
    private int id;

    @Column(name="randomNumber", nullable=false)
    private int randomNumber;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public int getRandomNumber() {
        return randomNumber;
    }

    public void setRandomNumber(int randomNumber) {
        this.randomNumber = randomNumber;
    }
}



The code and also the annotations are straightforward. Now we need to define a way to connect to the database and let Hibernate handle the mapping between the Java class and the database schema we defined before.

Hibernate Configuration

Hibernate looks for the configuration in a file called hibernate.cfg.xml by default. This file is used to provide the connection details for the database.

    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.connection.provider_class">com.zaxxer.hikari.hibernate.HikariConnectionProvider</property>
    <property name="hibernate.hikari.dataSource.url">jdbc:mysql://localhost:3306/TestDB?useSSL=false</property>
    <property name="hibernate.hikari.dataSource.user">testuser</property>
    <property name="hibernate.hikari.dataSource.password">sEcRet</property>
    <property name="hibernate.hikari.dataSourceClassName">com.mysql.jdbc.jdbc2.optional.MysqlDataSource</property>
    <property name="hibernate.hikari.dataSource.cachePrepStmts">true</property>
    <property name="hibernate.hikari.dataSource.prepStmtCacheSize">250</property>
    <property name="hibernate.hikari.dataSource.prepStmtCacheSqlLimit">2048</property>
    <property name="hibernate.hikari.dataSource.useServerPrepStmts">true</property>
    <property name="hibernate.current_session_context_class">thread</property>

</session-factory>

The file above contains the most essential settings. We specify the database dialect (`org.hibernate.dialect.MySQLDialect`), define the connection provider class (the HikariCP) with `com.zaxxer.hikari.hibernate.HikariConnectionProvider` and provide the URL to our MySQL database (`jdbc:mysql://localhost:3306/TestDB?useSSL=false`) including the username and password for the database connection. Alternatively, you can also define the same information in the hibernate.properties file.

The Session Factory

We need to have a session factory, which initializes the database connection and the connection pool as well as handles the interaction with the database server. We can use the following class, which provides the session object for these tasks.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.jboss.logging.Logger;

@WebListener
public class HibernateSessionFactoryListener implements ServletContextListener {

public final Logger logger = Logger.getLogger(HibernateSessionFactoryListener.class);

public void contextDestroyed(ServletContextEvent servletContextEvent) {
    SessionFactory sessionFactory = (SessionFactory) servletContextEvent.getServletContext().getAttribute("SessionFactory");
    if(sessionFactory != null && !sessionFactory.isClosed()){
        logger.info("Closing sessionFactory");
        sessionFactory.close();
    }
    logger.info("Released Hibernate sessionFactory resource");
}

public void contextInitialized(ServletContextEvent servletContextEvent) {
    Configuration configuration = new Configuration();
    configuration.configure("hibernate.cfg.xml");
    // Add annotated class
    configuration.addAnnotatedClass(RandomNumberPOJO.class);

    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties()).build();
    logger.info("ServiceRegistry created successfully");
    SessionFactory sessionFactory = configuration
            .buildSessionFactory(serviceRegistry);
    logger.info("SessionFactory created successfully");

    servletContextEvent.getServletContext().setAttribute("SessionFactory", sessionFactory);
    logger.info("Hibernate SessionFactory Configured successfully");
}

}



This class provides two so-called context callbacks, one where the session factory gets initialized and a second one where it gets destroyed. The Tomcat Servlet container calls these automatically depending on the lifecycle of the web application. You can see that the filename of the configuration file is provided (`configuration.configure("hibernate.cfg.xml");`) and that we tell Hibernate to map our RandomNumberPOJO class (`configuration.addAnnotatedClass(RandomNumberPOJO.class);`). Now all that is missing is the Web component, which is waiting for our requests.

The Web Component

The last part is the Web component, which we kept as simple as possible.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

import javax.persistence.TypedQuery;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import java.io.IOException;
import java.io.PrintWriter;

import java.util.List;
import java.util.Random;

public class HelloServlet extends HttpServlet {
public void doGet (HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    PrintWriter out = res.getWriter();
    addRandomNumber(req);
    out.println("There are " + countNumbers(req) + " random numbers");

    List<RandomNumberPOJO> numbers = getAllRandomNumbers(req,res);

    out.println("Random Numbers:");
    out.println("----------");

    for(RandomNumberPOJO record:numbers){
        out.println("ID: " + record.getId() + "\t :\t" + record.getRandomNumber());
    }

    out.close();

}

/**
 * Create a new random number and store it the database
 * @param request
 */
private void addRandomNumber(HttpServletRequest request){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");

    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();
    RandomNumberPOJO randomNumber = new RandomNumberPOJO();
    Random rand = new Random();
    int randomInteger = 1 + rand.nextInt((999) + 1);

    randomNumber.setRandomNumber(randomInteger);
    session.save(randomNumber);
    tx.commit();
    session.close();
}

/**
 * Get a list of all RandomNumberPOJO objects
 * @param request
 * @param response
 * @return
 */
private List<RandomNumberPOJO> getAllRandomNumbers(HttpServletRequest request, HttpServletResponse response){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");
    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();
    TypedQuery<RandomNumberPOJO> query = session.createQuery(
            "from RandomNumberPOJO", RandomNumberPOJO.class);

    List<RandomNumberPOJO> numbers =query.getResultList();



    tx.commit();
    session.close();

    return numbers;


}

/**
 * Count records
 * @param request
 * @return
 */
private int countNumbers(HttpServletRequest request){
    SessionFactory sessionFactory = (SessionFactory) request.getServletContext().getAttribute("SessionFactory");
    Session session = sessionFactory.getCurrentSession();
    Transaction tx = session.beginTransaction();

    String count = session.createQuery("SELECT COUNT(id) FROM RandomNumberPOJO").uniqueResult().toString();

    int rowCount = Integer.parseInt(count);

    tx.commit();
    session.close();
    return rowCount;
}

}



This class provides the actual servlet and is executed whenever a user calls the web application. First, a new RandomNumberPOJO object is instantiated and persisted. We then count how many numbers we already have and fetch a list of all existing records.

The last step before we can actually run the application is the definition of the web entry points, which we can define in the file called web.xml. This file is already generated by the Maven archetype and we only need to add a name for our small web service and provide a mapping for the entry class.

<display-name>HikariCP Test App</display-name>

<servlet>
    <servlet-name>hello</servlet-name>
    <servlet-class>HelloServlet</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>hello</servlet-name>
    <url-pattern>/hello</url-pattern>
</servlet-mapping>


Compile and Run

We can then compile and deploy the application with the following command:

mvn clean install org.apache.tomcat.maven:tomcat7-maven-plugin:2.0:deploy -e

This will compile and upload the application to the Tomcat server. We can then use a browser and open the URL http://localhost:8080/testapp/hello to create and persist random numbers by refreshing the page. The result will look similar to this:
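Instead of the browser, you can also trigger a few requests from the shell:

# Every request creates and stores one new random number and prints the current list
curl http://localhost:8080/testapp/hello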

WAMS Container Locations in Innsbruck

The association WAMS is a social enterprise whose goal is to create jobs for people who are disadvantaged on the conventional labour market due to their particular life situations. The association also places a special focus on recycling and the reuse of resources. For this reason, it maintains and operates collection points for used clothes and processes the donated garments. A list of the locations of these yellow containers can be found in the WAMS flyer, which is available on the association's homepage. For all those who are not yet familiar with Innsbruck's street names, or whose geographical memory has gaps, I have plotted the locations on a map.

The source code is available on Github and described here.

Calling Back: An Example for Google’s Geocoder Service and Custom Markers

I recently moved and naturally there were a lot of clothes which I do not need (read: do not fit into) any more. Throwing them away would be a waste and luckily, there is a social business called WAMS which (besides a lot of other nice projects) supports reuse and recycling. WAMS provides and maintains containers for collecting clothes at many locations in Tirol. Unfortunately, there is not yet a map available to find them easily. I took this as an opportunity for a little side project in Javascript. I am not affiliated with WAMS, but of course the code and data are open sourced here.

Idea

The idea was quite simple. I used some of the container addresses I found in the flyer and created a custom Google Map showing the locations of the containers. The final result looks like this and a live demo can be found at the Github page.

Retrieve Geolocation Information

The Google API allows retrieving latitude and longitude data for any given address. If the address is found in Google’s database, the server returns a GeocoderResult object containing the geometry information about the found object. This GeocoderGeometry contains the latitude and longitude data of the address. The first step retrieves the data from Google’s API by using the Geocoder class. To do so, the following JSON structure is iterated and the addresses are fed to the Geocoding service.

{
                "containerStandorte": [
                    {
                        "id": "1",
                        "name": "Pfarre Allerheiligen",
                        "address": "St.-Georgs-Weg 15, 6020, Innsbruck, Austria",
                        "latitude": "",
                        "longitude: "":
                    },
                    {
                        "id": "2",
                        "name": "Endhaltestelle 3er Linie Amras",
                        "address": "Philippine-Welser-Straße 49, 6020, Innsbruck, Austria",
                        "latitude": "",
                        "longitude": ""
                    }
                ]
}

The Javascript code for obtaining the data is shown in the following listing:

window.onload = function() {

    // Data
    var wamsData =
 '{ "containerStandorte" : [' +
    '{ "id":"1", "name":"Pfarre Allerheiligen" , "address":"St.-Georgs-Weg 15, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"2", "name":"Endhaltestelle 3er Linie Amras" , "address":"Philippine-Welser-Straße 49, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"3", "name":"DEZ Einkaufszentrum Parkgarage" , "address":"Amraser-See-Straße 56a,6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"4", "name":"Wohnanlage Neue Heimat" , "address":"Geyrstraße 27-29, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"5", "name":"MPREIS Haller Straße" , "address":"Hallerstraße 212, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"6", "name":"Recyclinginsel Novapark" , "address":"Arzlerstraße 43, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"7", "name":"Höhenstraße / Hungerburg (neben Spar)" , "address":"Höhenstraße 125,6020 Innsbruck, 6020, Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"8", "name":"Recyclinginsel Schneeburggasse" , "address":"Schneeburggasse 116, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"9", "name":"MPreis Fischerhäuslweg 31" , "address":"Fischerhäuslweg 31, 6020 Innsbruck, Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"10", "name":"Pfarre Petrus Canisius" , "address":"Santifallerstraße 5,6020 Innsbruck Austria" , "latitude":"", "longitude":"" },' +
    '{ "id":"11", "name":"MPreis Bachlechnerstraße" , "address":"Bachlechnerstraße 46, 6020 Innsbruck" , "latitude":"", "longitude":"" }'
    +' ]}';

    // Google Geocoder Library
    var geocoder = new google.maps.Geocoder();
    // Parse the JSON string into a javascript object.
    var wamsJSON = JSON.parse(wamsData);


    /**
        Iterate over containers and retrieve the geo location for their address.
    */
    function processContainers(){
        // Store amount of containers
        var amountOfContainers = wamsJSON.containerStandorte.length;
        // Iterate over all containers
        for (var i=0;i<amountOfContainers;i++){
            var container = wamsJSON.containerStandorte[i];
            // Encode the address of the container
            geocodeAddress(container, processContainerLocationCallback);
        };
    };

    /**
        Process the results
    */
    function processContainerLocationCallback(container,lat,long){
        wamsJSON = updateJSON(container,lat,long, printJSONCallback);
    }

    /**
        Update the JSON object and store the latitude and longitude information
    */
    function updateJSON(container,lat,long,printJSONCallback){
        // Store amount of containers
        var amountOfContainers = wamsJSON.containerStandorte.length;
        // Iterate over containers
        for (var i=0;i<amountOfContainers;i++){
            // Pick the correct id and store the data
            if(wamsJSON.containerStandorte[i].id==container.id){
                wamsJSON.containerStandorte[i].latitude=lat;
                wamsJSON.containerStandorte[i].longitude=long;
            }
        };
        // When the update is done, call the displayCallback
        printJSONCallback();
        return wamsJSON;
    };

    /*
        Google's Geocoder function takes an address as input and retrieves
        (among other data) the latitude and longitude of the provided address.
        Note that this is an asynchronous call, the response may take some time.
        Also remember that the processContainerLocationCallback which is given as
        an input parameter is just a variable. A variable which happens to be a function.

    */
    function geocodeAddress(container, processContainerLocationCallback){
        var address = container.address;
        geocoder.geocode( { 'address': address}, function(results, status) {
            // Anonymous function to process results.
            if (status == google.maps.GeocoderStatus.OK) {
                var lat = results[0].geometry.location.lat();
                var long = results[0].geometry.location.lng();
                // When the results have been retrieved, process them in the function processContainerLocationCallback
                processContainerLocationCallback(container, lat,long);
            } else {
                alert("Geocode was not successful for the following reason: " + status);
            }
        });
    };

    // Print the result
    function printJSONCallback(){
        var jsonString = JSON.stringify(wamsJSON, null,4);
        console.log(jsonString);
        document.getElementById("jsonOutput").innerHTML = jsonString;
    }

    // Start processing
    processContainers();
}

As the calls to the Google services are made asynchronously, we need to use callbacks which are invoked once the previous function has finished. Callbacks can be tricky and are a bit of a challenge to understand the first time. The Google Geocoder methods in particular require working with several callbacks, which is often referred to as callback hell. The code above does the following things:

  1. Iterate over the JSON structure and process each container individually -> function processContainers()
  2. For each container, call Google’s Geocoder and resolve the address to a location -> geocodeAddress(container, processContainerLocationCallback)
  3. After the result has been obtained, process the result. -> processContainerLocationCallback(container,lat,long)
  4. Update the JSON object by looping over all records and search for the correct id. Once the id was found, update latitude and longitude information. -> updateJSON(container,lat,long,printJSONCallback)
  5. Write the result to the Web page -> printJSONCallback()

The missing latitude and longitude values are retrieved and the JSON gets updated. The final result looks like this:

{
                "containerStandorte": [
                    {
                        "id": "1",
                        "name": "Pfarre Allerheiligen",
                        "address": "St.-Georgs-Weg 15, 6020, Innsbruck, Austria",
                        "latitude": 47.2680316,
                        "longitude": 11.355563999999958
                    },
                    {
                        "id": "2",
                        "name": "Endhaltestelle 3er Linie Amras",
                        "address": "Philippine-Welser-Straße 49, 6020, Innsbruck, Austria",
                        "latitude": 47.2589929,
                        "longitude": 11.42600379999999
                    }
                    ...
                ]
            }

Now that we have the data ready, we can proceed with the second step.

Placing the Markers

I artistically created a custom marker image which we will use to indicate the location of a clothes container from WAMS.

This image replaces the standard Google marker. Now all that is left is to iterate over the updated JSON object, which now also contains the latitude and longitude data, and place a marker for each container. Note that hovering over the image displays the address of the container on the map.

// Data for container locations
        var wamsData = '{"containerStandorte":[{"id":"1","name":"Pfarre Allerheiligen",
"address":"St.-Georgs-Weg 15, 6020, Innsbruck, Austria","latitude":47.2680316,"longitude":11.355563999999958},
{"id":"2","name":"Endhaltestelle 3er Linie Amras","address":"Philippine-Welser-Straße 49, 6020, Innsbruck, Austria","latitude":47.2589929,"longitude":11.42600379999999},
{"id":"3","name":"DEZ Einkaufszentrum Parkgarage","address":"Amraser-See-Straße 56a,6020 Innsbruck, Austria","latitude":47.2625925,"longitude":11.430842299999995},
{"id":"4","name":"Wohnanlage Neue Heimat","address":"Geyrstraße 27-29, 6020 Innsbruck, Austria","latitude":47.2614899,"longitude":11.426765700000033},
{"id":"5","name":"MPREIS Haller Straße","address":"Hallerstraße 212, 6020 Innsbruck, Austria","latitude":47.2769524,"longitude":11.442559599999981},
{"id":"6","name":"Recyclinginsel Novapark","address":"Arzlerstraße 43, 6020 Innsbruck, Austria","latitude":47.2833947,"longitude":11.424273299999982},
{"id":"7","name":"Höhenstraße / Hungerburg (neben Spar)","address":"Höhenstraße 125,6020 Innsbruck, 6020, Innsbruck, Austria","latitude":47.2841353,"longitude":11.394666799999982},
{"id":"8","name":"Recyclinginsel Schneeburggasse","address":"Schneeburggasse 116, 6020 Innsbruck, Austria","latitude":47.2695889,"longitude":11.364059699999984},
{"id":"9","name":"MPreis Fischerhäuslweg 31","address":"Fischerhäuslweg 31, 6020 Innsbruck, Austria","latitude":47.261875,"longitude":11.364496700000018},
{"id":"10","name":"Pfarre Petrus Canisius","address":"Santifallerstraße 5,6020 Innsbruck Austria","latitude":47.2635626,"longitude":11.380990800000063},
{"id":"11","name":"MPreis Bachlechnerstraße","address":"Bachlechnerstraße 46, 6020 Innsbruck","latitude":47.2645067,"longitude":11.376220800000056}]}';


        function initialize() {
          var innsbruck = { lat: 47.2656733, lng: 11.3941983 };
          var map = new google.maps.Map(document.getElementById('map'), {
            zoom: 14,
            center: innsbruck
          });

          // Load Google Geocoder Library
          var geocoder = new google.maps.Geocoder();
          // Parse the data into a JSON
          var wamsJSON = JSON.parse(wamsData);
          // Iterate over all containers
          for(var i =0; i < wamsJSON.containerStandorte.length;i++){
              var container = wamsJSON.containerStandorte[i];
              placeMarkerOnMap(geocoder, map, container);
          };
        }

    // Custom marker
    var wamsLogo = {
      url: 'images/wams.png',
      // This marker is 64 pixels wide by 64 pixels high.
      size: new google.maps.Size(64, 64),
      // The origin for this image is (0, 0).
      origin: new google.maps.Point(0, 0),
      // The anchor for this image is at (64, 70).
      anchor: new google.maps.Point(64, 70)
    };

    // Define the shape for the marker
    var shape = {
      coords: [1, 1, 1, 64, 64, 64, 64, 1],
      type: 'poly'
    };

    // Place marker on the map
    function placeMarkerOnMap(geocoder, resultsMap, container) {
        // Create Google position object with latitude and longitude from the container object
        var positionLatLng = new google.maps.LatLng(parseFloat(container.latitude),parseFloat(container.longitude));
        // Create marker with the position, logo and address
        var marker = new google.maps.Marker({
          map: resultsMap,
          position: positionLatLng,
          icon: wamsLogo,
          shape: shape,
          title: container.address,
          animation: google.maps.Animation.DROP
        });
    }
    // Place marker on the map
    google.maps.event.addDomListener(window, 'load', initialize);

Hosting the Result

GitHub offers a great feature for hosting simple static Web pages. All that is needed is a new orphan branch of your project, named gh-pages, as described here. This branch serves as the Web directory for all your files and allows you to host Web pages for public projects for free. You can see the result of the project above here.
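For reference, a minimal sketch of how such an orphan branch can be created and published from an existing local clone of the project (file names and paths are placeholders):

# Create an empty branch without any history and publish the static files
git checkout --orphan gh-pages
git rm -rf .                     # start from an empty working tree
cp ~/demo/index.html .           # copy your static files here (placeholder path)
git add index.html
git commit -m "Add static demo page"
git push origin gh-pages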

Encrypt a USB Drive (or any other partition) Using LUKS

Did you ever want to feel like a secret agent, or do you really need to transport and exchange sensitive data? Encrypting your data is not much effort and can be used to protect a pen drive or any other partition and the data on it from unauthorized access. In the following example you see how to create an encrypted partition on a disk. Note two things: If you accidentally encrypt the wrong partition, the data is lost. Forever. So be careful when entering the commands below. Secondly, the method shown below only protects the data at rest. As soon as you decrypt and mount the device, the data can be read by everyone else if you do not use correct permissions.
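Before touching anything, it is worth listing all block devices and double-checking which device node actually belongs to your USB drive, for example:

# List block devices with size and mount point to identify the correct drive
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT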

Preparation

Prepare a mount point for your data and change ownership.

# Create a mount point
sudo mkdir /media/cryptoUSB
# Set permissions for the owner
sudo chown stefan:stefan /media/cryptoUSB

Create an Encrypted Device

Encrypt the device with LUKS. Note that all data on the partition will be overwritten during this process.

# Create encrypted device 
sudo cryptsetup --verify-passphrase luksFormat /dev/sdX -c aes -s 256 -h sha256

# From the man page:
       --cipher, -c 
              Set the cipher specification string.
       --key-size, -s 
              Sets  key  size in bits. The argument has to be a multiple of 8.
              The possible key-sizes are limited by the cipher and mode used.
       --verify-passphrase, -y
              When interactively asking for a passphrase, ask for it twice and
              complain  if  both  inputs do not match.
       --hash, -h 
              Specifies the passphrase hash for open (for  plain  and  loopaes
              device types).

# Open the Device
sudo cryptsetup luksOpen /dev/sdX cryptoUSB
# Create a file system (ext3)
sudo mkfs -t ext3 -m 1 -O dir_index,filetype,sparse_super /dev/mapper/cryptoUSB
# Add a label
sudo tune2fs -L Crypto-USB /dev/mapper/cryptoUSB
# Close the device
sudo cryptsetup luksClose cryptoUSB

Usage

The usage is pretty simple. With a GUI you will be prompted to decrypt the device. At the command line, use the following commands to open and decrypt the device.

# Open the Device
sudo cryptsetup luksOpen /dev/sdX cryptoUSB
# Mount it
sudo mount /dev/mapper/cryptoUSB /media/cryptoUSB

When you are finished with your secret work, unmount and close the device properly.

sudo umount /media/cryptoUSB 
sudo cryptsetup luksClose cryptoUSB

Secure Automated Backups of a Linux Web Server with Rrsync and Passwordless Key Based Authentication

Backups Automated and Secure

Backing up data is an essential task, yet it can be cumbersome and requires some work. As most people are lazy and avoid tedious tasks wherever possible, automation is the key, as it allows us to deal with more interesting work instead. In this article, I describe how a Linux Web server can be backed up in a secure way by using restricted SSH access to the rsync tool. I found a great variety of useful blog posts, which I will reuse in this article.

This is what we want to achieve:

  • Secure data transfer via SSH
  • Passwordless authentication via keys
  • Restricted rsync access
  • Backup of all files by using a low privileged user

In this article, I will refer to the machine that should be backed up as WebServer. The WebServer contains all the important data that we want to keep. The BackupServer is responsible for fetching the data from the WebServer in a pull manner.

On the BackupServer

On the BackupServer, we create a key pair without a password which we can use for authenticating with the WebServer. Details about passwordless authentication are given here.

# Create a passwordless key pair
ssh-keygen -t rsa # The keys are named rsync-backup.key.public and rsync-backup.key.private

On the WebServer

We are going to allow a user who authenticated with her private key to rsync sensitive data from our WebServer to the BackupServer. This user should have a low privileged account and still be able to back up data which belongs to other users. This capability comes with a few security threats which need to be mitigated. The standard way to back up data is rsync. The tool can be potentially dangerous, as it allows the user to write data to an arbitrary location if not handled correctly. In order to deal with this issue, a restricted version of rsync exists, which locks the usage of the tool to a declared directory: rrsync.

Obtain Rrsync

You can obtain rrsync from the developer page or extract it from your Ubuntu/Debian distribution as described here. With the following command you can download the file from the Web page and store it as executable.

sudo wget https://ftp.samba.org/pub/unpacked/rsync/support/rrsync -O /usr/bin/rrsync
sudo chmod +x /usr/bin/rrsync

Add a Backup User

First, we create a new user and verify the permissions for the SSH directory.

sudo adduser rsync-backup # Add a new user and select a strong password
su rsync-backup # change into new account
ssh rsync-backup@localhost # ssh to localhost once so that the .ssh directory is created
exit
chmod go-w ~/ # Set permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

Create a Read-Only View of the Data You Want to Back Up

I got this concept from this blog post. As we also want to back up data from other users, our backup user (rsync-backup) needs read access to this data. As we do not want to change the permissions for the rsync-backup user directly in the file system, we use bindfs to create a read-only view of the data we want to back up. We will create a virtual directory containing all the other directories that we want to back up. This directory is called /mnt/Backups-Rsync-Readonly. Instead of copying all the data into that directory, which would be a waste of space, we link all the other directories into the backup folder and then sync this folder to the BackupServer.

One Time Steps:

The following steps create the directory structure for the backup and set the links to the actual data that we want to back up. Apart from this one-time setup, the rsync-backup user itself does not need root, sudo or any advanced permissions. We simply create a read-only view of the data where the only user with access is rsync-backup.

sudo apt-get install acl bindfs # Install packages
sudo mkdir /mnt/Backups-Rsync-Readonly # Create the base directory
sudo chown -R rsync-backup /mnt/Backups-Rsync-Readonly # Permissions
sudo mkdir /mnt/Backups-Rsync-Readonly/VAR-WWW # Create subdirectory for /var/www data
sudo mkdir /mnt/Backups-Rsync-Readonly/MySQL-Backups # Create subdirectory for MySQL Backups
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/ # Set Access Control List permissions for read only
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/MySQL-Backups
sudo setfacl -m u:rsync-backup:rx /mnt/Backups-Rsync-Readonly/VAR-WWW

Testrun

In order to use these directories, we need to mount the folders. We set the permissions for bindfs and establish the link between the data and our virtual backup folders.

sudo bindfs -o perms=0000:u=rD,force-user=rsync-backup /var/www /mnt/Backups-Rsync-Readonly/VAR-WWW
sudo bindfs -o perms=0000:u=rD,force-user=rsync-backup /Backup/MySQL-Dumps /mnt/Backups-Rsync-Readonly/MySQL-Backups

These commands mount the data directories and create the read-only view. Note that these mounts are only valid until you reboot. If the above works and the rsync-backup user can access the folders, you can add the mount points to fstab to mount them automatically at boot time. Unmount the folders before you continue with sudo umount /mnt/Backups-Rsync-Readonly/* .

Permanently Add the Virtual Folders

You can add the folders to fstab like this:

# Backup bindfs 
/var/www    /mnt/Backups-Rsync-Readonly/VAR-WWW fuse.bindfs perms=0000:u=rD,force-user=rsync-backup 0   0
/Backups/MySQL-Dumps    /mnt/Backups-Rsync-Readonly/MySQL-Backups fuse.bindfs perms=0000:u=rD,force-user=rsync-backup 0   0

Remount the directories with sudo mount -a .

Adding the Keys

In the next step we add the public key from the BackupServer to the authorized_keys file from the rsync-backup user at the WebServer. On the BackupServer, cat the public key and copy the output to the clipboard.

ssh user@backupServer
cat rsync-backup.key.public

Switch to the WebServer and log in as the rsync-backup user. Then add the key to the file ~/.ssh/authorized_keys.
The file now looks similar to this:

ssh-rsa AAAAB3N ............ fFiUd rsync-backup@webServer


We then prepend the key with the only command this user should be able to execute: rrsync. We add additional restrictions to increase the security of this account: we can provide an IP address and limit the command execution further. The final file contains the following information:

command="/usr/bin/rrsync -ro /mnt/Backups-Rsync-Readonly",from="192.168.0.10",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3N ............ fFiUd rsync-backup@webServer



Now whenever the user rsync-backup connects, the only possible command is rrsync. Rrsync itself is limited to the directory provided and only has read access. We also verify the IP address and restrict the source of the connection.

Hardening SSH

Additionally, we can force the rsync-backup user to use key-based authentication only. We also set the IP address restriction for all SSH connections in the sshd_config.

AllowUsers rsync-backup@192.168.0.10

Match User rsync-backup
    PasswordAuthentication no



Backing Up

Last but not least we can run the backup. To start syncing, we log in to the BackupServer and execute the following command. There is no need to provide paths, as the only valid path is already defined in the authorized_keys file.

rsync -e "ssh -i /home/backup/.ssh/rsync-backup.key.private" -aLP --chmod=Do+w rsync-backup@webServer: .
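To run this unattended, a cron job on the BackupServer can trigger the same command on a schedule. A minimal sketch, assuming a target directory /home/backup/webserver-backup/ and a log file (both placeholders):

# Hypothetical crontab entry (edit with crontab -e) on the BackupServer: nightly backup at 03:00
0 3 * * * rsync -e "ssh -i /home/backup/.ssh/rsync-backup.key.private" -aLP --chmod=Do+w rsync-backup@webServer: /home/backup/webserver-backup/ >> /home/backup/rsync-backup.log 2>&1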



Conclusion

This article covers how a backup user can create backups of data owned by other users without having write access to the data. The backup is transferred securely via SSH and can run unattended. The backup user is restricted to using rrsync only and we included IP address verification. The backup user can only create backups of directories we defined earlier.

Add your Spotify / Streaming Account to the Pi Musicbox in a Secure Way With Device Passwords

In a recent article I wrote about the old Raspberry Pi, which serves its duty as my daily Web radio. The Pi MusicBox natively supports a bunch of streaming services, which improves the experience if you already have a streaming account, by providing your custom playlists on any HDMI-capable hi-fi system. Unfortunately, the passwords are stored in plaintext, which is not a recommended practice for sensitive information, especially if you use your Facebook credentials for services such as Spotify.

Most streaming services offer device passwords, which are restricted accounts to which you can assign a dedicated username and password. Having separate credentials in the form of API keys for your devices is good practice, as a thief cannot get hold of your actual account password, but only gains read access to your playlists. Spotify also provides device passwords, but at the time of writing of this article, the assignment of new passwords simply did not work. A little googling revealed that the only possible way at the moment is using Facebook and its device passwords for the service. As Spotify uses Facebook's authentication service, the services can exchange information about authorized users.

In the Facebook settings, go to the Security panel and create a new password for apps. Name the app accordingly and provide a unique password.

Then, open the Pi MusicBox interface, add the email address you registered with Facebook and provide the newly created app password.

You can then enjoy your playlists in a secure way. You will receive a warning about the connection, which is an indicator that it worked.

A Reasonable Secure, Self-Hosted Password Database with Versioning and Remote Access

The average computer user needs to memorize at least 17 passwords for private accounts. Power users need to handle several additional accounts for work, and memorizing (good and complex) passwords quickly becomes a burden, if not impossible. To overcome the memory issue, there exists a variety of tools which allow storing passwords and associated metadata in password stores. Typically, a password manager application consists of a password file, which contains the passwords and the metadata in structured form, and an application assisting the user in decrypting and encrypting the passwords. A typical example is Keepass, an open source password management application. Keepass uses a master password in order to encrypt the password file. An additional key file can be used to increase security by requiring a second factor to open the password database. There exists a very large variety of ports of this software, which allow opening, editing and storing passwords on virtually any platform.

As the passwords are stored in a single file, a versioning mechanism is required which tracks changes of the passwords on all devices and merges them together in order to keep them synchronized. There also exist online services which handle versioned password storage, but obviously this requires giving away sensitive information and trusting the provider to handle the passwords safely. Storing the encrypted password file in a cloud drive such as Dropbox, Google Drive or Microsoft Azure also partially solves the versioning issue, but still the data is out there on foreign servers. For this reason, the new Raspberry Pi Zero is a low cost, low power device, which can be turned into a privately managed and reasonably secure, versioned password store under your own control.

What is needed?

  1. A Raspberry Pi (in fact, a Linux system of any kind, in this example we use a new Zero Pi)
  2. Power supply
  3. SD micro card
  4. USB Hub
  5. Wifi Dongle
  6. USB Keyboard

Preparing the Raspberry Pi Zero

The Raspbian operating system can be easily installed by dumping the image to the micro SD card. As the Pi Zero does not come with an integrated network interface, a Wifi dongle can be used for enabling wireless networking. You can edit the config file directly on the SD card by opening it on a different PC with any editor, and provide the SSID and the shared secret in advance.

# File /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="MyWIFI"
    psk="SECRET"
}

Then place the card back in the Pi, attach a keyboard and the wifi dongle, connect the Pi to a screen and boot it. Log in with the standard credentials, which are the user name pi and the password raspberry.

sudo adduser stefan # add new user
sudo apt-get install openssh-server git-core # Install ssh server and git
passwd # change the default password
sudo adduser stefan sudo # add the new user to the sudoers

In the next step, it is recommended to assign a static IP address to the Pi, as we will configure port forwarding to this specific IP address on the router in a later step. Open the interfaces file and provide a static IP address as follows:

# File: /etc/network/interfaces
allow-hotplug wlan0
iface wlan0 inet static
    address 192.168.0.100
    netmask 255.255.255.0
    gateway 192.168.0.1
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

You can then remove the HDMI cable and also the keyboard, as SSH is now available via the static IP address we just defined above. The next step covers the installation of the [Git server][1] and the configuration of public key authentication.

sudo adduser git # add a new git user
su git # change into the git account
cd /home/git # change to home directory
mkdir .ssh # create the directory for the keys
chmod 700 .ssh # secure permissions
touch .ssh/authorized_keys  # create file for authorized keys
chmod 600 .ssh/authorized_keys # secure permissions for this file


We are now ready to create a key pair consisting of a private and a public key. You can do this on your normal PC or on the Pi directly.

ssh-keygen -t rsa # Create a key pair and provide a strong password for the private key

Note that you can provide a file name during the procedure. The tool creates a key pair consisting of a private and a public key. The public key ends with the suffix pub.

# Folder ~/Passwordstore $ ll
insgesamt 32
drwxr-xr-x  2 stefan stefan 4096 Mär 13 22:40 .
drwxr-xr-x 10 stefan stefan 4096 Mär 13 22:38 ..
-rw-------  1 stefan stefan 1766 Mär 13 22:40 pi_git_rsa
-rw-r--r--  1 stefan stefan  402 Mär 13 22:40 pi_git_rsa.pub

If you created the key files on a different PC than the Pi, you need to upload the public key to the Pi. We can do this with the following command:

cat ~/Passwordstore/pi_git_rsa.pub | ssh git@192.168.0.100 "cat >> ~/.ssh/authorized_keys"


If you generated the keys directly on the Pi, it is sufficient to cat the key into the file directly. After you have completed this step, verify that the key has been copied correctly. If the file looks similar to the following example, it worked.

git@zeropi:~/.ssh $ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZ7MZYvI........wnQqchM1 stefan@pc



We can then test key-based SSH authentication with the following command.

ssh -i pi_git_rsa git@192.168.0.100 # connect using the private key

You are then prompted to connect to the Pi by using the private key password you specified earlier. Note that this password differs from the one we created for the git user. A less secure but more convenient solution is to leave the password empty during the key pair creation process. If no password has been set, everyone who gets hold of the private key can connect to the Pi. By the way, additional interesting facts about passwords can be found here.

In order to increase convenience, you can add a shortcut for this connection by editing the /home/user/.ssh/config file. Simply add the following record for the password store SSH connection.

Host passwords
hostname 192.168.0.100
port 22
user git
IdentityFile    /home/stefan/passwort/pi_git_rsa

Now you can connect to the Pi by typing the command ssh passwords. Note that you now need to provide the password for the key file instead of the user password. Delete the pre-installed user pi from the system:

sudo userdel pi


The default Raspbian partition configuration only utilises 2 GB of your SD card. This can quickly become insufficient. There exists a convenient tool which allows you to increase the root partition to the full size of your SD card. Simply run the following command and select the appropriate menu item.

sudo raspi-config

Prepare the Git Repository

In the following, we create an empty git repository which we will use for versioning the password database from Keepass.

git@zeropi:~ $ mkdir Password-Repository
git@zeropi:~ $ cd Password-Repository/
git@zeropi:~/Password-Repository $ git init --bare
Initialized empty Git repository in /home/git/Password-Repository/



The repository on the Pi is now ready for ingesting the passwords.

Checkout the new Repository on your PC and add the Password File

Now that the repository is initialized, we can start versioning the password file with git. Clone the repository and add the password file to git by copying the password file into the cloned repository directory.

git clone passwords:/home/git/Password-Repository
cp /home/user/oldLocation/Password-Database.kdb ~/Password-Repository
cd ~/Password-Repository
git add Password-Database.kdb
git commit -m "initial commit"



The last step is to push the newly committed password file to the remote repository. You can improve security by not adding the key file for KeePass to the repository.
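To make sure the key file never ends up in the repository by accident, you could add a .gitignore to the working copy; the file name below is just an example:

# File: ~/Password-Repository/.gitignore (hypothetical key file name)
Password-Database.key
*.key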

git push origin master

The basic setup is now complete and you can clone this repository on any device in order to have the latest password file available.

Checkout the Password Repository on Your Phone

There exists a variety of Git clients for Android which can deal with identity files and private key authentication. I have had good experience with Pocket Git. Clone the repository by using a URL like this:

ssh://git@pi.duckdns.org:1234/home/git/Password-Repository


Versioning the Password File: Pull, Commit and Push

Handling versions of the password file follows the standard git procedure. The only difference is that, in contrast to the source code files git is usually used for, the encrypted password database does not allow for diffs. So you cannot find differences between two versions of the password database. For this reason, you need to make sure that you get the latest version of the password database before you edit the file; otherwise you need to merge the file manually. In order to avoid this, follow these steps from within the repository every time you plan additions, edits or deletions of the password database.

  1. git pull
  2. # make your changes
  3. git commit -m "describe your changes"
  4. git push

Enabling Remote Access

You can already access the Git repository locally in your own network. But in order to retrieve, edit and store passwords from anywhere, you need to enable port forwarding and dynamic DNS. Port forwarding is pretty easy. Enter your router's Web interface, browse to the port forwarding options and specify an external and an internal port pointing to the IP of the Raspberry Pi.

  * IP Address 192.168.0.100
  * Internal port 22
  * External port (22100)
  * Protocol: both

Now the SSH service, and therefore the Git repository, becomes available via the external port 22100. As we left the internal port at the default, no changes to the SSH service are required.

For dynamic DNS I regularly use Duck DNS (www.duckdns.org), which is a free service for resolving dynamic IP addresses to a static host name. After registering for the service, you can choose a host name and download the installer. There exists an installer specifically for the Raspberry Pi (https://www.duckdns.org/install.jsp). Follow these instructions and exchange the token and the domain name in the file to match your account. You can now use the domain you registered for accessing the service from other machines outside your network.
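Assuming the host name you registered is pi.duckdns.org and you forwarded the external port 22100 as described above, a second entry in your ~/.ssh/config could look like this:

Host passwords-remote
hostname pi.duckdns.org
port 22100
user git
IdentityFile    /home/stefan/passwort/pi_git_rsa

You can then clone via git clone passwords-remote:/home/git/Password-Repository from outside your network as well.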

Security Improvements

The setup so far is reasonably secure, as only users having the key file and its password may authenticate as the Git repository user. It is in general good practice to disallow root from connecting via SSH and to restrict remote access. Ensure that all other users on the system can only connect via SSH if and only if they use public key based authentication. Always use passwords for the key file, so that if someone should get hold of your keys, they still require a password.
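As a sketch, the corresponding entries in /etc/ssh/sshd_config could look like the following; adapt them to your setup before you lock yourself out, and reload SSH afterwards (e.g. sudo service ssh restart):

# /etc/ssh/sshd_config (suggested hardening)
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no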

You can also disable password login for the user git explicitly and allow passwords for local users. Add these lines in the sshd config file.

Match User git
    PasswordAuthentication no

Match address 192.168.0.0/24
    PasswordAuthentication yes



If you know in advance the IP addresses from which you will update the password file, consider limiting access to these only. The git user can authenticate with the key, but may still have too many privileges and could execute potentially harmful commands. Ensure that the git user is not in the list of superusers:

grep -Po '^sudo.+:\K.*$' /etc/group

The user git should not appear in the output. In order to limit the commands that the git user may execute, we can specify a list of allowed commands executable via SSH and utilise a specialised shell which only permits git commands. Prepend the public key of the git user in the authorized_keys file as follows:

no-port-forwarding,no-agent-forwarding ssh-rsa AAAAB ........


In addition, we can change the default shell for the user git. Switch to a different user account with sudo privileges and issue the following command:

sudo usermod -s /usr/bin/git-shell git

This special shell is called git-shell and comes with the git installation automatically. It only permits git specific commands, such as push and pull, which is sufficient for our purpose. If you now connect to the Pi with the standard SSH command, the connection will be refused:

stefan $ ssh passwords 
Enter passphrase for key '/home/passwort/pi_git_rsa': 

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Mar 20 20:48:04 2016 from 192.168.0.13
fatal: Interactive git shell is not enabled.
hint: ~/git-shell-commands should exist and have read and execute access.
Connection to 192.168.0.100 closed.

Firewall

The Uncomplicated FireWall (ufw) is way less complex to set up than classic iptables and provides exactly what the name implies: a simple firewall. You can install and initialize it as follows:

sudo apt-get install ufw # Install it
sudo ufw default deny incoming # Deny all incoming traffic 
sudo ufw allow ssh # Only allow incoming SSH
sudo ufw allow out 80 # Allow outgoing port 80 for the Duck DNS request
sudo ufw enable # Switch it on
sudo ufw status verbose # Verify status


The great tutorials at Digital Ocean (https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server) provide more details.

Conclusion

In this little tutorial, we installed a Git server on a Raspberry Pi Zero (or any other Linux machine) and created a dedicated user for connecting to the service. The user requires a private key to access the service, and the Git server permits password logins only from within the local network, while all other connections require key-based authentication. The git user may only use a restricted shell and cannot log in interactively. The password file is encrypted and all versions of the passwords are stored within the git repository.




 [1]: https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server