Getting Familiar with Eclipse Again: Git Integration in Comparison with IntelliJ IDEA

Eclipse and IntelliJ IDEA are both great Java IDEs, each with its own community, advantages and disadvantages. After having spent a few years in JetBrains IntelliJ Community Edition, I got accustomed to the tight and clean Git integration in the user interface. Now that I am considering switching back to Eclipse, I stumbled over a few things that I try to describe in this post.

IntelliJ and Eclipse Handle Project Structures Differently

Eclipse utilises a workspace concept, which allows working on several projects at the same time. IntelliJ, in contrast, allows only one open project and organizes substructures in modules. A comparison of these concepts can be found here. These two different viewpoints affect the way Git is integrated into the workflow.

Sharing Projects

The different views on project structures imply that Git repositories are also treated differently. While IntelliJ uses the root of a repository directly, Eclipse introduces a subfolder for the project. This leads to the interesting observation that importing a project from Git back into an Eclipse workspace requires a small adaptation before Eclipse recognizes the structure again.

A Small Workflow

In order to get familiar with Eclipse again, I created a small test project, which I shared by pushing it to a Git repository. I then deleted the project and tried to re-import it.

Step 1

Create a new test project, in this case a Spring Boot project built with Maven. Note that the new project is stored in the Eclipse workspace.

Step 2

As the second step, we create a new repository. Log in to GitHub or your GitLab instance, create a new project and initialize it so that you have a master branch ready, then copy the URL of the repository. We add this repository by opening the Git Repositories perspective in Eclipse. You can provide a default location for your local repositories in the Team -> Git preferences. In the Git Repositories perspective, you can then see the path of the local storage location and some information about the repository, for instance that the local and the remote master branch are identical (they have the same commit hash). Note that the Git path is different from your workspace project path.

Step 3

We now have a fresh Maven-based Java project in our Eclipse workspace and an empty Git repository in the default location for Git repositories, i.e. in a different location. What we need to do next is to share the source code by moving it into the Git storage location and adding it to the Git index. Eclipse can help us with that: right-click the project in the Project Explorer view and use the Team -> Share menu.

Step 4

In the next step, we need to specify the Git repository we want to push our code to. This is an easy step: simply select the repository we just created from the drop-down menu. In the dialogue you can see that the content of the current project location on the left side will be moved to the right side. Eclipse creates a new subfolder within the repository for our project. This is something that IntelliJ would not do.

In this step, Eclipse separates the local and custom project metadata from the actual source code, which is a good thing.

Step 5

In the fifth step, we simply apply some changes and commit and push them to the remote repository using the Git Staging view.

After this step, the changes are visible in the remote repository as well and available to collaborators. In order to simulate someone who is going to check out our little project from GitLab, we delete the project locally and also remove the local Git repository.

Step 6

Now we start off with a clean workspace and clone the project from GitLab. In the Git Repositories view, we can select the option to clone a repository and provide the same URL again.

Step 7

In the next screen, we select the local destination for the cloned project. This could be for instance the default directory for Git projects or any other location on your disk. Confirm this dialogue.

Step 8

Now comes the tricky part, which did not work as expected in Eclipse Neon (4.6.1). Usually, one would tell Eclipse that the cloned project is a Maven project, and it should detect the pom.xml file and download the dependencies. To do so, we would select Import -> Git -> Projects from Git and clone the repository from this dialogue. Then, as a next step, we would select the Configure -> Convert to Maven Project option, but Eclipse does not seem to recognize the Maven structure. It would only show the files and directories, but not consider the Maven dependencies specified in the pom.xml file.

What happens is that Eclipse tries to add a new pom.xml file and ignores the actual one.

Of course this is a problem and does not work.

Step 9 – Solution

Instead of using the method above, just clone the repository from the Git Repositories perspective and then go back to the Project Explorer. Now, instead of importing the project via the Git menu, choose to import an existing Maven project and select the path of the Git repository we cloned before.

And in the next dialogue, specify the path:

As you can see, now Eclipse found the correct pom.xml file and provides the correct dependencies and structure!

Conclusion

Which IDE you prefer is a matter of taste and habit. Both environments provide a lot of features for developers and differ in the implementation of these features. With this short article, we tried to understand some basic implications of the two philosophies of how Eclipse and IntelliJ handle project structures. Once we anticipate these differences, it becomes easy to work with both of them.

A MySQL 5.7 Cluster Based on Ubuntu 16.04 LTS – Part 2

In a recent article, I described how to set up a basic MySQL Cluster with two data nodes and a combined SQL and management node. In this article, I am going to highlight a few more things and we are going to adapt the cluster a little bit.

Using Hostnames

To make our lives easier, we can use hostnames, which are easier to remember than IP addresses. Hostnames can be specified for each VM in the file /etc/hosts. For each request to a hostname, the operating system will look up the corresponding IP address. We need to change this file on all three nodes along the lines of the following example:
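A minimal sketch of such a file; the IP addresses and hostnames below are placeholders and have to be replaced with the actual addresses of your three VMs:

127.0.0.1       localhost
192.168.0.81    node1    # management and SQL node
192.168.0.82    node2    # data node
192.168.0.83    node3    # data node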

A MySQL 5.7 Cluster Based on Ubuntu 16.04 LTS – Part 1

A Cluster Scenario

In this example we create the smallest possible MySQL cluster based on four nodes running on three machines. Node 1 will run the cluster management software, Node 2 and Node 3 will serve as data nodes and Node 4 is the MySQL API, which runs on the same VM as Node 1.
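As a rough sketch (the actual setup is covered step by step in this article), this layout translates into a config.ini on the management node along these lines; the IP addresses are placeholders for the three VMs:

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
# Node 1: management node
HostName=192.168.0.81

[ndbd]
# Node 2: data node
HostName=192.168.0.82

[ndbd]
# Node 3: data node
HostName=192.168.0.83

[mysqld]
# Node 4: SQL node, running on the same VM as the management node
HostName=192.168.0.81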

Das Phrasensammelsurium (A Hodgepodge of Phrases)

Avid readers inevitably notice them: meaningless and annoying phrases that seem to appear out of nowhere in various media and suddenly spread everywhere. Journalists, authors and scientists are usually avid readers themselves and, quite unconsciously, pick up expressions, individual terms and whole phrases. They adopt them into their own vocabulary, and the drama takes its course.

Lacking the necessary detailed knowledge, I neither want to nor can go into the linguistic background. However, with the help of the Google Ngram Viewer one can see when certain terms appear in the German book corpus. Unfortunately, this data is only available for German up to and including 2008, which is why very recent phrases are not yet included. Shown is an example for the word Narrativ (narrative), which nowadays has to serve everywhere.

Of course there are various projects, such as the Floskelwoche or this article in the Österreichischer Journalist, that are entirely dedicated to this topic. Nevertheless, I would like to keep my personal list of nerve-racking phrases and filler words here, in more or less alphabetical order. Submissions are very welcome.

  • “Aber nun der Reihe nach …”
  • Alternativlos
  • “Am Ende des Tages”
  • “Ganz einem Thema verschrieben”
  • Inflationär
  • “X kann Y”. Example: “Karl-Heinz kann Social Media”
  • Narrativ
  • “X neu denken”
  • Postfaktisch
  • “So muss X”. Example: “So muss Technik”
  • Spannend!
  • “Wir sind X”. Example: “Wir sind Papst”

Submit a phrase

Parsing SQL Statements

JDBC and the Limits of ResultSet Metadata

For my work in the area of data citation, I need to analyse queries which are used for creating subsets. I am particularly interested in query parameters, sort orders and filters. One of the most commonly used query languages is SQL, which is used by many relational database management systems such as MySQL. In some cases, the interaction with databases is abstracted, meaning that hardly any SQL statements are executed directly; the statements are rather built on the fly by object-relational mappers such as Hibernate. Other scenarios use SQL statements as strings or prepared statements, which are executed via JDBC. However, analysing SQL statements is tricky as the language is very flexible.

In order to understand which columns have been selected, it is sufficient to use the ResultSet metadata and retrieve the column names from there. In my case, however, I need to extract this information from the query in advance and potentially enforce a specific sorting by adding columns to the ORDER BY clause. In this scenario, I need to parse the SQL statement and retrieve this information from the statement itself. Probably the best way to do this would be to implement a parser for the SQL dialect with ANTLR (ANother Tool for Language Recognition). But this is quite a challenge, so I decided to take a shortcut: FoundationDB.

The FoundationDB Parser

FoundationDB was a NoSQL database which provided several layers for supporting different paradigms at once. I am using the past tense here, because the project was acquired by Apple in 2015 and the open source project has not been pursued since. However, the Maven libraries for the software are still available at Maven Central. FoundationDB uses its own SQL parser, which understands standard SQL queries. These queries can be interpreted as a tree, and the parser library allows traversing SQL statements and analysing the nodes. We can use this tree to parse and interpret SQL statements and extract additional information.

The Foundations of FoundationDB

The FoundationDB parser can be included in your own project with the following Maven dependency:
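A sketch of what this declaration could look like; the coordinates and version are assumptions based on the artifact published to Maven Central and should be verified there:

<dependency>
    <groupId>com.foundationdb</groupId>
    <artifactId>fdb-sql-parser</artifactId>
    <version>1.6.1</version>
</dependency>

Once the dependency is on the classpath, a statement can be parsed into a tree of nodes. The class and method names in the following sketch reflect my understanding of the parser's Derby-derived API and should be treated as assumptions rather than verified documentation:

import com.foundationdb.sql.parser.SQLParser;
import com.foundationdb.sql.parser.StatementNode;

public class ParserExample {
    public static void main(String[] args) throws Exception {
        // Parse a SELECT statement into a tree of query nodes
        SQLParser parser = new SQLParser();
        StatementNode node = parser.parseStatement(
                "SELECT name, city FROM customers WHERE age > 30 ORDER BY name");
        // Print the parse tree for inspection; a visitor can be used for real analysis
        node.treePrint();
    }
}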

An Interactive Map with Leaflet, GeoJSON, and jQuery Using Bootstrap

A Side Project – An Interactive Parking Map of Innsbruck

When I recently moved to Innsbruck, I noticed that there was no interactive map for the parking system available. The amount of time you can park your car depends on the zone your car is located in. There are 20 parking zones and they are defined by their bordering streets in the city. Innsbruck is very dense and parking is always a hot topic. So I thought having an interactive map would make it easier to find zones where you can leave your car longer, even if you are not so familiar with the street names. The city of Innsbruck offers the GIS data at the open data portal of Austria, which made it quite easy to implement such a map. I used the following technologies for creating this map: Leaflet.js for the map, GeoJSON for the zone data, Mapbox for the tiles, and jQuery and Bootstrap for the page itself.

The source code is available at my Github account. The implementation is available here and also at the Austrian Open Data Portal.

Code Snippets

Explaining the whole source code would be a bit too much for this post and most of it is pretty self explanatory, but in the following section I would like to highlight a few things of the project.

Initializing

At the top of the HTML file, we load the JavaScript file which contains all the functions and variables for our implementation. The script is called parkraum.js (Parkraum means parking space in German).

The initialization of the JavaScript code is straightforward with jQuery. I structured the initialization into a few components, as you can see in the following example.

<script>
    $(document).ready(function() {
	initMap();
	placeZonesOnMap();
	loadScrolling();
	populateParkzoneDropdown();
    });
</script>

We first initialize the map itself, then place the parking zones, initialize the page scrolling to make it smoother and finally populate the drop-down menu with the available parking zones.

Initialize the Map

The first step is the initialization of the map with Leaflet. Note that map is a global variable. The coordinates [47.2685694, 11.3932759] are the center of Innsbruck and 14 is the zoom level. In order to offer the map on a public page, you need to register with Mapbox, a service which provides the tiles for the map. Mapbox is free for 50k map views per month. Make sure to use your own key and show some attribution.

// Initialize map and add a legend
function initMap() {
    map = L.map('map').setView([47.2685694, 11.3932759], 14);
    tileLayer = L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token=YOUR_API_KEY', {
        maxZoom: 18,
        attribution: 'Map data &copy; <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, ' +
            '<a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
            'Imagery © <a href="http://mapbox.com">Mapbox</a>',
        id: 'mapbox.light'
    }).addTo(map);

    addLegend();
}

We also add a legend to the map to indicate the parking area type with colors. There are four types of parking areas and we just add little squares with those colors to the map. The legend improves the readability of the map.

/* Add a legend to the map. */
function addLegend() {
    var legend = L.control({position: 'bottomright'});
    legend.onAdd = function (map) {
        var div = L.DomUtil.create('div', 'info legend');
        div.innerHTML =
            '<i style="background:#72B2FF;"></i><span style="font-weight: 600;">90 Minuten Kurzparkzone</span><br>' +
            '<i style="background:#BEE7FF;"></i><span style="font-weight: 600;">180 Minuten Kurzparkzone</span><br>' +
            '<i style="background:#A3FF72;"></i><span style="font-weight: 600;">Werktags Parkstraße</span><br>' +
            '<i style="background:#D8D0F4;"></i><span style="font-weight: 600;">Parkstraße</span><br>';
        return div;
    };

    legend.addTo(map);
}

The final map with the legend looks like this:

Reading Data, Adding Layers

I obtained the shapefiles with the geographic information from the Austrian Open Data Portal. The data seems to be exported from ArcGIS and I manually converted it into GeoJSON, which is directly supported by Leaflet.js without any plugins. To do this, I just copied the polygon data into the GeoJSON structure. I also split the quite large file into smaller chunks, one file per parking zone.

Below you can see an example of what the JSON looks like for parking zone C. The structure contains the short name (“C”), some additional information about the zone, attributes for the color, opacity and outline and of course the actual coordinates, which make up a polygon covering the parking area.

{
    "type": "Feature",
    "properties": {
        "parkzonenKuerzel": "C",
        "parkzoneInfo":"Kurzparkzone 90 min gebührenpflichtig, werktags Mo-Fr von 9-21 Uhr und Sa von 9-13 Uhr, ½ Stunde EUR 0.70, 1 Stunde EUR 1.40, 1½ Stunden EUR 2.10.",
        "style": {
            "weight": 2,
            "color": "#999",
            "opacity": 1,
            "fillColor": "#72B2FF",
            "fillOpacity": 0.5
        }
    },
    "geometry":{
        "type":"Polygon",
        "coordinates":[
            [
                [11.389460535000069,47.261162269000067],[11.389526044000036,47.261021078000056],
                ...
          ]
        ]
    }
}

I created a small object containing the name of a parking zone, the relative path of the JSON file and a place holder for layer information.

var parkzonen = [{
    parkzone: 'C',
    jsonFile: './data/zoneC.json',
    layer: ''
}, {
    parkzone: 'D',
    jsonFile: './data/zoneD.json',
    layer: ''
},
// ... one entry per parking zone ...
];

This array is globally available inside the JS file. The actual loading of the parking zones and the placement of the polygons on the map happens in the following functions, which use jQuery to load the JSON files. Note that jQuery expects the files to be delivered by a web server. So in order for this to work, you need to make sure that the files can be served by a web server, also on your local development machine. You can try this very easily by executing the following command within the root directory of your local development directory:

sudo http-server -p 80 .
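If you prefer Python over the Node-based http-server, its built-in web server should work just as well for a quick test:

sudo python3 -m http.server 80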

function placeZonesOnMap() {
    for (var zone in parkzonen) {
        var parkzonenKuerzel = parkzonen[zone].parkzone;
        var jsonURL = parkzonen[zone].jsonFile;
        $.ajaxSetup({
            beforeSend: function(xhr) {
                if (xhr.overrideMimeType) {
                    xhr.overrideMimeType("application/json");
                }
            }
        });

        $.getJSON(jsonURL, function(data) {

            placeZoneOnMap(data);

        });

    }
}
// Place zone on map
function placeZoneOnMap(data) {
    var parkzonenKuerzel = data.properties.parkzonenKuerzel;


    for (var zone in parkzonen) {
        if(parkzonen[zone].parkzone===parkzonenKuerzel){
            var layer = addGeoJSONToMap(data);
            parkzonen[zone].layer= layer;
        }
    }
}

function addGeoJSONToMap(data){
    var layer = L.geoJson([data], {
        style: function(feature) {
            return feature.properties && feature.properties.style;
        },
        onEachFeature: onEachFeature,
    }).addTo(map);
    return layer;
}

function onEachFeature(feature, layer) {
    var popupContent = 'Parkzone: <span style="font-weight: 900; font-size: 150%;">' + feature.properties.parkzonenKuerzel + '</span><br/>' + feature.properties.parkzoneInfo;

    layer.bindPopup(popupContent);
    var label = L.marker(layer.getBounds().getCenter(), {
        icon: L.divIcon({
            className: 'label',
            html: '<span style="font-weight: 900; font-size: 200%;color:black;">' + feature.properties.parkzonenKuerzel + '</span>',
            iconSize: [100, 40]
        })
    }).addTo(map);
}


The for loop iterates over the object where we stored all the parking zones and the locations of their JSON files. We load the files one by one and place the polygons on the map. This of course works in an asynchronous fashion. After this step, the polygons become visible on the map. We also add a clickable info box to all parking zones, which then displays additional information.

Dropdown Selection for Marking and Highlighting a Parking Zone

Users are able to select one of the parking zones from a drop-down list. In the first step, we add all parking zones to the list by iterating over the parking zones object. Once selected, the parking zone changes its color and is therefore highlighted. To do that, we remove the layer of the parking zone and add it again in a different color.

function populateParkzoneDropdown() {
    $('#selectParkzone').empty();
    $('#selectParkzone').append($('<option>').val('Bitte Auswählen').html('Zonen'));
    $.each(parkzonen, function(i, p) {
        $('#selectParkzone').append($('<option>').val(p.parkzone).html(p.parkzone));
    });
}

$("#selectParkzone").change(function () {
    var selectedParkZone = $("#selectParkzone").val();
    changeParkzoneColor(selectedParkZone);
});

function changeParkzoneColor(selectParkzone) {
    resetMap();

    for (var zone in parkzonen) {
        if (parkzonen[zone].parkzone == selectParkzone) {
            var layer = parkzonen[zone].layer;

            map.removeLayer(layer);
            layer.setStyle({
                fillColor: 'red',
                fillOpacity: 0.7
            });
            map.addLayer(layer);
        }
    }
}

// Reset all layers
function resetMap() {
    console.log('reset');
    map.eachLayer(function (layer) {
        map.removeLayer(layer);
    });
    map.addLayer(tileLayer);
    placeZonesOnMap();
}



Show the Current Location

Showing the current location is also a nice feature. By clicking on a button, the map scrolls to the current location, which is transmitted by the browser. Note that this only works if you deliver your pages with HTTPS!

function currentPosition() {
    if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(function(position) {
            latit = position.coords.latitude;
            longit = position.coords.longitude;
            // console.log('Current position: ' + latit + ' ' + longit);
            // Place a marker at the current position
            var abc = L.marker([position.coords.latitude, position.coords.longitude]).addTo(map);
            // Move the map to have the location in its center
            map.panTo(new L.LatLng(latit, longit));
        });
    }
}

The single page app uses Bootstrap for rendering the content nicely and providing the navigation features. The following code snippet shows how we can make all links scroll smoothly.

// use the first element that is "scrollable"
function scrollableElement(els) {
    for (var i = 0, argLength = arguments.length; i < argLength; i++) {
        var el = arguments[i],
            $scrollElement = $(el);
        if ($scrollElement.scrollTop() > 0) {
            return el;
        } else {
            $scrollElement.scrollTop(1);
            var isScrollable = $scrollElement.scrollTop() > 0;
            $scrollElement.scrollTop(0);
            if (isScrollable) {
                return el;
            }
        }
    }
    return [];
}

// Filter all links 
function filterPath(string) {
    return string
        .replace(/^\//, '')
        .replace(/(index|default).[a-zA-Z]{3,4}$/, '')
        .replace(/\/$/, '');
}

function loadScrolling() {
    var locationPath = filterPath(location.pathname);
    var scrollElem = scrollableElement('html', 'body');

    $('a[href*=\\#]').each(function() {
        var thisPath = filterPath(this.pathname) || locationPath;
        if (locationPath == thisPath &&
            (location.hostname == this.hostname || !this.hostname) &&
            this.hash.replace(/#/, '')) {
            var $target = $(this.hash),
                target = this.hash;
            if (target) {
                var targetOffset = $target.offset().top;
                $(this).click(function(event) {
                    event.preventDefault();
                    $(scrollElem).animate({
                        scrollTop: targetOffset
                    }, 400, function() {
                        location.hash = target;
                    });
                });
            }
        }
    });
}

WAMS Container Locations on an Interactive Map

The WAMS association is a social enterprise with the goal of creating jobs for people who are disadvantaged on the conventional labour market due to their particular life situations. The association also places special emphasis on recycling and the reuse of resources. For this reason, it maintains and operates collection points for used clothing and processes the donated garments. A list of the locations of these yellow containers can be found in the WAMS flyer, which is available on the association's homepage. For all those who are not yet familiar with Innsbruck's street names or whose geographical memory is patchy, I have put the locations on a map. The source code is available on Github and described here.

The Innsbruck Parking Zone System

Innsbruck's parking space is divided into 20 parking zones. In these zones, parking is subject to fees at different times. Residents with their main residence in Innsbruck can, if the requirements are met, apply for a residential parking permit. The static parking zone plan of the city of Innsbruck can be found here, and I created an interactive parking zone plan here. A click on the image takes you there. You can also find this web application in the official application catalogue of the Open Government Data initiative.

Open Data

The data for the map overlay is based on the dataset Parkzonen in Innsbruck provided by the city of Innsbruck. I obtained the data from the Open Government Data Catalog here and converted it into the GeoJSON format. The data is dated 03.02.2016, the unique data identifier is 8ffd16df-e7b8-423b-8348-199e6a5bf0ca.

Open Source Software

This page was built with open source software. The map material comes from OpenStreetMap, the map tiles from Mapbox, and I implemented the zone overlays with Leaflet JS. The layout and design are done with Bootstrap and the interactive elements are implemented with jQuery.

Sollsteinhaus

The Sollsteinhaus is located above Hochzirl and is very easy to reach by public transport. The S5 takes you from Innsbruck's main station or the Westbahnhof to the Hochzirl station in a good quarter of an hour. Coming from Innsbruck, you can walk straight through the gate at the (train) end of the northern (mountain-side) platform and you are already on the well signposted hiking trail 213 to the Sollsteinhaus. At the beginning, the trail leads comfortably through the forest and after a few minutes you reach a forest road, which turns out to be rather steep and exhausting. You follow this forest road for about 1.5 hours along the very good signposting until you reach the material ropeway of the Sollsteinhaus. After another half hour you reach the private Solnalm, which should not be mistaken for the destination. The trail continues along narrow paths, through a stream bed and up a few serpentines to the Sollsteinhaus at 1805 m above sea level. Unfortunately, the Sollsteinhaus has been under renovation since 25.09.2016 (as of 27.09.2016) and is therefore closed. The hike is also well described on the Almenrausch portal. The total hiking time to the Sollsteinhaus was about 2.5 hours.

Create a Category Page for one Specific Category and Exclude this Category from the Main Page in WordPress

This blog has served as my digital notebook for more than eight years and I use it to collect all sorts of things that I think are worth storing and sharing. Mainly, I blog about tiny technical bits, but recently I also started to write about my life here in Innsbruck, where I try to discover what this small city and its surroundings have to offer. The technical articles are written in English, as naturally the majority of visitors understands this language. The local posts are in German for the same reason. My intention was to separate these two topics in the blog and not let the posts create any clutter between the languages.

Child Themes

When tinkering with the code of your WordPress blog, it is strongly recommended to deploy and use a child theme. This allows you to reverse changes easily and, more importantly, to update the theme without having to re-implement your adaptations after each update. Creating a child theme is very easy and described here. In addition, I would recommend using some sort of code versioning tool, such as Git.
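As a quick sketch, the heart of such a child theme is a style.css whose header points to the parent theme (here dazzling, the parent referenced in the page template further below); the child theme name is of course up to you:

/*
 Theme Name: Dazzling Child
 Template:   dazzling
*/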

Excluding a Category from the Main Page

WordPress offers user defined categories out of the box and a category page for each category. This model does not fit well for my blog, where I have static pages and a time series of blog posts on the main page. In order to prevent the posts about Innsbruck from showing up on the main page, the category 'Innsbruck' needs to be excluded. We can create or modify the file functions.php in the child theme folder and add the following code.

<?php
add_action( 'wp_enqueue_scripts', 'theme_enqueue_styles' );
function theme_enqueue_styles() {
        wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );

}

// Exclude Innsbruck Category from Main Page
function exclude_category($query) {
     if ( $query->is_home() ) {
         // Get the category ID of the category Innsbruck
         $innsbruckCategory = get_cat_ID( 'Innsbruck' );
         // Add a minus in front of the string
         $query->set('cat','-' . $innsbruckCategory);

      }
     return $query;
}
add_filter('pre_get_posts', 'exclude_category');

?>

This adds a filter which gets executed before the posts are collected. We omit all posts of the category Innsbruck by adding a minus as prefix to the category ID. Of course you could also look up the category ID in the administration dashboard, by hovering with your mouse over the category name, and save one database query.
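With the ID known (the page template below uses 91 for the Innsbruck category), the lookup can be skipped; a minimal variant of that line in the filter would then be:

// Hard-coded category ID; 91 is the Innsbruck category used in the page template below
$query->set('cat', '-91');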

A Custom Page Specific for one Category

In the second step, create a new page in the dashboard. This page will contain all posts of the Innsbruck category that we will publish. Create the file page.php in your child theme folder and use the following code:

<?php
/**
 * The template for displaying all pages.
 *
 * This is the template that displays all pages by default.
 * Please note that this is the WordPress construct of pages
 * and that other 'pages' on your WordPress site will use a
 * different template.
 *
 * @package dazzling
 */

    get_header();
?>
    <div id="primary" class="content-area col-sm-12 col-md-8">
        <main id="main" class="site-main" role="main">

<?php
     // Specify the arguments for the post query
     $args = array(
        'cat' => '91', // Innsbruck category id
        'post_type' => 'post',
        'posts_per_page' => 5,
        'paged' => ( get_query_var('paged') ? get_query_var('paged') : 1),
    );

    if( is_page( 'innsbruck' )) {
        query_posts($args);
    }
?>

<?php while ( have_posts() ) : the_post(); ?>
    <?php get_template_part( 'content', 'post' ); ?>
    <?php
        // If comments are open or we have at least one comment, load up the comment template
        if ( comments_open() || '0' != get_comments_number() ) :
            comments_template();
        endif;
    ?>
<?php endwhile; // end of the loop. ?>


<div class="navigation">
    <div class="alignleft">&lt;?php next_posts_link('&laquo; Ältere Beiträge') ?&gt;&lt;/div&gt;
    <div class="alignright">&lt;?php previous_posts_link('Neuere Beiträge &raquo;') ?&gt;&lt;/div&gt;
</div>


    </main>&lt;!-- #main --&gt;
</div>&lt;!-- #primary --&gt;
<?php get_sidebar(); ?>
<?php get_footer(); ?>

In this code snippet, we define a set of arguments which are used for filtering the posts of the desired category. In this example, I used the ID of the Innsbruck category (91) directly. We define that we want to display posts only, 5 per page. An important aspect is the pagination. When we only display posts of one category, we need to make sure that WordPress counts the pages correctly. Otherwise the page would always display the same posts, regardless of how often the user clicks on the next page button. The reason is that this button uses the global paged variable, which is set correctly in the example above.

The if conditional makes sure that only the pages from the Innsbruck category are displayed. The while loop then iterates over all posts and displays them. At the bottom we can see the navigational buttons for older and newer posts of the Innsbruck category.

Persistent Data in a MySQL Docker Container

Running MySQL in Docker

In a recent article on Docker in this blog, we presented some basics for dealing with data in containers. This article will present another popular application for Docker: MySQL containers. Running MySQL instances in Docker allows isolating database infrastructure with ease.

Connecting to the Standard MySQL Container

The description of the MySQL Docker image provides a lot of useful information on how to launch and connect to a MySQL container. The first step is to create a standard MySQL container from the latest available image.

sudo docker run \
   --name=mysql-instance \
   -e MYSQL_ROOT_PASSWORD=secret \
   -p 3307:3306 \
   -d \
   mysql:latest

This creates a MySQL container where the root password is set to secret. As the host is already running its own MySQL instance (which has nothing to do with this Docker example), the standard port 3306 is already taken. Thus we publish port 3307 on the host system and forward it to the standard port 3306 of the container.

Connect from the Host

We can then connect from the command line like this:

mysql -uroot -psecret -h 127.0.0.1 -P3307

We could also provide the hostname localhost for connecting to the container, but as the MySQL client per default assumes that a localhost connection goes via a socket, this would not work. Thus, when using the hostname localhost, we need to specify the protocol TCP, so that the client connects via the network interface.

mysql -uroot -psecret -h localhost --protocol TCP -P3307

Connect from other Containers

Connecting from a different container to the MySQL container is pretty straightforward. Docker allows linking two containers and then using the exposed ports between them. The following command creates a new Ubuntu container and links it to the MySQL container.

sudo docker run -it --name ubuntu-container --link mysql-instance:mysql-link ubuntu:16.10 bash

After this command, you are in the terminal of the Ubuntu container. We then need to install the MySQL client for testing:

# Fetch the package list
root@7a44b3e7b088:/# apt-get update
# Install the client
root@7a44b3e7b088:/# apt-get install mysql-client
# Show environment variables
root@7a44b3e7b088:/# env

The last command gives you a list of environment variables, among which is the IP address and port of the MySQL container.

MYSQL_LINK_NAME=/ubuntu-container/mysql-link
HOSTNAME=7a44b3e7b088
TERM=xterm
MYSQL_LINK_ENV_MYSQL_VERSION=5.7.14-1debian8
MYSQL_LINK_PORT=tcp://172.17.0.2:3306
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYSQL_LINK_PORT_3306_TCP_ADDR=172.17.0.2
MYSQL_LINK_PORT_3306_TCP=tcp://172.17.0.2:3306
PWD=/
MYSQL_LINK_PORT_3306_TCP_PORT=3306
SHLVL=1
HOME=/root
MYSQL_LINK_ENV_MYSQL_MAJOR=5.7
MYSQL_LINK_PORT_3306_TCP_PROTO=tcp
MYSQL_LINK_ENV_GOSU_VERSION=1.7
MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD=secret
_=/usr/bin/env

You can then connect either manually or by using the environment variables:

mysql -uroot -psecret -h 172.17.0.2
mysql -uroot -p$MYSQL_LINK_ENV_MYSQL_ROOT_PASSWORD -h $MYSQL_LINK_PORT_3306_TCP_ADDR -P $MYSQL_LINK_PORT_3306_TCP_PORT

If you only require a MySQL client inside a container, simply use the MySQL image from Docker. Batteries included!
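A quick sketch of that approach, reusing the link alias from above so that no IP address has to be looked up (adjust the password to your setup):

sudo docker run -it --rm --link mysql-instance:mysql-link mysql:latest mysql -h mysql-link -uroot -psecret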

Persistent Docker Containers

Docker Fundamentals

Docker has become a very popular tool for orchestrating services. Docker is much more lightweight than virtual machines; for instance, containers do not require a boot process. Docker follows the philosophy that one container serves only one process. So in contrast to virtual machines, which often bundle several services together, Docker is built for running single services per container. If you come from the world of virtualised machines, Docker can be a bit confusing in the beginning, because it uses its own terminology. A good point to start is, as always, the documentation, and there are plenty of great tutorials out there.

Images and Containers

Docker images serve as templates for the containers. As images and containers both have hexadecimal IDs, they are very easy to confuse. The following example shows step by step how to create a new container based on the Debian image and how to open shell access.

# Create a new docker container based on the debian image
sudo docker create -t --name debian-test debian:stable 
# Start the container
sudo docker start  debian-test
# Check if the container is running
sudo docker ps -a
# Execute bash to get an interactive shell
sudo docker exec -i -t debian-test bash

A shorter variant of creating and launching a new container is listed below. The run command creates a new container and starts it automatically. Be aware that this creates a new container every time, so assigning a container name helps with not confusing the image with the container. The command run is particularly tricky, as you would expect it to run (i.e. launch) an existing container only. In fact, it creates a new one and starts it.

sudo docker run -it --name debian-test debian:stable bash

Important Commands

The following listing shows the most important commands:

# Show container status
sudo docker ps -a
# List available images
sudo docker images 
# Start or stop a container
sudo docker start CONTAINERNAME
sudo docker stop CONTAINERNAME
# Delete a container
sudo docker rm CONTAINERNAME

You can of course create your own images, which will not be discussed in this blog post. It is just important to know that you can't move containers from your host to some other machine directly. You would need to commit the changes made to the image and create a new container based on that image. Please be aware that this does not include the actual data stored in that container! You need to manually export any data and files from the original container and import them into the new container again. This is another trap worth noting. You can, however, also mount data in the image, if the data is available at the host at the time of image creation. Details on data in containers can be found here.

Persisting Data Across Containers

The way Docker persists data needs getting used to in the beginning, especially as it is easy to confuse images with containers. Remember that Docker images serve only as templates. So when you issue the command sudo docker run …, this actually creates a container from an image first and then starts it. So whenever you issue this command again, you will end up with a new container which does not share any data with the previously created container.

Docker 1.9 introduced named data volumes, which allow creating dedicated volumes that can be used by several containers. Such volumes can be used for persisting data. The following listing shows how to create a data volume and mount it in a container.

# Create a data volume
sudo docker volume create --name data-volume-test
# List all volumes
sudo docker volume ls
# Delete the container
sudo docker rm debian-test
# Create a new container, now with the data volume 
sudo docker create -v data-volume-test:/test-data -t --name debian-test debian:stable
# Start the container
sudo docker start debian-test
# Get the shell
sudo docker exec -i -t debian-test bash

After logging into the shell, we can see the data volume mounted on the directory test-data:

root@d4ac8c89437f:/# ls -la
total 76
drwxr-xr-x  28 root root 4096 Aug  3 13:11 .
drwxr-xr-x  28 root root 4096 Aug  3 13:11 ..
-rwxr-xr-x   1 root root    0 Aug  3 13:10 .dockerenv
drwxr-xr-x   2 root root 4096 Jul 27 20:03 bin
drwxr-xr-x   2 root root 4096 May 30 04:18 boot
drwxr-xr-x   5 root root  380 Aug  3 13:11 dev
drwxr-xr-x  41 root root 4096 Aug  3 13:10 etc
drwxr-xr-x   2 root root 4096 May 30 04:18 home
drwxr-xr-x   9 root root 4096 Nov 27  2014 lib
drwxr-xr-x   2 root root 4096 Jul 27 20:02 lib64
drwxr-xr-x   2 root root 4096 Jul 27 20:02 media
drwxr-xr-x   2 root root 4096 Jul 27 20:02 mnt
drwxr-xr-x   2 root root 4096 Jul 27 20:02 opt
dr-xr-xr-x 267 root root    0 Aug  3 13:11 proc
drwx------   2 root root 4096 Jul 27 20:02 root
drwxr-xr-x   3 root root 4096 Jul 27 20:02 run
drwxr-xr-x   2 root root 4096 Jul 27 20:03 sbin
drwxr-xr-x   2 root root 4096 Jul 27 20:02 srv
dr-xr-xr-x  13 root root    0 Aug  3 13:11 sys
drwxr-xr-x   2 root root 4096 Aug  3 08:26 test-data
drwxrwxrwt   2 root root 4096 Jul 27 20:03 tmp
drwxr-xr-x  10 root root 4096 Jul 27 20:02 usr
drwxr-xr-x  11 root root 4096 Jul 27 20:02 var


We can navigate into that folder and create a 100 M data file with random data.

root@d4ac8c89437f:~# cd /test-data/
root@d4ac8c89437f:/test-data# dd if=/dev/urandom of=100M.dat bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 6.69175 s, 15.7 MB/s
root@d4ac8c89437f:/test-data# du -h .
101M    .



When we exit the container, we can see the file in the host file system  here:

stefan@stefan-desktop:~$ sudo ls -l /var/lib/docker/volumes/data-volume-test/_data
insgesamt 102400
-rw-r--r-- 1 root root 104857600 Aug  3 15:17 100M.dat

We can use this volume transparently in the container, but it does not depend on the container itself. So whenever we have to delete the container or want to use the data with a different container, this solution works perfectly. The following command shows how we mount the same volume in an Ubuntu container and execute the ls command to show the content of the directory.

stefan@stefan-desktop:~$ sudo docker run -it -v data-volume-test:/test-data-from-debian --name ubuntu-test ubuntu:16.10 ls -l /test-data-from-debian
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat

You can display a lot of useful information about a container with the inspect command. It also shows the data volume and where it is mounted.

sudo docker inspect ubuntu-test

...
        "Mounts": [
            {
                "Name": "data-volume-test",
                "Source": "/var/lib/docker/volumes/data-volume-test/_data",
                "Destination": "/test-data-from-debian",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
...


We delete the ubuntu container and create a new one. We then start the container, open a bash session and write some test data into the directory.

stefan@stefan-desktop:~$ sudo docker create -v data-volume-test:/test-data-ubuntu -t --name ubuntu-test ubuntu:16.10
f3893d368e11a32fee9b20079c64494603fc532128179f0c08d10321c8c7a166
stefan@stefan-desktop:~$ sudo docker start ubuntu-test
ubuntu-test
stefan@stefan-desktop:~$ sudo docker exec -it ubuntu-test bash
root@f3893d368e11:/# cd /test-data-ubuntu/
root@f3893d368e11:/test-data-ubuntu# ls
100M.dat
root@f3893d368e11:/test-data-ubuntu# touch ubuntu-writes-a-file.txt



When we check the Debian container, we can immediately see the written file, as the volume is transparently mounted.

stefan@stefan-desktop:~$ sudo docker exec -i -t debian-test ls -l /test-data
total 102400
-rw-r--r-- 1 root root 104857600 Aug  3 13:17 100M.dat
-rw-r--r-- 1 root root         0 Aug  3 13:42 ubuntu-writes-a-file.txt

Please be aware that the Docker volume is just a regular folder on the file system. Writing to the same file from both containers can lead to data corruption. Also remember that you can read and write the volume files directly from the host system.

Backups and Migration

Backing up data is also an important aspect when you use named data volumes as shown above. Currently, there is no way of moving Docker containers or volumes natively to a different host. The intention of Docker is to make the creation and destruction of containers very cheap and easy. So you should not get too attached to your containers, because you can re-create them very fast. This of course is not true for the data stored in volumes. So you need to take care of your data yourself, for instance by creating automated backups like this sudo tar cvfz Backup-data-volume-test.tar.gz /var/lib/docker/volumes/data-volume-test and restoring the data when needed into a new volume. How to backup volumes using a container is described here.
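As a rough sketch of that container-based variant (the names and paths are only examples), one can mount the volume read-only together with a host directory and let a throwaway container create the archive:

sudo docker run --rm -v data-volume-test:/volume:ro -v $(pwd):/backup debian:stable \
    tar czf /backup/data-volume-test.tar.gz -C /volume .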

Plotting Colourful Graphs with R, RStudio and Ggplot2

The Aesthetics of Data Science

Data visualization is a powerful tool for communicating results and recently receives more and more attention due to the hype around data science. Integrating a meaningful graph into a paper or your thesis can improve readability and understandability more than any formulas or extended textual descriptions. There exists a variety of different approaches for visualising data. Recently a lot of new JavaScript based frameworks have gained quite some momentum, which can be used in Web applications and apps. A more classical work horse for data science is the R project and its plotting engine ggplot2. The reason why I decided to stick with R is its popularity and flexibility, which is still impressive. Also, with RStudio there exists a convenient IDE which provides useful features for data scientists.

Plotting Graphs

In this blog post, I demonstrate how to plot time series data and use colours to highlight a specific aspect of the data. As with almost all techniques, R and ggplot2 require practice and training, which I realised again today when I spent quite a bit of time struggling with getting a simple plot right.

Currently I am evaluating two systems I developed and I needed to visualize their storage and execution time demands in comparison. My goal was to create a plot for each non-functional property, the execution time and the storage demand, while each plot should depict both systems' performance. Each system runs a set of operations, think of create, read, update and delete operations (CRUD). To visualize which of these operations has the most effect on the system, I needed to colourise each operation within one graph. This is the easy part. What was more tricky is to provide for each graph a defined set of colours which can be mapped to each instance of the variable. Things which have the same meaning in both graphs should be visualized in the same way, which requires a little hack.

Prerequisites

Install the following packages via apt

sudo apt-get install r-base r-recommended r-cran-ggplot2

and RStudio by downloading the deb file from the project homepage.

Evaluation Data

As an example, we plan to evaluate the storage demand of two different systems and compare the results. Consider the following sample data.

# Set seed to get the same random numbers for this example
set.seed(42);
# Generate 200 random data records
N <- 200
# Generate a random, increasing sequence of integers that we assume is the storage demand in some unit
storage1 =sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
storage2 = sort(sample(1:100000, size = N, replace = TRUE),decreasing = FALSE)
# Define the operations available and draw a random sample
operationTypes = c('CREATE','READ','UPDATE','DELETE')
operations = sample(operationTypes,N,replace=TRUE)
# Create the dataframe
df <- data.frame(id = 1:N, storage1, storage2, operations)
df
     id storage1 storage2 operations
1     1       24      238     CREATE
2     2      139     1755     UPDATE
3     3      158     1869     UPDATE
4     4      228     2146       READ
5     5      395     2967     DELETE
6     6      734     3252     CREATE
7     7      789     4049     DELETE
8     8     2909     4109       READ
9     9     3744     4835     CREATE
10   10     3894     4990       READ

....

We created a random data set simulating the characteristics of system measurement data. As you can see, we have a list of operations of the four types CREATE, READ, UPDATE and DELETE and a measurement value for the storage demand in both systems.

The Simple Plot

Plotting two graphs of the columns storage1 and storage2 is straightforward.

# Simple plot
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color="Storage 1")) +
  geom_point(aes(x=id,y=storage2,color="Storage 2")) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  scale_color_manual(values=c("Storage 1"="forestgreen", "Storage 2"="aquamarine"), 
                     name="Measurements", labels=c("System 1", "System 2"))

print(p1)

We assign a color to each point plot. Note that the color name “Storage 1”, for instance, does of course not denote a color, but it assigns a level to all points of that graph. This level can be thought of as a category, which ensures that all the points which belong to the same category have the same color. As you can see in the definition of the color scale, we assign the actual color to this level there. This is the result:

Plotting Levels

A common task is to visualise categories or levels of measurement data. In this example, there are four different levels we could observe: CREATE, READ, UPDATE and DELETE.

# Plot with levels
p1 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operations)) +
  geom_point(aes(x=id,y=storage2,color=operations)) +
  ggtitle("Overview of Measurements") +
  labs(color="Measurements") +
  scale_color_manual(values=c("CREATE"="darkgreen", 
                              "READ"="darkolivegreen", 
                              "UPDATE"="forestgreen", 
                              "DELETE"="yellowgreen"))
print(p1)

Instead of assigning two colours, one for each graph, we can also assign colours to the operations. As you can see in the definition of the graphs and the colour scale, we map the colours to the variable operations instead. As a result we get differently coloured points per operation, but we get these of course for both graphs in an identical fashion as the categories are the same for both measurements. The result looks like this:

Now this is obviously not what we want to achieve as we cannot differentiate between the two graphs any more.

Plotting the same Levels for both Graphs in Different Colours

This last part is a bit tricky, as ggplot2 does not allow assigning different colour schemes within one plot. There are some hacks for this, but they do not improve the readability of the code in my opinion. In order to apply different colour schemes for the two graphs while still using the categories, I appended two extra columns to the data set. If we add some differentiation between the two graphs and basically double the categories from four to eight, where each graph now uses its own four categories, we can also assign distinct colours to them.

df$operationsStorage1 <- paste(df$operations,"-Storage1", sep = '')
df$operationsStorage2 <- paste(df$operations,"-Storage2", sep = '')

p3 <- ggplot(df, aes(x,y)) +
  geom_point(aes(x=id,y=storage1,color=operationsStorage1)) +
  geom_point(aes(x=id,y=storage2,color=operationsStorage2)) +
  ggtitle("Overview of Measurements") +
  xlab("Number of Operations") +
  ylab("Storage Demand in MB") +
  labs(color="Operations") +
  scale_color_manual(values=c("CREATE-Storage1"="darkgreen", 
                              "READ-Storage1"="darkolivegreen", 
                              "UPDATE-Storage1"="forestgreen", 
                              "DELETE-Storage1"="yellowgreen",
                              "CREATE-Storage2"="aquamarine", 
                              "READ-Storage2"="dodgerblue",
                              "UPDATE-Storage2"="royalblue",
                              "DELETE-Storage2"="turquoise"))
print(p3)

We then assign the new column for each system individually as colour value. This ensures that each graph only considers the categories that we assigned in this step. Thus we can assign a different color scheme for each graph and print the corresponding colours in the label (legend) next to the chart. This is the result:

Now we can see which operation was used at every measurement and still be able to distinguish between the two systems.

From the Hungerburg to the Umbrüggler Alm, the Höttinger Alm and the Seegrube

Starting from the hiking hub Hungerburg, you reach the signposted hiking trails to the Umbrüggler Alm and the Arzler Alm behind the funicular station.

If you keep left, you reach the Umbrüggler Alm after only 30 minutes (description here). Following the signs, you reach the Höttinger Alm (1487 m) after another hour via forest roads, a short forest section and the ski slope. The last ascent, at a calm pace past the cattle, turns out to be almost a little mean: the final metres of altitude drag on, as the hut is already clearly in view.

After some refreshment, the route continues along the forest road towards the northeast. You follow the signs to the Bodensteinalm and cross the slope almost without any incline until you reach a fork. Keeping uphill, you reach the Bodensteinalm (1661 m) along a forest road after about 200 metres of altitude. You can now either follow the forest road up to the Seegrube, or struggle up the slope below the Seegrube cable car to its top station. From the Höttinger Alm to the Seegrube top station (1905 m) it takes about one and a half hours. You are rewarded with a fantastic view over Innsbruck and the Inn valley, as well as of the countless tourists in flip-flops and silk blouses. Join them and glide comfortably down into the valley. A further description of the tour can be found here.

Timelapse Photography with the Camera Module V2 and a Raspberry Pi Model B

Recently, I bought a camera module for the Raspberry Pi and experimented a little bit with the possibilities a scriptable camera provides. The new Camera Module V2 offers 8.08 MP from a Sony sensor and can be controlled with a well documented Python library. It allows taking HD videos and shooting still images. Assembly is easy, but the camera is attached with a rather short ribbon cable, which makes handling a bit cumbersome. For the moment, a modified helping hand from my soldering kit acts as a makeshift stand.

Initial Setup

The initial setup is easy and just requires a few steps, which is not surprising because most of the documentation is targeted at kids in order to encourage their inner nerd. Still works for me as well 🙂

Attach the cable to the raspberry pi as described here. You can also buy an adapter for the Pi Zero. Once the camera is wired to the board, activate the module with the tool raspi-config.

Then you are ready to install the Python library with sudo apt-get install python3-picamera, add your user to the video group with usermod -a -G video USERNAME and then reboot the Raspberry. After you have logged in again, you can start taking still images with the simple command raspistill -o output.jpg. You can find some more documentation and usage examples here.

Timelapse Photography

What I really enjoy is making timelapse videos with the Raspberry Pi, which gives a nice effect for everyday phenomena and allows observing processes which are usually too slow to follow. The following GIF shows a melting ice cube. I took one picture every five seconds.

A Small Python Script

The following script creates a series of pictures at a defined interval and stores all images, with filenames indicating the time of shooting, in a folder. It is rather self explanatory. The camera needs a little bit of time to adjust, so we set the adjustTime variable to 5 seconds. Then we take a picture every 300 seconds; each image has a resolution of 1024x768 pixels.

import os
import time
import picamera
from datetime import datetime

# Grab the current datetime which will be used to generate dynamic folder names
d = datetime.now()
initYear = "%04d" % (d.year)
initMonth = "%02d" % (d.month)
initDate = "%02d" % (d.day)
initHour = "%02d" % (d.hour)
initMins = "%02d" % (d.minute)
initSecs = "%02d" % (d.second)

folderToSave = "timelapse_" + str(initYear) + str(initMonth) + str(initDate) +"_"+ str(initHour) + str(initMins)
os.mkdir(folderToSave)

# Set the initial serial for saved images to 1
fileSerial = 1

# Set the adjustment time and the pause between two shots (in seconds)
adjustTime=5
pauseBetweenShots=300

# Create and configure the camera
with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    #camera.exposure_compensation = 5

    # Start the preview and give the camera a couple of seconds to adjust
    camera.start_preview()
    try:
        time.sleep(adjustTime)

        start = time.time()
        while True:
            d = datetime.now()
            # Set FileSerialNumber to 000X using four digits
            fileSerialNumber = "%04d" % (fileSerial)

            # Capture the CURRENT time (not start time as set above) to insert into each capture image filename
            hour = "%02d" % (d.hour)
            mins = "%02d" % (d.minute)
            secs = "%02d" % (d.second)
            camera.capture(str(folderToSave) + "/" + str(fileSerialNumber) + "_" + str(hour) + str(mins) + str(secs) + ".jpg")

            # Increment the fileSerial
            fileSerial += 1
            time.sleep(pauseBetweenShots)

    except KeyboardInterrupt:
        print ('interrupted!')
        # Stop the preview and close the camera
        camera.stop_preview()

finish = time.time()
print("Captured %d images in %d seconds" % (fileSerial,finish - start))

This script can then run unattended and creates a batch of images on the Raspberry Pi.

Image Metadata

The file name preserves the time of the shot, so that we can see later when a picture was taken. But the camera also stores EXIF metadata, which can be used for processing. You can view the data with the exiftool.
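For instance, the metadata of the frame shown below can be printed like this:

exiftool 1052.jpg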

ExifTool Version Number         : 9.46
File Name                       : 1052.jpg
Directory                       : .
File Size                       : 483 kB
File Modification Date/Time     : 2016:07:08 08:49:52+02:00
File Access Date/Time           : 2016:07:08 09:19:14+02:00
File Inode Change Date/Time     : 2016:07:08 09:17:52+02:00
File Permissions                : rw-r--r--
File Type                       : JPEG
MIME Type                       : image/jpeg
Exif Byte Order                 : Big-endian (Motorola, MM)
Make                            : RaspberryPi
Camera Model Name               : RP_b'imx219'
X Resolution                    : 72
Y Resolution                    : 72
Resolution Unit                 : inches
Modify Date                     : 2016:07:05 08:37:33
Y Cb Cr Positioning             : Centered
Exposure Time                   : 1/772
F Number                        : 2.0
Exposure Program                : Aperture-priority AE
ISO                             : 50
Exif Version                    : 0220
Date/Time Original              : 2016:07:05 08:37:33
Create Date                     : 2016:07:05 08:37:33
Components Configuration        : Y, Cb, Cr, -
Shutter Speed Value             : 1/772
Aperture Value                  : 2.0
Brightness Value                : 2.99
Max Aperture Value              : 2.0
Metering Mode                   : Center-weighted average
Flash                           : No Flash
Focal Length                    : 3.0 mm
Maker Note Unknown Text         : (Binary data 332 bytes, use -b option to extract)
Flashpix Version                : 0100
Color Space                     : sRGB
Exif Image Width                : 1024
Exif Image Height               : 768
Interoperability Index          : R98 - DCF basic file (sRGB)
Exposure Mode                   : Auto
White Balance                   : Auto
Compression                     : JPEG (old-style)
Thumbnail Offset                : 1054
Thumbnail Length                : 24576
Image Width                     : 1024
Image Height                    : 768
Encoding Process                : Baseline DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Aperture                        : 2.0
Image Size                      : 1024x768
Shutter Speed                   : 1/772
Thumbnail Image                 : (Binary data 24576 bytes, use -b option to extract)
Focal Length                    : 3.0 mm
Light Value                     : 12.6

Processing Images

The Raspberry Pi would need a lot of time to create an animated GIF or a video from these images. This is why I decided to add new images automatically to a Git repository on GitHub and fetch the results on my desktop PC. I created a new Git repository and adapted the script shown above to store the images within the folder of the repository. I then use the following script to add and push the images to GitHub using a cronjob.

cd /home/stefan/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo $now
git pull
git add *.jpg
git commit -am "New pictures added $now"
git push

You can add this to your user's cron table with crontab -e and the following line, which adds the images every 5 minutes.

*/5	*	*	*	*	/home/stefan/Github/Timelapses/addToGit.sh

On a more potent machine, you can clone the repository and pull the new images like this:

cd /home/stefan-srv/Github/Timelapses
now=$(date +"%m_%d_%Y %H %M %S")
echo "$now"
git pull --rebase

The file names are convenient for reading the date when a picture was taken, but most of the Linux tools require the files to be named as a sequence. The following code snippet renames the files into a sequence with four digits, padded with zeros.

a=1
for i in *.jpg; do
  new=$(printf "%04d.jpg" "$a") #04 pad to length of 4
  mv -- "$i" "$new"
  let a=a+1
done

Animated Gifs

ImageMagick offers a set of great tools for image processing. With its convert tool, you can create animated GIFs from a series of images like this:

convert -delay 10 -loop 0 *.jpg Output.gif

This adds a delay after each image and loops the GIF infinitely. ImageMagick requires a lot of RAM for larger GIFs and does not handle memory allocation well, but the results are still nice. Note that the files get very large, so a smaller resolution might be more practical.
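For instance, scaling the frames down during the conversion (the percentage is just an example) keeps the resulting file noticeably smaller:

convert -delay 10 -loop 0 -resize 50% *.jpg Output.gif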

Still Images to Videos

The still images can also be converted into videos. Use the following command to create a video with 10 frames per second:

avconv -framerate 10 -f image2 -i %04d.jpg -c:v h264 -crf 1 out.mov

Example: Nordkette at Innsbruck, Tirol

This timelapse video of the Inn Valley Range in the north of the city of Innsbruck has been created by taking a picture with a Raspberry Pi Camera Module V2 every 5 minutes. This video consists of 1066 still images.