Persist Web Radio Streams in the Pi MusicBox

After upgrading my Raspberry Pi Model B to the new Raspberry Pi 2 a few weeks ago, I finally found a new purpose for the old device. There exist several images for the Pi with a focus on Internet radio streaming. After doing a little research, I decided to use the Pi MusicBox, which comes with a headless audio player and supports a whole lot of different sources and formats: Spotify, SoundCloud, Google Music, Podcasts (with iTunes, gPodder directories), local and networked music files (MP3/OGG/FLAC/AAC), Web radio (with TuneIn, Dirble, AudioAddict, Soma FM directories), Subsonic. The system is rather easy to set up; instructions are available at the project homepage. The Pi MusicBox comes with a nice responsive Web interface which allows browsing your music sources on SD card, USB drive or even network shares. Additionally you can integrate your music streaming services.

So I finally upgraded my kitchen radio by attaching the Pi via its analogue audio output. Of course this is nothing for audiophiles, but neither are Web streams usually. I am quite happy with the quality. All I needed was the Raspberry Pi, a phone charger, a 1GB SD card, the Edimax Wifi dongle and an audio cable for my radio. The setup is straightforward, although it took a few attempts to put the image on the SD card. You then just need to enter the Wifi connection data and boot the device. The boot process is rather slow, even compared to other Raspberry Pi operating systems. If you plan to use your Android phone for controlling the streaming services, you should consider assigning a static IP, as Linux doesn't always support zeroconf out of the box.

Another thing which needs tweaking is the list of Web radios. Currently the software does not store them persistently on the drive of the Pi, but rather in your browser. Therefore you do not have access to previously stored radio stations on all devices, or if you change your browser. Fortunately, adding the radio streams persistently is very easy.

  1. Navigate to the settings section in the Web interface and select MusicBox. Activate SSH and assign a root password.

  2. Reboot

  3. Connect via SSH to the device by using the password you just specified:
     ssh root@192.168.0.5

  4. Search for a preconfigured radio station to find out where the system stores the URIs:
     root@musicpi:~# grep -r "WorldService" /

  5. Edit the file and add your favorite radio stations:
     root@musicpi:~# nano /boot/config/streamuris.js

The radio stations are now available in the Web interface on all your devices. The file should look like this:

//root@musicpi:/# cat boot/config/streamuris.js
/**
* Default streamuris which appear in the webinterface. Edit if you like.
* Take care when editing. Only edit the stuff between ''
* And don't use the default Windows Notepad for this (use Notepad++ on Windows)
*/

var streamUris = [];
//fill with defaults
//template:
// streamUris.push(['', '']);
streamUris.push(['Swiss Jazz', 'http://www.radioswissjazz.ch/live/aacp.m3u']);
streamUris.push(['Swiss Classic', 'http://www.radioswissclassic.ch/de/live/aacp.m3u']);
streamUris.push(['Swiss Groove', 'http://184.154.202.243:8053/']);
streamUris.push(['WeFunk', 'http://www.wefunkradio.com/play/radio.pls']);
streamUris.push(['Sputnik', 'http://www.sputnik.de/m3u/live.hi.m3u']);
streamUris.push(['Bayern Klassik', 'http://streams.br.de/br-klassik_2.m3u']);
streamUris.push(['Bayern Nachrichten', 'http://streams.br.de/b5aktuell_2.m3u']);
streamUris.push(['Byte FM', 'http://www.byte.fm/stream/bytefm.m3u']);
streamUris.push(['Klassik Radio 96', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradio128/livestream.mp3']);
streamUris.push(['88.6 Wien', 'http://www.radiostream.de/stream/36889.asx']);
streamUris.push(['Edge Movie Soundtracks', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradiomovie128/livestream.mp3']);
streamUris.push(['Edge Oper', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradioopera128/livestream.mp3']);
streamUris.push(['Edge Lounge', 'http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradiolounge128/livestream.mp3']);
streamUris.push(['Ö1 Live', 'mms://apasf.apa.at/oe1_live_worldwide']);
streamUris.push(['Ö1 Info', 'mms://apasf.apa.at/oe1_inforadio_worldwide']);
streamUris.push(['FM4', 'http://mp3stream1.apasf.apa.at:8000/listen.pls']);
streamUris.push(['SuperFly Wien', 'http://89.207.144.68:8080/live']);
streamUris.push(['BBC WorldService', 'http://vprbbc.streamguys.net:8000/vprbbc24.mp3']);
streamUris.push(['Groove Salad - Soma FM', 'http://somafm.com/groovesalad.pls']);
streamUris.push(['Kiss FM Berlin', 'http://stream.kissfm.de/kissfm/mp3-128/internetradio/']);
streamUris.push(['Diva Radio', 'http://195.154.116.117:10780']);

This is the result:

Alternatively you can also mount the SD card in your PC and edit the file a bit more comfortably. You will find it in the config folder of the boot partition. Note that you probably have to clear your browser history before the newly added stations become visible.
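A minimal sketch for doing this on a Linux PC, assuming the boot partition shows up as /dev/sdb1 (the device name is an assumption; verify it with lsblk first):

# Check the device name with lsblk before mounting!
sudo mount /dev/sdb1 /mnt
sudo nano /mnt/config/streamuris.js
sudo umount /mnt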

Remember that the Pi does not have a power button. You can switch it off, but it remains connected to the power source all the time; however, you cannot switch it on again afterwards without resetting it. The low power consumption is also a reason why the Pi is such a nice device for server applications. If you forget to pause or stop the stream, the device will continuously create traffic. Therefore it might be a good idea to limit the connectivity during the night, for instance, or to use a timer switch.

Update (13.03.2016)

The Music Pi has now moved into the living room and I connected it via HDMI to my Yamaha RX receiver. No settings needed to be adapted: after rebooting, the Pi automatically uses the only connected port, which is now the HDMI port instead of the analogue output. Being fully digital now, the fidelity of the sound improved audibly, and we enjoy Radio Swiss Jazz every morning.

Update OwnCloud’s Admin UserID

After I installed OwnCloud on a local server, I realized that my choice of the administrative username was ambiguous and that I would never be able to remember it. OwnCloud's interface allows you to change the full name of a user, but not the login name (user id). As OwnCloud stores its complete configuration in a database, I decided to update the information directly in the MySQL instance. The OwnCloud documentation did not show exactly which tables use or reference the user ids, so I needed to find out myself to be sure.

In a first step I created a dump of the OwnCloud database:

mysqldump -u root -p oc_database > oc_database.sql

The dump file contains the table definitions as well as the INSERT statements you would need to recover the database. Therefore, whenever the userid is used in the database, an INSERT statement contains it. Thus a simple grep command delivers all records referencing or including the user id.

grep "YOUR-OLD-USERNAME" oc_database.sql

You then get a list of those lines in the file which contain the userid you would like to change. As these are all INSERT statements, you can easily see which table references the username. All you have to do now is update the tables delivered by the grep command. As the naming of the userid column is not completely consistent in OwnCloud, you should check each table individually by issuing a DESCRIBE command beforehand:

DESCRIBE oc_contacts_addressbooks;

Then you can be sure which column you need to update. The UPDATE statement is potentially harmful: if you forget to set the WHERE clause properly, all your data will be overwritten. As we have a dump anyway, this would not be critical, but it is nevertheless better to check in advance. You can verify which rows would be affected by issuing a SELECT statement with the same WHERE clause first.
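As a sketch, such a dry run for one of the tables below can be issued directly from the shell (database and table names as used in this post):

# Show the rows the corresponding UPDATE statement would touch
mysql -u root -p oc_database \
  -e 'SELECT * FROM oc_preferences WHERE userid="YOUR-OLD-USERNAME";'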

In my case the OwnCloud instance was rather young, so only a few tables contained traces of the old username. The following UPDATE statements were necessary for me; your mileage may vary. In some cases, the userid is not normalized (i.e. stored in a separate column), but used as part of a string in another column. One example is the oc_storages table, where the userid is used in the column id like this: "home::YOUR-NEW-USERNAME". So remember to update the complete string, not just the username.

UPDATE oc_contacts_addressbooks SET userid="YOUR-NEW-USERNAME" WHERE userid="YOUR-OLD-USERNAME";
UPDATE oc_group_user SET uid="YOUR-NEW-USERNAME" WHERE uid="YOUR-OLD-USERNAME";
UPDATE oc_preferences SET userid="YOUR-NEW-USERNAME" WHERE userid="YOUR-OLD-USERNAME";
UPDATE oc_storages SET id="home::YOUR-NEW-USERNAME" WHERE id="home::YOUR-OLD-USERNAME";
UPDATE oc_users SET uid="YOUR-NEW-USERNAME"  WHERE uid="YOUR-OLD-USERNAME";

After this change, verify that a new data directory for the new user has been created and move the data accordingly.
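Where that data directory lives depends on your installation; a hedged sketch, assuming the default location under the web root (check the 'datadirectory' entry in config.php for the actual path):

# Paths are assumptions -- check config.php for the actual data directory
ls /var/www/owncloud/data/
sudo mv /var/www/owncloud/data/YOUR-OLD-USERNAME/* /var/www/owncloud/data/YOUR-NEW-USERNAME/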

There was always snow at Christmas – Retrieving and processing historical weather data with Forecast IO

The weather was unusually mild this winter and many people seemed to think back to the days of their childhood. In our memories, there was always snow at Christmas, but is that actually true? The curious among us know what to do, so let's see what the Internet has to offer for verifying our hypothesis. Unfortunately there are hardly any open weather data sources that allow retrieving the weather conditions for an arbitrary historical date. In this short blog post I would like to show you how you can combine a few APIs and a little Java programming in order to retrieve the appropriate data. The complete project can be downloaded from GitHub.

What we need

We will use the following technologies to get what we want:

  • Google Geo-Coding API to retrieve longitude and latitude coordinates
  • Forecast IO to retrieve the weather data for a specific location
  • MySQL database to store the data
  • Hibernate for object-relational mapping
  • Maven for dependency management

What we want

We want to retrieve weather data for a specific location at a given time. This data should be stored in a relational database which allows processing the data conveniently.

How to get started

Recently I switched from Eclipse to IntelliJ and I am quite happy with it. Whatever IDE you prefer, create a new Maven project and copy the following dependencies into the pom.xml in order to retrieve the libraries.

<dependencies>
    <!-- Forecast IO API Wrapper -->
    <dependency>
        <groupId>com.github.dvdme</groupId>
        <artifactId>ForecastIOLib</artifactId>
        <version>1.5.1</version>
    </dependency>

    <!-- Google Geo API -->
    <dependency>
        <groupId>com.google.code.geocoder-java</groupId>
        <artifactId>geocoder-java</artifactId>
        <version>0.16</version>
    </dependency>

    <!-- Hibernate -->
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-core</artifactId>
        <version>4.3.4.Final</version>
    </dependency>
    <dependency>
        <groupId>org.hibernate</groupId>
        <artifactId>hibernate-envers</artifactId>
        <version>4.3.5.Final</version>
    </dependency>

    <!-- MySQL -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.31</version>
    </dependency>
</dependencies>

Set up the Backend

In order to store the data we want to retrieve, we will create a Java POJO mapping the data structure from the server response. We also need to setup MySQL and Hibernate. The structure of the project is shown in the following image:

Create a new database in MySQL and assign user rights. You could also consider using an embedded database if you do not already have a running MySQL instance. The Hibernate configuration file is shown below:

<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>

        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
        <property name="show_sql">false</property>
        <!-- drop the tables every time: create-drop -->
        <!-- use auto or update or create-drop -->
        <property name="hibernate.hbm2ddl.auto">validate</property>

        <property name="hibernate.current_session_context_class">thread</property>
        <property name="format_sql">true</property>
        <property name="use_sql_comments">false</property>

        <property name="hibernate.connection.autocommit">true</property>

        <property name="hibernate.jdbc.batch_size">20</property>

        <mapping class="WeatherData.WeatherData"/>

    </session-factory>

</hibernate-configuration>

The Hibernate session management is controlled with the following class. As you can see, sensitive information is read from Java property files.

import java.util.Properties;

import org.hibernate.HibernateException;
import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

public class HibernateUtil {
    private static SessionFactory sessionFactory;
    private static ServiceRegistry serviceRegistry;

    static {
        try {
            // Read the database connection settings from a property file
            String filename = "db.properties";
            Properties prop = Helpers.readPropertyFile(filename);

            String dbhost = prop.getProperty("dbhost");
            String dbport = prop.getProperty("dbport");
            String dbname = prop.getProperty("dbname");
            String dbuser = prop.getProperty("dbuser");
            String dbpw = prop.getProperty("dbpassword");

            String mysqlString = "jdbc:mysql://" + dbhost + ":" + dbport + "/" + dbname;
            System.out.println("db string: " + mysqlString);

            // Pass the sensitive settings to Hibernate on top of hibernate.cfg.xml
            Properties extraProperties = new Properties();
            extraProperties.setProperty("hibernate.connection.url", mysqlString);
            extraProperties.setProperty("hibernate.connection.username", dbuser);
            extraProperties.setProperty("hibernate.connection.password", dbpw);

            Configuration configuration = new Configuration();
            configuration = configuration.configure("hibernate.cfg.xml");
            configuration = configuration.addProperties(extraProperties);

            serviceRegistry = new StandardServiceRegistryBuilder().applySettings(configuration.getProperties()).build();
            sessionFactory = configuration.buildSessionFactory(serviceRegistry);

        } catch (HibernateException he) {
            System.err.println("Error creating Session: " + he);
            throw new ExceptionInInitializerError(he);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}

The database access information is read at runtime from the file db.properties; this gives you more flexibility and easier access to the sensitive data. We will use the same technique for the API key that you need for the Forecast API (see below). The property file for the database access simply contains this data:

# host address
dbhost=localhost
# port
dbport=3306
# database name
dbname=<<databasename>>
# user name
dbuser=<<username>>
# password
dbpassword=<<password>>

The Hibernate snippet for storing the data is very simple: it opens a session and stores the data, which is a WeatherData object.

this.session = HibernateUtil.getSessionFactory().openSession();
this.session.beginTransaction();
this.session.save(weatherData);
this.session.getTransaction().commit();
this.session.flush();
this.session.close();

Google Geo Coder API

Google offers a convenient API which provides the geo coordinates for any address you pass to it. The following snippet shows how to retrieve the longitude and latitude coordinates:

final Geocoder geocoder = new Geocoder();
GeocoderRequest geocoderRequest = new GeocoderRequestBuilder().setAddress(locationAddress).setLanguage("en").getGeocoderRequest();
GeocodeResponse geocoderResponse = null;

try {
    geocoderResponse = geocoder.geocode(geocoderRequest);
} catch (IOException e) {
    e.printStackTrace();
}

List<GeocoderResult> geoResultList = geocoderResponse.getResults();

System.out.println("Retrieved geo");
for (GeocoderResult result : geoResultList) {
    address = result.getFormattedAddress();
    GeocoderGeometry geometry = result.getGeometry();
    latitude = String.valueOf(geometry.getLocation().getLat());
    longitude = String.valueOf(geometry.getLocation().getLng());

    System.out.println(result.getFormattedAddress() + " lat: " + latitude + " long: " + longitude);
}

Now that we have the coordinates, we can pass this data to the Forecast API.

Powered by Forecast IO

Forecast IO is a service offering a REST API which you can call to retrieve historical weather data for any specific location. You need to register if you want to call the API, which gives you 1000 calls per day for free. The API is very well described and simple to use. We will use the Forecast IO API Wrapper in order to call the API from within Java. The API key is also stored in a property file, called api.properties. Copy your key into this file:

forecast-key=<<YOUR API CODE>>

In Java we read the key with the following snippet:

// set the API key
Helpers helper;
helper = new Helpers();
String filename = "api.properties";
Properties prop = null;
prop = Helpers.readPropertyFile(filename);
this.APIKEY = prop.getProperty("forecast-key");

Now you can access the API with the wrapper library. The code snippet retrieves the hourly weather data for a specified date.

        ForecastIO fio = new ForecastIO(this.APIKEY); //instantiate the class with the API key.
        fio.setUnits(ForecastIO.UNITS_SI);             //sets the units as SI - optional

        fio.setLang(ForecastIO.LANG_ENGLISH);

        fio.setTime(dateString);

        fio.getForecast(latitude, longitude);
      //  System.out.println("Latitude: " + fio.getLatitude());
      //  System.out.println("Longitude: " + fio.getLongitude());
      //  System.out.println("Timezone: " + fio.getTimezone());

        String key ="";
        String value ="";

        FIOHourly hourly = new FIOHourly(fio);

We then need to store the data in a Java object in order to persist it in the database.

        for (int i = 0; i < hourly.hours(); i++) {
            String[] h = hourly.getHour(i).getFieldsArray();
            String hour = String.valueOf(i + 1);
            System.out.println("Hour #" +hour);

            /**
             * Populate the map of data values
             */
            weatherDataHashMap.clear();
            weatherDataHashMap.put("Hour",hour);
            for (int j = 0; j < h.length; j++){

                key = h[j];
                value = hourly.getHour(i).getByKey(h[j]);
                if (value == null){
                    System.out.println("value war NULL");
                    value="";
                }
                System.out.println(key + ": " + value);
                System.out.println("\n");

                weatherDataHashMap.put(key,value);

            }


In a last step we need to populate the Java object and persist it in the database.

/**
 * Create the weather object
 */
WeatherData hourData = new WeatherData();
System.out.println("---------- " + weatherDataHashMap.get("Hour"));
hourData.setHour(this.parseIntValue(weatherDataHashMap.get("Hour")));
hourData.setSummary(weatherDataHashMap.get("summary"));
hourData.setIcon(weatherDataHashMap.get("icon"));
hourData.setWindspeed(this.parseDoubleValue(weatherDataHashMap.get("windSpeed")));
Date measureData = this.getDateFromString(weatherDataHashMap.get("time"));
hourData.setWeather_timestamp(measureData);
hourData.setHumidity(this.parseDoubleValue(weatherDataHashMap.get("humidity")));
hourData.setVisibility(this.parseDoubleValue(weatherDataHashMap.get("visibility")));
hourData.setWindBearing(this.parseIntValue(weatherDataHashMap.get("windBearing")));
hourData.setApparentTemperature(this.parseDoubleValue(weatherDataHashMap.get("apparentTemperature")));
hourData.setPrecipProbability(this.parseDoubleValue(weatherDataHashMap.get("precipProbability")));
hourData.setPrecipIntensity(this.parseDoubleValue(weatherDataHashMap.get("precipIntensity")));
hourData.setDewPoint(this.parseDoubleValue(weatherDataHashMap.get("dewPoint")));
hourData.setTemperature(this.parseDoubleValue(weatherDataHashMap.get("temperature")));
hourData.setPrecipType(this.removeQuotes(weatherDataHashMap.get("precipType")));
hourData.setAddress(address);
hourData.setLatitude(latitude);
hourData.setLongitude(longitude);

this.persist(hourData);



If you loop over the dates, you can retrieve a lot of interesting weather data from the service and perform your analysis. You can use MySQL Workbench directly or export the data, for instance to CSV.

[Image: Results]

The Chart

Now that we have the data, we can use a visualization API such as Google Charts (https://developers.google.com/chart/) for displaying and interpreting it. All you need to do is export the data into an appropriate format. The following SQL statement retrieves all measurements at 11 p.m. for the available Christmas days and transforms them into the row format expected by the Google Charts API (note that JavaScript Date months are zero-based, so December is month 11).

SELECT DISTINCT CONCAT("[new Date(",YEAR(weather_timestamp),",11,24),",temperature,"],") FROM weatherDB.WeatherData WHERE hour=23 ORDER BY weather_timestamp DESC;



Then you can use HTML and JavaScript code along the following lines for rendering the data.
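A minimal sketch using the Google Charts loader; the element ID and the sample rows are placeholders, to be replaced with the output of the SQL statement above:

<html>
  <head>
    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
    <script type="text/javascript">
      google.charts.load('current', {packages: ['corechart']});
      google.charts.setOnLoadCallback(drawChart);

      function drawChart() {
        var data = new google.visualization.DataTable();
        data.addColumn('date', 'Christmas Eve');
        data.addColumn('number', 'Temperature at 11 p.m.');
        // Paste the rows produced by the SQL statement here (sample values):
        data.addRows([
          [new Date(2014, 11, 24), 2.4],
          [new Date(2013, 11, 24), 4.1]
        ]);
        var chart = new google.visualization.LineChart(document.getElementById('chart'));
        chart.draw(data, {title: 'Temperature on Christmas Eve, 11 p.m.'});
      }
    </script>
  </head>
  <body>
    <div id="chart" style="width: 730px; height: 300px;"></div>
  </body>
</html>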

Obviously you could also automate the creation of the chart as well. The complete example shown in this post can be downloaded from my GitHub page.

Debugging Thunderbird Problems with Linux

Sometimes it can be hard to reproduce problems related to email, as many factors can cause the trouble. Thunderbird (and all other Mozilla products) provides logging facilities which allow tracing the activities of the application and detecting the source of an error. The logging mechanism is activated by setting two environment variables, as described in the documentation. The following two lines set up the environment:

export NSPR_LOG_MODULES=imap:2,smtp:3,timestamp
export NSPR_LOG_FILE=/tmp/mail.log

Thunderbird reads these variables during startup, and if NSPR_LOG_MODULES is set, the application writes logging information into the file specified in NSPR_LOG_FILE. The logging module lets you specify which protocol you are interested in (e.g. imap, smtp, pop3) and a log level from 1 to 5, where 5 is the most verbose. You can combine several protocols by separating them with a comma. With the timestamp property set, the logger prints a timestamp next to each log message, which allows easier tracing of the output.

You can use the tail command to read the output in real time.

tail -f /tmp/mail.log

Unfortunately there is a little catch which is not mentioned in the documentation, as it is Linux distribution specific. If you installed Thunderbird manually (e.g. in /opt/), the log file remains empty and the mechanism does not seem to work. The solution is to start Thunderbird from its installation directory and not via its shortcut in /usr/bin. Changing to the right directory and launching Thunderbird directly allows tracing the logs as described above:

/opt/thunderbird/thunderbird &

Build a SMS notification service

Automated event notification is useful for many applications. In a recent blog post, I presented a solution for using Twitter to notify about server events. In this post you will learn how to achieve the same with an SMS API. The rationale behind an SMS service is its robust availability.

There exist plenty of different services; I chose ClockworkSMS, as it provides a simple interface for several programming languages, plugins and a prepaid billing service. Once you have registered and bought some credit, you can start using the API. Integrating the service into your applications is easy, as libraries exist not only for several languages (C#, Java, Node.js, PHP, Python, Ruby), but plain HTTP requests are supported as well. Thus you can even send SMS text messages from the command line. When you execute the following command in bash, an SMS message containing the text will be sent to the number provided, no magic involved.

curl https://api.clockworksms.com/http/send.aspx -d "key=YOUR-API-KEY&to=4361122334455&content=Hello+World"

Simply insert your personal API key, provide the receiver's phone number and of course the content. The text of the message needs to be URL encoded. To extend the notification service, you can provide a callback URL which is called whenever an SMS message has been sent.

ClockworkSMS issues HTTP GET, form POST and XML (bulk) POST requests against the URL provided. The service appends message metadata such as an id, a status code and the number of the receiver to the request. This information can be processed in custom applications which react to the data. In the easiest scenario, you can set up a virtual host and process the log files. Each sent SMS message will result in a request from the ClockworkSMS servers to your server and thus create a log record with the metadata of the SMS message.

89.111.58.18 - - [30/Oct/2014:18:21:32 +0100] "GET /smsnotification.htm?msg_id=VE_282415671&status=DELIVRD&detail=0&to=4361122334455 HTTP/1.1" 200 330
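Since each callback ends up as a line in the access log, even a simple shell pipeline can summarize the delivery states (the log path and the smsnotification.htm file name are taken from the example above and will differ in your setup):

# Count the delivery states reported by the callback requests
grep "smsnotification.htm" /var/log/apache2/access.log \
  | grep -o "status=[A-Z]*" | sort | uniq -c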

The ClockworkSMS service also provides some plugins for WordPress, Nagios and other applications.

Using Twitter to get the current IP address of your Raspberry Pi

The Raspberry Pi can act as a rather neat server due to its low power consumption and its flexibility. As it is very small and can be powered with a standard cell phone charger, it is convenient to take it with you. But running in headless mode (i.e. no monitor attached) and without a keyboard connected, it can be a pain to "find" the device in a network, as it receives a dynamic IP address whenever it gets connected via Ethernet or wifi. Usually you just want an SSH console, so I wrote a little script based on ttytter, a Perl Twitter client, which tweets the current device IP address as soon as a connection is established. All you need is a Pi, Perl and a Twitter account.

Change to the root account, get the ttytter script, rename it and make it executable:

cd /opt/
wget http://www.floodgap.com/software/ttytter/dist2/2.1.00.txt
mv 2.1.00.txt ttytter.pl
chmod +x ttytter.pl

Now execute the script and follow the instructions. It will open the Twitter OAuth page where you receive a PIN which authenticates the ttytter application with your account so that it can tweet updates. The script places a keyfile with the credentials in root's home directory (~/.ttytterkey). Copy this file to /opt/ttytterkey, as the script below expects it there.

Next we create a little script which parses the currently assigned IP address; I got it from here. Open an editor and save the file with the name tweetPiip.sh. The Raspberry Pi does not have a hardware clock, hence it does not know the time. Having the exact time is essential for the Twitter API, therefore we use ntpdate to set the correct time before we call the API. After setting the time, the script calls the ttytter Twitter client and submits a tweet with the current IP address. Don't forget the shebang at the beginning of the file, and make it executable with chmod +x.

#!/bin/bash

# Parse the IP address currently assigned
myip=
while IFS=$': \t' read -a line ;do
    [ -z "${line%inet}" ] && ip=${line[${#line[1]}>4?1:2]} &&
        [ "${ip#127.0.0.1}" ] && myip=$ip
  done< <(LANG=C /sbin/ifconfig)

# Set the current time via an NTP server (the Pi has no hardware clock)
ntpdate ntp.ubuntu.com
# Store current time
DATE=$(date +"%Y-%m-%d %H:%M:%S")
# Tweet the IP address
echo -e "The pi just got $myip assigned at $DATE" | /opt/ttytter.pl -ssl -keyf=/opt/ttytterkey >/opt/tweetlog.txt 2>&1

We want the script to be called whenever a new network connection has been established. Hence we simply add it to the post-up hooks in /etc/network/interfaces. These hooks are called after an interface has come up.

auto lo

iface lo inet loopback
iface eth0 inet dhcp
post-up /opt/tweetPiip.sh

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
post-up /opt/tweetPiip.sh
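You can test the hooks without rebooting by bouncing the interface with the standard ifupdown tools:

# Re-runs the post-up hook defined in /etc/network/interfaces
ifdown wlan0 && ifup wlan0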

This script obviously assumes that an Internet connection is available and it does not implement any checks in case there is no connectivity. Knowing which IP address got assigned can be useful in many scenarios and is not limited to the Pi.

Use a TP-Link TL-WR710N Router as a Repeater with OpenWRT

The TL-WR710N offers five modes of operation:

  • Router mode
  • Client mode (connects ethernet devices to an existing wifi)
  • Repeater mode (extends existing wifis)
  • Access point mode (create a wifi hotspot from a wired connection)
  • WISP Client

Unfortunately the firmware installed on the device is very unstable and was barely usable for my purpose. Although the pre-installed software offers a nice interface for all the mentioned modes, I could not get connectivity with it.

Open source software to the rescue! Fortunately there exists an OpenWRT version which can be easily installed and which empowers this little device quite a bit. Have a look at the OpenWRT homepage and ensure that you have a supported hardware version (in Austria you will most likely get the EU 1.2 version with 8MB flash memory and 32MB RAM). You can use the upgrade dialog of the original firmware to upload the OpenWRT firmware. Be careful during these steps and avoid power outages and upload errors. Once you have installed the new firmware, just follow these steps to set up a range extender with OpenWRT.

  1. Connect your laptop with an Ethernet cable to the TL-WR710N router.

  2. Establish a telnet session. The default IP is 192.168.1.1

telnet 192.168.1.1

There is no standard password.
  3. For security reasons you should use an SSH session. To do so, you need to set a new root password by issuing the following command in the telnet session:

passwd

You will be prompted to enter the new root password twice. After you set the password, the SSH daemon becomes active and telnet gets disabled. Exit the telnet session with

exit
  4. Connect via SSH by using the following command:
ssh root@192.168.1.1

  5. In the next step you need to configure the network interfaces. We want one Ethernet port eth1, one loopback device lo and of course the wireless interface wlan0. Use the vi editor to open this file:

vi /etc/config/network
  6. Enter these interface definitions into the opened file:
config interface 'wan'
        option ifname 'wlan0'
        option proto 'dhcp'

config interface 'loopback'
    option ifname 'lo'
    option proto 'static'
    option ipaddr '127.0.0.1'
    option netmask '255.0.0.0'

config interface 'lan'
    option ifname 'eth1'
    option force_link '1'
    option type 'bridge'
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'
    option ip6assign '60'

Save and close with :wq

  7. Now we just need to configure the wireless connection. Use
vi /etc/config/wireless

to open the file.
  8. Enter the wireless connection details below:

config wifi-iface
      option device radio0
      option network wan
      option mode sta
      option ssid 'WEAK-NETWORK-SSID'
      option encryption psk2
      # option encryption none
      # option encryption wep
      # option encryption psk
      # option encryption psk2
      option key 'PASSWORD'

config wifi-device  radio0
    option type     mac80211
    option channel  11
    option hwmode    11g
    option path    'platform/ar933x_wmac'
    option htmode    HT20
    # REMOVE THIS LINE TO ENABLE WIFI:
#    option disabled 1

# This is the bridged interface
config wifi-iface
    option device   radio0
    option network  lan
    option mode     ap
    option ssid     'REPEATED-WIFI'
    # option encryption none
    option encryption psk2
    option key 'mobileAP'

This wireless configuration contains the details on how to connect to, and thereby extend, an existing wireless network. In the configuration example above the name of this network is WEAK-NETWORK-SSID and the password for the PSK2-secured network is PASSWORD. Other encryption methods you could use are:

              # option encryption none
              # option encryption wep
              # option encryption psk
              # option encryption psk2 

The second interface, REPEATED-WIFI, is your new, repeated wifi network to which you then connect your laptop or mobile phone.
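If you want to apply changes later without a full reboot, OpenWRT ships a small helper that reloads just the wireless configuration (run it in your SSH session):

# Apply the new wireless settings after editing /etc/config/wireless
wifi down
wifi up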

  9. Reboot the router with this command:
reboot

You should then see a second wifi with the name REPEATED-WIFI in the list of available networks. As the small router still has a better range than some devices, you should get a more stable connection.

Virtual Machines – The Perfect Test Environment

If you really want to dig into MySQL, you naturally have to try everything, turn every knob and test every detail setting. Since intensive tests should not be run on a production system, it makes sense to switch to a virtual environment that is isolated from the actual system. Such a test environment is very easy to set up with the free software VirtualBox. I have written detailed instructions on this in the blog accompanying the MySQL book.

Configuring a catch-all VirtualHost for Apache2

Recently I noticed an interesting behavior of my Apache2 setup when I introduced a new VirtualHost and enabled the site. Usually I store the configuration for each host in an individual file, because this is much easier to maintain. The main page that I serve is this WordPress blog, having the ServerName blog.stefanproell.at and several aliases. The configuration files have the same name as the ServerName of the VirtualHost they configure. When a request comes in, Apache2 passes it to the first VirtualHost with a server name or alias matching the request. If no match can be found, the request is handled by the first VirtualHost that can be found, which is the one whose config file has the alphabetically first file name. Without my noticing, the first config file to be read was exactly this blog. Other services and hosts have configuration file names starting with letters later in the alphabet than the blog's, hence they were read later. So every unknown request was handed over to the blog, which is fine, as I want it like this anyway.

But when I introduced a new host whose config file name was, say, 123.stefanproell.at, each request not answered by any other host was handled by the new host 123.stefanproell.at. So far no surprise at all, but then it got weird. As suggested by a lot of different sources and the official documentation, I declared a default page in a file with an alphabetically even earlier name: 000-default. This did not resolve the issue either.

The directive especially designed for this purpose, a wildcard VirtualHost, did not work as expected either. This directive

<VirtualHost _default_:*>
  DocumentRoot /www/default
</VirtualHost>

should, according to the documentation,

prevent any request going to the main server. A default vhost never serves a request that was sent to an address/port that is used for name-based vhosts. If the request contained an unknown or no Host: header it is always served from the primary name-based vhost (the vhost for that address/port appearing first in the configuration file).

But instead of serving an empty default page only for missed requests, Apache2 started to serve every request from the default page, although the other VirtualHosts were still defined. The same behavior also occurred when the specific _default_ directive was not applied.

What I ended up with was a solution where I explicitly defined a dummy, catch-all host whose configuration has the alphabetically first file name. So now I have a fully configured VirtualHost with the configuration file name 000-dummy.stefanproell.at and the content

<VirtualHost *:80>
    ServerName dummy.stefanproell.at
    DocumentRoot /var/www/default
</VirtualHost>

Do not forget to provide a DNS entry pointing to dummy.stefanproell.at as well.

Upgrading Samsung Ultra Series 5 Notebook with a SSD

During the holidays I finally found the time to upgrade my Samsung notebook by replacing the standard hard disk with a much faster SSD. There are plenty of tutorials available and the complete process is actually quite easy. For copying (and backing up) the data I used a docking station that can handle all kinds of disks, which is quite convenient for moving data between the two drives. My current setup consists of an "emergency" Windows 7 installation for anything which does not run on Linux, and my main Linux Mint MATE setup on a separate partition. In my case I only wanted to copy the Windows partition and install Mint from scratch, as this is very easy to do with Linux.

What you will need:

  • A new SSD (like the Sandisk Ultra Plus 256GB)
  • Docking station
  • USB stick with 2GB for booting a Linux Live distribution
  • USB stick with 8GB for booting a Windows repair console
  • External backup disk
  • Laptop

These are the steps I followed:

  1. Backup all important data to an external disk
  2. Boot the laptop from a live Linux distribution by using the imagewriter software and a USB stick (see the sketch after this list)
  3. Attach the new SSD to the docking station and attach the docking station to the laptop
  4. Launch gparted from the live system
  5. Copy the partitions you need by right-clicking on them and selecting copy
  6. In the upper right corner of gparted select the SSD
  7. Paste and leave 1 MB of space preceding the partition; make sure to select "round to MiB"
  8. Repeat for all required partitions
  9. Shutdown the laptop and follow these instructions to open the case and replace the disk.
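For step 2, writing the live image to the USB stick can also be done from the shell; a sketch, where the ISO name and the target device are assumptions you must verify first:

# Double-check the target device with lsblk -- writing to the wrong disk destroys data!
sudo dd if=linuxmint.iso of=/dev/sdX bs=4M
sync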

Then you can try booting from the new hard disk. If that fails, you can use the Windows 7 USB stick to run the repair tools.

The Linux Mint installation is straightforward. Use the Linux USB stick to boot the live system and install it as usual. After the system is installed, boot it and adjust the settings according to this tutorial.

The performance gain is impressive even without tuning. The boot time dropped from two minutes with the conventional drive to 20 seconds with the new SSD setup.

MySQL on DigitalOcean Servers Crashing

The smallest instance of a DigitalOcean droplet provides only 512MB of RAM. Do not forget that this is rather low in comparison with other Web servers. If you do not check your configuration files for Apache and MySQL and leave them at their defaults, you can easily run out of memory, because DO droplets come without swap by default. In the case of MySQL, you will be confronted with this error message:

InnoDB: Fatal error: cannot allocate memory for the buffer pool

The solution is rather simple, as provided by a comment in the support forum. Just follow this guide in order to enable some swap space. As the system is completely on an SSD, the performance impact should not be huge.
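For reference, a minimal sketch of such a swap setup (file size and path are examples; the linked guide covers the details):

# Create and enable a 1 GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swap file across reboots
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab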

Resolving the SVN node not found error quickly

There are many settings where you still require Microsoft Windows, and for those purposes where I can't use Linux directly I run a VirtualBox instance of Windows 7. The Windows 7 guest shares my Subversion repository with my Mint 15 host system via shared folders, and this works surprisingly well. Nevertheless it happened that I deleted a file on the host and tried to check it in on the guest. This corrupted the Subversion database and I was unable to perform any operations (update, commit, cleanup) in the repository of the guest anymore. The 1.7 version of Subversion comes with a small SQLite database for the management of the repository. You simply have to delete the offending node from this database directly to get a working copy back. With the following steps, you do not have to delete the working copy and check it out again, which can save you a lot of time with large repositories.

  1. Download SQLite and open the database of your local working copy.
    ./sqlite3 /path/to/working-copy/.svn/wc.db
  2. Find the root of all evil (provide the filename of the file that blocks your subversion repository). I added a wildcard % here so that we can omit the full path. Only do that if you know that there is only one file with this name.
    sqlite> SELECT checksum FROM NODES WHERE local_relpath LIKE "%filename";
    The system returns a checksum like this:
    $sha1$36fed30ab5ed74a1234569729b9dd1bb54ab5634
    This is the checksum of the file that causes the trouble. The checksum ensures that you delete the correct node, because it is unique for every file, in contrast to filenames for example.
  3. Dry run (does not change the database)
    The following SELECT statement returns the data from the specific file for a last check.
    SELECT * FROM NODES WHERE checksum = "$sha1$36fed30ab5ed74a1234569729b9dd1bb54ab5634";
  4. Delete the Node with the given checksum
    Warning, there is no undo possible!
    DELETE FROM NODES WHERE checksum = "$sha1$36fed30ab5ed74a1234569729b9dd1bb54ab5634";
    This node is now deleted.
  5. Quit
    .quit
    Leave the database management system and proceed with your work. Now you should be able to update the repository again.
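If the working copy still complains, a cleanup followed by an update (standard Subversion commands) usually completes the recovery:

svn cleanup
svn update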

Hauling Servers

After more than four years running my Web applications on Server4You infrastructure, I decided to move on. The reason for my decision to look for a new provider was the decreasing performance of my Server4You Pro X4 server, especially in comparison with contemporary offers of other vServer providers. I therefore moved this blog and other PHP-based applications to a DigitalOcean droplet, which offers 20 GB SSD storage, 512MB RAM and full root access for only 5$ per month. Perfect for Web hosting and serving small services. Moving the data and apps was actually done within two hours, hence it's worth comparing offers of different providers and changing from time to time.

MySQL 5.6 – The Second Edition of the Handbook for Administrators

Fanfare, joy, fireworks! A few days ago the second edition of our MySQL administration handbook was published. Eva Zangerle, Wolfgang Gassler and I got back to work over the past months and incorporated all the important and multifaceted innovations of MySQL version 5.6 into a new edition. The first edition of this book appeared two years ago. We received great feedback, discussed with readers and learned a lot ourselves along the way. Not only the book cover is fresh and shines in new splendor: the brand-new edition includes a great many new features. We adapted the sometimes subtle changes to variables, settings and tuning parameters to the current version 5.6, and also reworked entire chapters. For the second edition we added completely new topics and features which distinguish the new GA version of MySQL.

The notable additions in this edition include:

  •  the new NoSQL interface for scalable Web applications
  •  the full-text index, now also available for InnoDB
  •  the new PERFORMANCE_SCHEMA for performance analysis
  •  current programming interfaces such as Node.js
  •  extended security features

Of course these new features are introduced and explained with the well-established, continuous practical example. Current details and news can be found on our book blog <www.mysqladmin.at>

Compile the Memcached-Libraries

MySQL provides NoSQL features since version 5.6 via the Memcached plugin. To use these advanced features with an instance you compiled from source, some modifications are required. Install libevent-dev via apt to obtain the appropriate event libraries:

apt-get install libevent-dev

Then, modify the cmake script and switch the option flag for the Memcached plugin from OFF to ON. Open the file CMakeLists.txt and search for this line:

OPTION(WITH_INNODB_MEMCACHED "" OFF)

Change this line to:

OPTION(WITH_INNODB_MEMCACHED "" ON)

Then compile MySQL by using these commands:

rm CMakeCache.txt; make clean; cmake . ; make

Then copy the shared libraries into your plugin folder:

cp ./plugin/innodb_memcached/daemon_memcached/libmemcached.so /usr/local/mysql/lib/plugin/
cp ./plugin/innodb_memcached/innodb_memcache/innodb_engine.so /usr/local/mysql/lib/plugin/

Then, from the MySQL console, issue these two commands (without the mysql> prompt):

mysql> SOURCE /usr/local/mysql/share/innodb_memcached_config.sql;
mysql> INSTALL PLUGIN daemon_memcached soname "libmemcached.so";
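To verify that the plugin is running, you can talk to the memcached port directly; a smoke test, assuming the default port 11211 and the demo_test container created by the configuration script:

# The InnoDB memcached plugin listens on port 11211 by default
telnet localhost 11211
get AA
quit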