Tuesday, January 24, 2012

How I resurrected my TomTom GO 920

This stuff probably applies to any TomTom hardware device, including the GO 520, 720, 920, ONE, XL, etc.

I have owned my TomTom GO 920 since 2007. It worked pretty well, but it started failing during the last trip to Spain.

Symptoms included:

  • Doesn't turn on after being put to sleep
  • The reset button doesn't work (that hole at the bottom of the device)
  • Hangs during operation; reset turns the screen off and it can't be turned on anymore

I thought my TomTom was bricked for good - it looked like a hardware problem to me, and no solutions on the net helped.

However, after the device had been gathering dust for some months, I decided to give it another try: I plugged it into USB and it turned on again! But only until it hung again and the reset button stopped working once more.

So, here are the steps I used to resurrect it:
  1. Leave the device lying around without a power source until the battery completely drains
  2. Plug it into USB and turn it on by holding the power button for a few seconds (it should turn on at this point if the battery was indeed empty - I have tested this several times)
  3. If it asks whether to connect to the computer - choose YES
  4. Start TomTom HOME (yes, that stupid Windoze-only program - I used VirtualBox to run it, as it doesn't work under Wine)
  5. Go to manage my TomTom
  6. Check The Application (the TomTom GUI interface) and choose to delete it from the device
  7. Close TomTom HOME, restart it
  8. HOME will detect you don't have the Application installed and will offer you to do so
  9. Let it reinstall the Application
  10. Delete and copy the map as well, either with TomTom HOME or just manually

That's it! The TomTom will now turn off and on again. However, occasionally it still needs the reset button to be pushed after being put to sleep - but at least the reset button works now.

I guess the problem was data corruption: when you turn on or reset the device, the bootloader takes control and tries to run the Application, but it is mysteriously corrupted and fails to start. If the above steps don't help, you can try updating the bootloader as well - there are instructions on the net on how to do it, but you still need working USB mass storage, so let the battery drain first.

Hopefully this solution is not too temporary - it has been working for 3 days already!


Thursday, January 12, 2012

Java course IAG0040 in Tallinn Technical University

Now is the time to make an official announcement: I am no longer teaching Java to Master's students at Tallinn Technical University (TTÜ).

I have really enjoyed doing it during the past 6 years, and I definitely learned a lot during this time.
Teaching in a university is not something you do for money, and it eats up a lot of your spare time. I did it because I know I am good at it and I wanted to share my knowledge and experience in the field, especially considering that most university professors/lecturers have little understanding of how software development really works and how to teach it. I did it for those students each year whose eyes were really shining. Because of them, it was worth it.

However, Java is no longer the coolest programming language out there; it is slowly dying due to its lack of (or very slow) development. Its features are outdated, and it is not as productive as I would like it to be. It's time to move on. Java is probably still one of the best programming languages to teach in universities, but students should understand that the software development field is evolving quickly, and they need to keep an eye on what is happening in the industry.

I am still a big fan of the JVM - it is well tuned for performance and it is cross-platform. .NET/C# is not really an option for me. I really like Scala; Clojure seems very interesting, as do Vala, Go and Dart. Kotlin seems very promising as well. Sooner or later there will be another popular language on the JVM that most Java developers will be able to shift to. Hopefully there won't be too many such languages, and hopefully they will be statically typed. I recommend every Java developer to check out the Play framework for a fresh look at Java.

Anyway, the current trend in software development is to merge academic computer science and industrial programming together again - you should now pay a lot more attention to functional programming in addition to the good old object-oriented programming (OOP). The concept of DevOps is also becoming more and more important, meaning that soon you will not be able to survive as a developer if you don't know how to create a software product from scratch to the end, deploying it to the production system yourself.

However, I am not going to leave the teaching 'business'. I will still give talks at conferences, organize trainings, and hopefully contribute to making IT education better with Codeborne Academia - but more on that later. I am still open to offers from universities and colleges as well, so feel free to contact me.

Now the important part

All the code written by me and the students during the past 6 years of the course is now available on GitHub:
https://github.com/angryziber/java-course

The latest (2011) lecture slides are available on Slideshare:
http://www.slideshare.net/antonkeks/presentations

Or go to specific lectures using the links below:

  1. Introduction
  2. Java basics, program flow
  3. OOP in Java
  4. Exceptions and Collections
  5. Generics, Enums, Assertions
  6. Unit testing and Agile software development
  7. Text processing, Charsets & Encodings
  8. I/O, Files, Streams
  9. Networking, Reflection
  10. Threads and Concurrency
  11. Design Patterns
  12. Web, Servlets, XML
  13. JDBC, Logging
  14. Java Beans, Applets, GUI
  15. Advanced: Ant, Scripting, Spring & Hibernate


Wednesday, January 4, 2012

jQuery filters out script elements

Sometimes you want to fetch new HTML content with AJAX. You then parse the incoming HTML and insert it into the document:

$.get('/some-content', function(html) {
  $(html).find('#content').appendTo('body');
});

or just use the load() function as a short-cut:

$('body').load('/some-content #content');

The problem is, this doesn't execute any embedded script tags.
Actually, doing this:

$('<div><script>alert(1)</script></div>')

you will get a jQuery set containing two elements, not one DOMElement as most people would expect!
The set will contain the original div as a DOMElement (with all its content except the script) and the script separately, also as a DOMElement. If there were more scripts in the original unparsed string, you would get all of the script elements separately in the parsed set.

The workaround would be to execute all script elements manually if you do any DOM manipulation with the incoming HTML:

$.get('/some-content', function(html) {
  var $html = $(html);  // parse once, reuse for both operations
  $html.find('#content').appendTo('body');
  $html.filter('script').appendTo('body');
});

Sad, but true...

Some more background info from comments on jQuery site:
All of jQuery's insertion methods use a domManip function internally to clean/process elements before and after they are inserted into the DOM. One of the things the domManip function does is pull out any script elements about to be inserted and run them through an "evalScript routine" rather than inject them with the rest of the DOM fragment. It inserts the scripts separately, evaluates them, and then removes them from the DOM.


Wednesday, December 7, 2011

How to use XFS on a Synology NAS device

After returning from a trip to Israel and Jordan last month, I discovered that my home server's motherboard was dead. After considering different options, I realized that I really only need the server to host my RAID1 hard disks, where I store all my photos and other content.

So that's how I got my Synology DS212J NAS, which is a Linux-powered device with low-level access in case it is needed. The problem now was how to get my 2 HDDs running in that box while preserving all the data.

Unfortunately, the initial setup of the box requires wiping the data on the HDD. Fortunately, I have a RAID1 setup with two exact copies of all the data, so I decided to sacrifice one of the disks to install the firmware, and later insert the second one and copy the data over.

Now, the problem was that my data was on an XFS partition, which the DS212J doesn't support by default - it uses EXT4 for its partitions.

Fortunately, Synology is kind enough to provide an SDK which makes it possible to build missing kernel modules and enable more features on the device.

Here is how to do it:

  1. Enable terminal access in the web interface of the device - then you can login to the box using ssh root@diskstation.local or whatever name/IP you have given it
  2. Go to http://sourceforge.net/projects/dsgpl/ - click Files
  3. Download the DSM tool chains (gcc ARM cross compiler) for your firmware (currently 3.2) - for the DS212 it is under Marvell 88F628x; if you have another model, use either /proc/cpuinfo or dmesg to find out your CPU type
  4. Download the Synology NAS GPL Source also from sourceforge - this is a huge archive that includes source code of all the kernels that Synology uses (check your kernel version with uname -a, mine uses 2.6.32)
  5. Extract both archives to /usr/local (or at least link them there) - this is important, because the kernel Makefile already points to a cross compiler from the tool chain expected to be located there
  6. cd /usr/local/source/linux-2.6.32 (from the GPL source archive)
  7. Copy the correct kernel config for your device - Synology kernel configs are located in synoconfigs, so for DS212J do this: cp synoconfigs/88f6281 .config
  8. make menuconfig to use kernel's menu-based configuration utility (ensure that you have libncurses installed for it to work)
  9. In the menu, locate the needed additional drivers (File Systems -> XFS in my case), and press M to enable module compilation
  10. Exit and make modules
  11. This will compile the required .ko files. The XFS driver was in fs/xfs/xfs.ko after compilation completed, but modinfo fs/xfs/xfs.ko also told me that xfs.ko depends on exportfs.ko as well.
  12. I needed to copy both modules, xfs.ko and fs/exportfs/exportfs.ko, to the device to make it work; otherwise the kernel refused to load the xfs module. To copy the files, use either FTP (enable it from the web interface) or the web interface itself. It doesn't matter where you put them.
  13. Login to the device with ssh root@diskstation.local again (note, you need to be root, not admin)
  14. Go to the directory where you uploaded the .ko files (cd /volume1/public in my case)
  15. Load the modules: insmod exportfs.ko, then insmod xfs.ko - if it doesn't complain, then you have XFS support enabled (or whatever other driver you needed to load)
  16. Then create a directory anywhere and mount your second HDD to it, then copy the files with cp or rsync. Example: mount -t xfs /dev/hda storage - check which name your HDD received. Mine was hda, because I had installed the firmware to hdb before. Run mount without arguments to see where your firmware is located. Also, my disk didn't have a partition table, only a single partition starting in the MBR - that's why I used /dev/hda there and not /dev/hda1 or something. Use parted /dev/hda for more info on which partitions you have.
  17. Rsync is a great way to copy the files, e.g. rsync -arv storage/ /volume1/ - this will preserve all file attributes
  18. When copying is complete, add the second HDD to the Synology RAID using the web interface
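The build-and-load part of the steps above can be sketched as a short command sequence. This is only an outline of my setup (DS212J, kernel 2.6.32, XFS) and is not runnable outside the cross-compile environment described above:

```shell
# on the build machine, with the toolchain and GPL sources under /usr/local
cd /usr/local/source/linux-2.6.32
cp synoconfigs/88f6281 .config   # DS212J kernel config
make menuconfig                  # File Systems -> XFS, press M for module
make modules                     # produces fs/xfs/xfs.ko and fs/exportfs/exportfs.ko

# then on the NAS, after uploading both .ko files
insmod exportfs.ko               # load the dependency first
insmod xfs.ko
```

For another filesystem or driver, only the menuconfig selection and the resulting .ko names change.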

Note: fortunately, I didn't have to fully replace the stock kernel on the device - it was enough to load these two new modules. At first, though, when loading only xfs.ko failed due to missing symbols, I thought I would have to, and I even tried it without success (before discovering that I actually needed to load exportfs.ko as well).

FYI: Synology's kernel is located in flash memory, not on the HDD like the rest of the system. The flash partition devices are /dev/mtd*, with /dev/mtd0 being the boot loader (don't touch this one - it provides the ability to install the rest of the firmware with Synology Assistant over the network) and /dev/mtd1 being the uImage of the kernel.

If you still need to replace the kernel, you may try to make a uImage of the kernel (make sure you have uboot installed for this to work - this is what Synology uses), copy the uImage file to the device and then cat uImage > /dev/mtd1 - but do it at your own risk; I am not sure whether Synology Assistant will work if you flash the wrong kernel and reboot. I guess it should, but I haven't tested it :-)

Hopefully this will be useful for someone - the same way you can add support for the Apple/Mac HFS filesystem, ReiserFS or others.


Sunday, November 27, 2011

Reusing Shotwell thumbnails in Nautilus

As I have lots of photos on my machine, thumbnails start to consume a considerable amount of space on the disk.

Another problem is that gnome-raw-thumbnailer isn't enabled by default in Ubuntu (Natty, Oneiric) anymore, so my raw photos don't get thumbnailed in Nautilus. And if I enable it manually, thumbnails of vertical photos aren't shown with the correct orientation.

So, I researched the freedesktop thumbnail spec, the Gnome thumbnailer spec and how Shotwell stores its thumbnails, and came up with a shell script that reuses Shotwell thumbnails for Nautilus.

Save the script below as /usr/bin/shotwell-raw-thumbnailer

#!/bin/bash
input=$1
output=$2

if [ -z "$output" ]; then
    echo "Usage: $0 input output"
    exit 1
fi

# strip the file:// prefix and decode %xx escapes to get the plain filename
file=`echo -n "${input##file://}" | perl -pe 's/%([0-9a-f]{2})/sprintf("%s", pack("H2",$1))/eig'`
md5=`echo -n "$input" | md5sum | awk '{print $1}'`

shotwell_id=`sqlite3 ~/.shotwell/data/photo.db "select id from PhotoTable where filename = '$file'"`
if [ -z "$shotwell_id" ]; then
    gnome-raw-thumbnailer "$input" "$output"
    exit
fi

thumb=`printf ~/.shotwell/thumbs/thumbs128/thumb%016x.jpg $shotwell_id`
if [ ! -e "$thumb" ]; then
    gnome-raw-thumbnailer "$input" "$output"
    exit
fi

replaceWithLink() {
    sleep 1
    ln -sf "$thumb" ~/.thumbnails/normal/$md5.png
}

# gnome-thumbnail-factory doesn't support links
cp "$thumb" "$output"

# however, linked thumbnails work, so replace them after a delay
replaceWithLink &

To make it work, you then need to register it as a thumbnailer in Gnome; put this into /usr/share/thumbnailers/shotwell.thumbnailer:
[Thumbnailer Entry]
Exec=/usr/bin/shotwell-raw-thumbnailer %u %o
MimeType=image/x-3fr;image/x-adobe-dng;image/x-arw;image/x-bay;image/x-canon-cr2;image/x-canon-crw;image/x-cap;image/x-cr2;image/x-crw;image/x-dcr;image/x-dcraw;image/x-dcs;image/x-dng;image/x-drf;image/x-eip;image/x-erf;image/x-fff;image/x-fuji-raf;image/x-iiq;image/x-k25;image/x-kdc;image/x-mef;image/x-minolta-mrw;image/x-mos;image/x-mrw;image/x-nef;image/x-nikon-nef;image/x-nrw;image/x-olympus-orf;image/x-orf;image/x-panasonic-raw;image/x-pef;image/x-pentax-pef;image/x-ptx;image/x-pxn;image/x-r3d;image/x-raf;image/x-raw;image/x-rw2;image/x-rwl;image/x-rwz;image/x-sigma-x3f;image/x-sony-arw;image/x-sony-sr2;image/x-sony-srf;image/x-sr2;image/x-srf;image/x-x3f;

So, what does this script do?
  • When Gnome (or Nautilus) needs a thumbnail, it runs this script
  • The script checks if the image has an entry in the Shotwell database (~/.shotwell/data/photo.db)
  • Then it checks if Shotwell has a thumbnail for it (in ~/.shotwell/thumbs)
  • If yes, the script returns the already generated thumbnail to Gnome - no generation needed, so it works much faster
  • If Shotwell doesn't have the thumbnail, the call is delegated to gnome-raw-thumbnailer that generates a new thumbnail, the old-fashioned way
  • If Shotwell's thumbnail was used, the script will asynchronously replace the thumbnail in ~/.thumbnails with a link to Shotwell's file, avoiding a copy on the disk

The last step is the one that saves disk space. Unfortunately, it is not possible to return a link right away to Gnome - it can't read it for some reason. However, putting a link directly under ~/.thumbnails later works perfectly, even if we put a .jpg file under a .png name (as required by the spec). PNG is actually a worse choice for thumbnailing photos due to its lossless compression, so the disk savings are more than twofold with this script.

The next step would be to rewrite this in C or Vala to make it even faster, and maybe even make Shotwell create these links right away when it generates the thumbnails.


Tuesday, October 19, 2010

Simple Rsync GUI: easy backups from Nautilus

Most often, making backups of your important files is a manual process. Especially if you are dealing with large collections of photos.

To make that easier, I have written a small and convenient Nautilus script (for Gnome users) for doing exactly that.

Features:

  • Syncs to any mounted location or over SSH (everything that rsync supports)
  • Remembers previously used locations
  • Preview of changes (any deletions are shown first, but performed last)
  • Nice progress bar with upload speed display

Everything is written as a simple bash script using Zenity for the GTK GUI - just drop it into the ~/.gnome2/nautilus-scripts directory, and it will appear in the Nautilus right-click menu, under Scripts.




Don't forget - this is all just a frontend for rsync (which you are too lazy to run from the command line).

Dependencies: nautilus, zenity, rsync, bash

And now, here is the source (save to ~/.gnome2/nautilus-scripts/Sync):
#!/bin/bash
# Nautilus script to sync specified folder to another destination via rsync.
# Put this to ~/.gnome2/nautilus-scripts
# Written by Anton Keks (BSD license)

paths_file=$(readlink -f $0).paths
touch "$paths_file"
sources=`awk -F'|' '{print $1}' $paths_file`

if [ "$1" ]; then
  source=$1 
else
  # add current directory also to the list
  sources=`echo -e "$sources\\n$PWD" | sort -u`
  # ask user to choose one of the sources
  source=`zenity --list --title="Sync source" --text="No source was specified. Please choose what you want to sync" --column=Source "$sources" Other...` || exit 1
  if [ "$source" = Other... ]; then
    source=`zenity --entry --title="Sync source" --text="Please enter the source path on local computer" --entry-text="$PWD"` || exit 1
  fi
fi

# normalize and remove trailing /
source=`readlink -f "$source"`
source=${source%/}

if [ ! -d "$source" ]; then
  zenity --error --text="$source is not a directory"; exit 2
fi

if [ "$2" ]; then
  # TODO: support multiple sources
  zenity --warning --text="Only one directory can be synced, using $source"
fi

# find matching destinations from stored ones
destinations=""
for s in $sources; do
  if echo "$source" | fgrep -q "$s"; then
    dest=`fgrep "$s" $paths_file | awk -F'|' '{print $2}'`
    suffix=${source#$s}
    suffix=${suffix%/*}
    destinations="$destinations $dest$suffix" 
  fi
done

# ask user to choose one of the matching destinations or enter a new one
dest=`zenity --list --title="Sync destination" --text="Choose where to sync $source" --column=Destination $destinations New...` || exit 3
if [ "$dest" = New... ]; then
  basename=`basename "$source"`
  dest=`zenity --entry --title="Sync destination" --text="Please enter the destination (either local path or rsync's remote descriptor), omitting $basename" --entry-text="user@host:$(dirname $source)"` || exit 3
  echo "$source|$dest" >> $paths_file
fi

# check if user is not trying to do something wrong with rsync
if [ `basename "$source"` = `basename "$dest"` ]; then
  # sync contents of source to dest
  source="$source/"
fi

log_file=/tmp/Sync.log
rsync_opts=-rltEorzh
echo -e "The following changes will be performed by rsync (see man rsync for info on itemize-changes):\\n$source -> $dest\\n" > $log_file
( echo x; rsync -ni $rsync_opts --delete "$source" "$dest" 2>&1 >> $log_file; echo $? > $log_file.status ) | zenity --progress --pulsate --auto-close --width=350 --title="Retrieving sync information"
rsync_result=`cat $log_file.status`

if [ $rsync_result -ne 0 ]; then
  zenity --error --title="Sync" --text="Rsync failed: `cat $log_file`"; exit 4
fi

num_files=`cat $log_file | wc -l`
num_files=$((num_files-3))

if [ $num_files -le 0 ]; then
  zenity --info --title="Sync" --text="All files are up to date on $dest"; exit
fi

zenity --text-info --title="Sync review ($num_files changes)" --filename=$log_file --width=500 --height=500 || exit 4

num_deleted=`fgrep delet $log_file | wc -l`
if [ $num_deleted -ge 100 ]; then
  zenity --question --title="Sync" --text="$num_deleted files are going to be deleted from $dest, do you still want to continue?" --ok-label="Continue" || exit 4
fi

rsync_progress_awk="{ 
 if (\$0 ~ /to-check/) {
  last_speed=\$(NF-3)
 }
 else {
  print \"#\" \$0 \" - \" files \"/\" $num_files \" - \" last_speed;
  files++;
  print files/$num_files*100 \"%\";
 }
 fflush();
}
END {
 print \"#Done, \" files \" changes, \" last_speed
}"

# note: delete-delay below means that any files will be deleted only as a last step
rsync $rsync_opts --delete-delay --progress "$source" "$dest" | awk "$rsync_progress_awk" | zenity --progress --width=350 --title="Synchronizing $source" || exit 4


Thursday, December 17, 2009

Deleting thumbnails for nonexistent photos

Freedesktop has had a spec for some years now on how applications should manage image thumbnails (use the Next link there). The spec is now followed by the majority of Gnome and KDE applications, including F-Spot, which is one of the very few applications that use large 256x256 thumbnails under ~/.thumbnails/large.

The spec says to store thumbnails in PNG format, naming the files after the MD5 sum of the original files' URLs, eg 81347ce6c37f75513c5e517e5b1895b8.png.

The problem with the spec is that if you delete or move image files, the thumbnails stay there and take up space (for my 20000+ photos I have 1.4Gb of large thumbnails).

Fortunately, you can clean them from time to time using simple command-line tricks, as the original URLs are stored inside the thumbnail files as Thumb::URI attributes. I don't recommend erasing all of your thumbnails, because regeneration will take time.

In order to create a list of matching thumbnail-original URL pairs, you can run the following in a terminal inside either the .thumbnails/large or .thumbnails/normal directory (it will take some time):

for i in *.png; do
identify -verbose "$i" | \
fgrep Thumb::URI | sed "s@.*Thumb::URI:@$i@" >> uris.txt;
done
This will get you a uris.txt file, where each line looks like the following:
f78c63184b17981fddce24741c7ebd06.png file:///home/user/Photos/2009/IMG_5887.CR2
Note that the provided thumbnail filenames (first tokens) can also be generated the following way from the URLs (second tokens) using MD5 hashes:
echo -n file:///home/user/Photos/2009/IMG_5887.CR2 | md5sum
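You can sanity-check this naming rule yourself; a tiny sketch (the URL is just the example pair from above - any file URL works the same way):

```shell
#!/bin/sh
# compute the expected thumbnail filename for a given original URL
uri='file:///home/user/Photos/2009/IMG_5887.CR2'
thumb=`echo -n "$uri" | md5sum | awk '{print $1}'`.png
echo $thumb
```

The printed name should match the first token of the corresponding line in uris.txt. Note the -n: a trailing newline would change the hash completely.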
After you have your uris.txt file, it can be easily processed with any familiar command-line tools, like grep, sed, awk, etc.

For example, in order to delete all thumbnails matching 'Africa', use the following:
for i in `fgrep Africa uris.txt | awk '{print $1}'`; do rm "$i"; done
So, as you can see, it is pretty simple to free up a few hundred megabytes (depending on the number of thumbnails you are deleting).
With this kind of trick you can even rename the thumbnails of moved files, if you use md5sum to generate the new filenames from the URLs, as shown above. This will save you regeneration time.
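The renaming trick can be sketched like this. The photo paths are hypothetical, and /tmp stands in for ~/.thumbnails/large so the demo is harmless to run:

```shell
#!/bin/sh
# demo: a photo moved from old_uri to new_uri - rename its thumbnail accordingly
thumbdir=/tmp/thumbs-demo         # stand-in for ~/.thumbnails/large
mkdir -p $thumbdir
old_uri='file:///home/user/Photos/2009/IMG_5887.CR2'
new_uri='file:///home/user/Archive/2009/IMG_5887.CR2'
old=`echo -n "$old_uri" | md5sum | awk '{print $1}'`.png
new=`echo -n "$new_uri" | md5sum | awk '{print $1}'`.png
touch $thumbdir/$old              # pretend the old thumbnail already exists
mv $thumbdir/$old $thumbdir/$new  # renamed - no regeneration needed
ls $thumbdir
```

The same two md5sum lines, fed from an old/new path list, would batch-rename thumbnails after reorganizing a whole photo collection.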