Monday, December 9, 2013

How I upgraded my Synology NAS to a bigger disk

I had two 1 TB drives in my Synology DS212j in RAID1 (or rather Synology Hybrid RAID), and I ran out of space.

I went and bought a single new 4 TB drive, planning to buy the second one later from a different production batch (to lower the probability of both failing at the same time).

I thought it would be trivial to replace the two drives with a degraded RAID of one larger drive and increase the available space. Unfortunately, it was not.

At first, I replaced one of the drives and powered on in a degraded state. I used the Synology GUI to repair the RAID and thus sync all data to the new 4 TB drive. That worked and took some hours.

Then I removed the second 1 TB drive to keep the RAID degraded with only the new larger drive. I powered on my NAS, but it didn't offer to expand the space. In fact, all the options in Storage Manager were greyed out. Bad luck.

So I went the command-line way.

Now some theory: Synology Hybrid RAID (SHR) actually uses LVM (Linux logical volume management) on top of RAID1 (managed by mdadm). Unfortunately, this additional layer of LVM between the RAID and the filesystem complicated things for me.

So, in order to extend my volume, I need to:
  1. Resize the physical partition on the drive to fill all available space (e.g. /dev/sda3)
  2. Resize the RAID1 array on top of /dev/sda3 (e.g. /dev/md2)
  3. Resize the LVM physical volume on top of /dev/md2
  4. Resize the LVM volume group built from the physical volume (e.g. /dev/vg1000)
  5. Resize the LVM logical volume spanning /dev/vg1000 (e.g. /dev/vg1000/lv)
  6. Resize the ext4 filesystem on top of /dev/vg1000/lv

Quite a number of steps. Here is how I did it:

1. Resizing physical partitions is possible with parted. Note that fdisk cannot handle drives as big as 4 TB. However, newer versions of parted have the resize command removed (bastards), so you actually need to delete and recreate the partition in its place. Scary, but it works.

For that, start parted /dev/sda, then issue these commands:
  • unit s - makes parted use units of sectors instead of MB/GB/etc - this is crucial for recreating your new partition exactly
  • print free - lists the current partition table along with the free space at the end
  • rm 3 - deletes the 3rd partition (check that it is the correct number; mine was 5 for some odd reason)
  • mkpart ext4 - creates a new partition in its place. Make sure to specify the same start sector as was printed, and the last sector of the free space, so you use the whole disk. If it complains about alignment, press Ignore - it will still be minimally aligned.
  • Now reboot - I didn't find a working way of forcing the Synology kernel to reread the partition table. Even though my data partition was /dev/sda5 and after recreation became /dev/sda3, Linux RAID was still able to detect it (probably by UUID) after reboot and assemble the array correctly.
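Putting the parted session together, it looked roughly like this. The sector numbers below are illustrative, not from my disk - substitute the exact start sector and last free sector that print free reports for you:

```shell
# Recreate the data partition so it spans the whole disk.
# Sector values are examples only -- reuse partition 3's original
# start sector and the last sector of the trailing free space.
parted /dev/sda
(parted) unit s                              # work in sectors for exactness
(parted) print free                          # note the start sector and free space
(parted) rm 3                                # delete the data partition
(parted) mkpart ext4 9437184s 7813971966s    # recreate it with the same start
(parted) quit
reboot                                       # let the kernel reread the table
```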

2. After the reboot you should see lots of space in /proc/partitions, but mdadm -D /dev/md2 will say you are still using only a fraction of it.

Run mdadm --grow /dev/md2 --size max - this should do the trick. But it didn't work for me; in fact, this was the trickiest part to figure out. If it actually increased the size of your RAID array, proceed to step 3, but if it kept the old size, read on.

Now you need to reassemble the RAID array, asking it to update the device size. Unfortunately, you need to unmount the filesystem and disable LVM for it to work.
  • Stop all Synology apps in the GUI, like Download manager and others.
  • Stop other services accessing the filesystem (/volume1); I had to stop about a dozen init scripts under /usr/syno/etc/rc.d/, each with the stop argument:
    for script in /usr/syno/etc/rc.d/*.sh; do $script stop; done
  • umount /volume1
  • umount -f /volume1/@optware - this one was trickier, thus -f. After that you will probably lose your SSH session. Go to the web GUI and enable SSH again in Control Panel/Terminal.
  • umount /volume1/@optware - after reconnecting, you need to do this once more
  • vgchange -a n - this disables your LVM volume group, deactivating the logical volume as well. It will only work if both /volume1 and /volume1/@optware are unmounted.
  • mdadm -S /dev/md2 - only now you can stop the RAID array
  • mdadm -A /dev/md2 -U devicesize /dev/sda3 - this will reassemble the RAID array, updating the device size
  • mdadm --grow /dev/md2 -z max - finally, growing of the array will work!
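Condensed, the whole reassemble-and-grow sequence from step 2 was as follows. The device and volume names (/dev/md2, /dev/sda3, /volume1) are from my setup - adjust them to yours:

```shell
# Stop everything holding /volume1, then reassemble the array with
# an updated device size so that --grow can see the new space.
for script in /usr/syno/etc/rc.d/*.sh; do
    "$script" stop                          # stop DSM services using /volume1
done
umount /volume1
umount -f /volume1/@optware                 # may drop your SSH session; reconnect
vgchange -a n                               # deactivate the LVM volume group
mdadm -S /dev/md2                           # stop the RAID array
mdadm -A /dev/md2 -U devicesize /dev/sda3   # reassemble, updating device size
mdadm --grow /dev/md2 -z max                # now growing to max size works
```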

3. Resizing the LVM physical volume is then easy.
  • vgdisplay -v - will show you everything, use it to check your resizing steps.
  • pvresize /dev/md2 - will use all available space on /dev/md2

4. Check vgdisplay - at this point it will show how much free space you have in the LVM volume group.

5. The free space in the VG should now be allocated to the logical volume (LV). I didn't find a way of telling it to use all the available space, so I did it in steps:
  • vgdisplay -v - shows how much free space you have in your VG
  • lvextend -L+100G /dev/vg1000/lv - extends the LV by 100 GB. You can specify the exact amount of free space reported by vgdisplay here, or just repeat the command until no free space is left.

6. Congratulations! Now we have bigger underlying storage, but we still need to resize the ext4 filesystem on top of LVM.
  • e2fsck -f /dev/vg1000/lv - a required check before doing the actual resize.
  • resize2fs /dev/vg1000/lv - this will again take a long time
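Steps 3 through 6 boil down to a short command sequence. Again, /dev/md2 and /dev/vg1000/lv are the names from my SHR setup:

```shell
pvresize /dev/md2                 # 3: grow the LVM physical volume
vgdisplay -v                      # 4: check the free space in the volume group
lvextend -L +100G /dev/vg1000/lv  # 5: repeat until no free space is left
e2fsck -f /dev/vg1000/lv          # 6: mandatory check before the resize
resize2fs /dev/vg1000/lv          #    grow ext4 to fill the logical volume
```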

But once the resize is complete, you can reboot the NAS and enjoy your newly available free space!

Too bad Synology's own tools could not do this - I had to spend many hours researching the setup and doing it manually. Hopefully this blog post will save someone some time.

Good luck!

Saturday, February 18, 2012

How I fixed my new YN565EX flash

Yesterday I received my brand new Yongnuo YN565EX flash from Hong Kong. It took almost a month to get to Estonia, but shipping was free.

The flash is very similar in specs to the much more expensive Canon 580EX II, with minor exceptions, e.g. no Hi-Speed sync or ETTL master mode. However, the flash supports ETTL slave mode for both Canon and Nikon systems, which was the primary reason for buying it: to use it as a powerful (guide number 58) slave flash unit in automatic TTL mode. As a bonus it can also be a slave in manual mode - overall a better option for this purpose than buying a 430EX from Canon, which is even slightly more expensive. The reviews on the net were very good, and I was impressed with the build quality, which matches flashes made by Canon.

But all would be cool if it worked... There is much scepticism about buying Chinese flashes, and there seems to be a reason for that. The flash powered on for the first time and then quickly shut down. It completely ignored any presses of the On button afterwards, and soon I noticed that the batteries got very hot while inserted into the flash unit. Bad, bad, bad. My alternative would have been to pay for return shipping and wait 2 more months until a new unit was delivered. Not a sexy option.

So I decided to try fixing it myself. The hot batteries suggested there was a short circuit somewhere, and if it was not in a major component, it should be fixable.

The flash is very similar in design to Canon 580EX, so searching for its schematics helped to understand how to disassemble it.

1. Unscrew the 4 screws on the bottom of the flash unit to remove the hotshoe connectors (the contacts can be easily disconnected once the bottom panel is open)

2. Turn the flash head 90 degrees to reveal 2 more screws at the upper part of the body. One is under a sticker which looks like a 'warranty void' one, but as there are no such words on it, it felt safe to remove. I suggest taking pictures of the screws to be able to reassemble everything later - they are all different.

3. Now came the most difficult part, which I struggled with for some time. By now you should be able to remove the front cover of the body, but it feels like there is still something holding it at the top. Actually, there are no more screws, so just try again and again to gently pull it off. No sliding down etc. is necessary - just pull the front panel off the back panel; the position of the head is not important at this point. Once you succeed, disconnect the two connectors and put the front cover away.
At this point I put the batteries back in several times and used a voltmeter to find where the voltage drop happens. After some tries I was lucky to see some smoke. This might be scary, but I quickly removed the PCB at the bottom of the flash, on which the coil and the 1000uF capacitor are mounted in the picture. Be careful when touching the parts - you may get an electric shock even with the batteries removed, due to a giant 300V capacitor hiding at the base of the flash head.
The cause of the smoke was most probably residual soldering flux on the back of the board - see the picture to the right. It had melted and was probably the reason for the short circuit. I used a small screwdriver to clean it off between the two large contacts. Note that checking for this with a multimeter is difficult, because there are capacitors mounted here, so probing for resistance can still show some value until the capacitors are charged (and there is a big one there!). After inserting the batteries, the flash turned on! The problem was identified, so I spent some more time cleaning the contacts by gently scratching them and wiping them off with alcohol.

Now the flash works perfectly, both in the hotshoe and as a slave. There are no problems with light balance alongside my Canon 580EX II - at least I haven't noticed any difference so far. The only minor issue I have noticed is that the modelling light (shown when the depth-of-field button is pressed) gets darker towards the end of the 1-second interval it is supposed to be visible, but that may depend on how well the batteries are charged.

Overall, I am pretty happy now. Buy yourself one if you have basic disassembly/electronics skills to protect yourself from minor manufacturing defects. I really hope the QC at Yongnuo will improve.

Wednesday, February 1, 2012

Beautiful front-end for Picasa web galleries

Ever wanted to make your Picasaweb gallery look better?

Although I don't use the Picasa desktop application, I upload many of my photos to Picasaweb. Partly this is for historical reasons; partly I like using my existing Google account and the very cheap storage (20 GB for only $5 per year - it's hard to beat that).

However, I am really, really not happy about how Google's galleries look - both the original Picasaweb interface, which has a distracting white background, and the newer Google Plus-integrated interface, which doesn't display my album maps and hides my photo captions too quickly. And the URLs have recently become uglier - Google no longer displays your username in the URL, replacing it with a bunch of numbers. Oh well...

So, in the absence of a better solution, I decided to quickly write a new, more beautiful front-end for Picasaweb during the Christmas days off. I still like the storage space and the direct uploads to Picasaweb from most open-source and commercial photo management software, so there was no need to reinvent the back-end part.

Here is what I came up with:

As Google provides a Java API for accessing its services, I decided to write a simple Java web app and host it on Google App Engine (for free!).

The web app fetches all the data from Picasaweb, but displays the albums and photos in a completely new interface, which is (surprisingly) quicker than Google's own and looks more beautiful, with minimal distraction from the photos themselves. See my example photo gallery to check out the transition effects, the full-screen photo viewer with copy-pasteable URLs, cross-fading between photos, and search within your photo metadata (captions, labels, etc). It is optimized for mobile browsers as well.

The source code is available on GitHub and is licensed under the GPL. You can install it on your own Google App Engine account for free (see the instructions on GitHub) or use the link at the bottom of my photo gallery to display your own photos - the link will be permanent, so you can even send it to your friends!

Tuesday, January 24, 2012

How I resurrected my TomTom GO 920

This probably applies to any TomTom hardware device, including the GO 520, 720, 920, ONE, XL, etc.

I have owned my TomTom GO 920 since 2007. It worked pretty well, but during my last trip to Spain it started failing.

Symptoms included:

  • Doesn't turn on after being put to sleep
  • Reset button doesn't work (the hole at the bottom of the device)
  • Hangs during operation; reset turns the screen off, and it can't be turned on anymore

I thought my TomTom was bricked for good - this seemed like a hardware problem to me, and no solutions on the net helped.

However, after the device had been gathering dust for some months, I decided to give it another try: I plugged it into USB and the device turned on again! But only until it hung again and reset stopped working again.

So, here are the steps I used to resurrect it:
  1. Leave the device laying around without the power source until the battery completely drains
  2. Plug it into USB, turn it on by holding the power button for a few seconds (it should turn on by now if battery indeed was empty - I have tested it several times)
  3. If it asks whether to connect to the computer - choose YES
  4. Start TomTom HOME (yes, that stupid Windoze-only program - I used VirtualBox to run it; it doesn't work under Wine)
  5. Go to 'Manage my TomTom'
  6. Check The Application (the TomTom GUI interface) and choose to delete it from the device
  7. Close TomTom HOME, restart it
  8. HOME will detect you don't have the Application installed and will offer you to do so
  9. Let it reinstall the Application
  10. Delete and copy the map as well, either with TomTom HOME or just manually

That's it! The TomTom will now turn off and on again. However, occasionally it still needs the reset button pushed after being put to sleep - but at least the reset button works now.

I guess the problem was data corruption: when you turn on or reset the device, the bootloader takes control and tries to run the Application, but it is mysteriously corrupted and fails. If the above steps don't help, you can also try updating the bootloader - there are instructions on the net for doing that, but you still need working USB mass storage, so let the battery drain first.

Hopefully this fix is not too temporary - it has been working for 3 days now!

Thursday, January 12, 2012

Java course IAG0040 in Tallinn Technical University

Now it is time to make an official announcement: I am no longer teaching Java to Master's students at Tallinn Technical University (TTÜ).

I have really enjoyed doing it for the past 6 years, and I definitely learned a lot during this time.
Teaching at a university is not something you do for the money, and it also eats up a lot of your spare time. I did it because I know I am good at it and I wanted to share my knowledge and experience in the field, especially considering that most university professors and lecturers have little understanding of how software development really works or how to teach it. I did it for those students each year whose eyes were really shining. Because of them, it was worth it.

However, Java is no longer the coolest programming language out there; it is slowly dying due to a lack of, or very slow, development. Its features are outdated, and it is not as productive as I would like it to be. It's time to move on. Java probably still is one of the best programming languages to teach in universities, but students should understand that the software development field is evolving quickly, and they need to keep an eye on what is happening in the industry.

I am still a big fan of the JVM - it is well tuned for performance and it is cross-platform. .NET/C# is not nearly an option. I really like Scala; Clojure seems very interesting, as do Vala, Go and Dart. Kotlin seems very promising as well. Sooner or later there will be another popular language on the JVM that most Java developers will be able to shift to. Hopefully there won't be too many such languages, and hopefully they will be statically typed. I recommend every Java developer check out the Play framework for a fresh look at Java. Anyway, the current trend in software development is to merge academic computer science and industrial programming together again - you should now pay a lot more attention to functional programming in addition to good old object-oriented programming (OOP). The concept of DevOps is also becoming more and more important, meaning that soon you will not be able to survive as a developer if you don't know how to create a software product from scratch to the end, deploying it yourself to the production system.

However, I am not going to leave the teaching 'business'. I will still give talks at conferences, organize trainings, and hopefully contribute to making IT education better with Codeborne Academia, but more on that later. I am still open to offers from universities and colleges as well, feel free to contact me.

Now the important part

All the code written during the course by me and the students over the past 6 years is now available on GitHub:

The latest (2011) lecture slides are available on Slideshare:

Or go to specific lectures using the links below:

  1. Introduction
  2. Java basics, program flow
  3. OOP in Java
  4. Exceptions and Collections
  5. Generics, Enums, Assertions
  6. Unit testing and Agile software development
  7. Text processing, Charsets & Encodings
  8. I/O, Files, Streams
  9. Networking, Reflection
  10. Threads and Concurrency
  11. Design Patterns
  12. Web, Servlets, XML
  13. JDBC, Logging
  14. Java Beans, Applets, GUI
  15. Advanced: Ant, Scripting, Spring & Hibernate

Wednesday, January 4, 2012

jQuery filters out script elements

Sometimes you want to fetch new HTML content with AJAX, then parse the incoming HTML and insert it into the document:

$.get('/some-content', function(html) {
  $('#content').html(html);
});
or just use the load() function as a short-cut:

$('body').load('/some-content #content');

The problem is, this doesn't execute any embedded script tags.
Actually, doing something like this:

$('<div>Hello<script>alert("hi")</script></div>')

you will get a jQuery set containing two elements, not one DOMElement as most people would expect!
The set will contain the original div as a DOMElement (with all its content except the script) and the script separately, also as a DOMElement. If there were more scripts in the original unparsed string, you would get all the script elements separately in the parsed set.

The workaround is to execute all script elements manually if you do any DOM manipulation with the incoming HTML:

$.get('/some-content', function(html) {
  var nodes = $(html);                       // scripts become separate elements
  $('#content').append(nodes.not('script'));
  nodes.filter('script').each(function() { $.globalEval(this.text || this.innerHTML); });
});
Sad, but true...

Some more background info from comments on jQuery site:
All of jQuery's insertion methods use a domManip function internally to clean/process elements before and after they are inserted into the DOM. One of the things the domManip function does is pull out any script elements about to be inserted and run them through an "evalScript routine" rather than inject them with the rest of the DOM fragment. It inserts the scripts separately, evaluates them, and then removes them from the DOM.

Wednesday, December 7, 2011

How to use XFS on a Synology NAS device

After returning from a trip to Israel and Jordan last month, I discovered that my home server's motherboard was dead. After considering different options, I realized that I really only need the server to host my RAID1 hard disks, where I store all my photos and other content.

So that's how I got my Synology DS212j NAS, which is a Linux-powered device with low-level access in case it is needed. Now the problem was how to get my 2 HDDs running in that box while preserving all the data.

Unfortunately, the first install of the box requires wiping the data on the HDD. Fortunately, I had a RAID1 setup with two exact copies of all the data, so I decided to sacrifice one of the disks to install the firmware, and later insert the second one and copy the data over.

Now the problem was that my data was on an XFS partition, which the DS212j doesn't support by default - it uses EXT4 for its partitions.

Fortunately, Synology is kind enough to provide an SDK which makes it possible to build missing kernel modules and enable more features on the device.

Here is how to do it:

  1. Enable terminal access in the web interface of the device - then you can login to the box using ssh root@diskstation.local or whatever name/IP you have given it
  2. Go to Synology's SourceForge page - click Files
  3. Download the DSM tool chain (the gcc ARM cross-compiler) for your firmware (currently 3.2) - for the DS212j it is under Marvell 88F628x; if you have another model, use either /proc/cpuinfo or dmesg to find out your CPU type
  4. Download the Synology NAS GPL Source, also from SourceForge - this is a huge archive that includes the source code of all the kernels Synology uses (check your kernel version with uname -a; mine was 2.6.32)
  5. Extract both archives to /usr/local (or at least symlink them there) - this is important, because the kernel Makefile already points to the cross-compiler from the tool chain located there
  6. cd /usr/local/source/linux-2.6.32 (from the GPL source archive)
  7. Copy the correct kernel config for your device - Synology kernel configs are located in synoconfigs, so for the DS212j do this: cp synoconfigs/88f6281 .config
  8. Run make menuconfig to use the kernel's menu-based configuration utility (make sure you have libncurses installed for it to work)
  9. In the menu, locate the needed additional drivers (File Systems -> XFS in my case), and press M to enable module compilation
  10. Exit and make modules
  11. This will compile the required .ko files. The XFS driver was in fs/xfs/xfs.ko after the compilation completed, but modinfo fs/xfs/xfs.ko also told me that xfs.ko depends on exportfs.ko.
  12. I needed to copy both modules, xfs.ko and fs/exportfs/exportfs.ko, to the device to make it work; otherwise the kernel refused to load the xfs module. To copy the files, use either FTP (enabled from the web interface) or the web interface itself. It doesn't matter where you put them.
  13. Login to the device with ssh root@diskstation.local again (note, you need to be root, not admin)
  14. Go to the directory where you uploaded the .ko files (cd /volume1/public in my case)
  15. Load the modules: insmod exportfs.ko, then insmod xfs.ko - if it doesn't complain, then you have XFS support enabled (or whatever other driver you needed to load)
  16. Then create a directory anywhere and mount your second HDD to it, then copy the files with cp or rsync. Example: mount -t xfs /dev/hda storage - check which device name your HDD received. Mine was hda, because I had installed the firmware to hdb before. Run mount without arguments to see where your firmware disk is located. Also, my disk didn't have a partition table, only a single partition starting in the MBR - that's why I used /dev/hda there and not /dev/hda1 or similar. Use parted /dev/hda for more info on which partitions you have.
  17. Rsync is a great way to copy the files, e.g. rsync -arv storage/ /volume1/ - this preserves all file attributes
  18. When copying is complete, add the second HDD to the Synology RAID using the web interface

Note: fortunately, I didn't have to fully replace the stock kernel on the device - it was enough to load these two new modules. At first, though, when loading only xfs.ko failed due to missing symbols, I thought I would need to, and I even tried it without success (before discovering that I actually needed to load exportfs.ko).

FYI: Synology's kernel is located in flash memory, not on the HDD like the rest of the system. The flash partition devices are /dev/mtd*, with /dev/mtd0 being the boot loader (don't touch this one - it provides the ability to install the rest of the firmware with Synology Assistant over the network) and /dev/mtd1 being the uImage of the kernel.

If you still need to replace the kernel, you may try to make a uImage of the kernel (make sure you have u-boot installed for this to work - this is what Synology uses), copy the uImage file to the device and then cat uImage > /dev/mtd1 - but do this at your own risk; I am not sure whether Synology Assistant will work if you flash the wrong kernel and reboot. I guess it should, but I haven't tested it :-)

Hopefully this will be useful for someone - in the same way you can add support for the Apple/Mac HFS filesystem, ReiserFS, or others.