Tech blog of Anton Keks<br />
For those loving freedom: Linux and Gnome related stuff<br />
<br />
<b>How I upgraded my Synology NAS to a bigger disk</b> (2013-12-09)<br />
<br />
I had two 1 TB drives in my Synology DS212j in RAID1 (or Synology Hybrid Raid), and I ran out of space.<br />
<div>
<br /></div>
<div>
I bought a single new 4 TB drive, planning to buy the second one later from a different production batch (lower probability of both failing at the same time).</div>
<div>
<br /></div>
<div>
I thought it would be trivial to replace the 2 drives with a degraded RAID of one larger drive and then increase the available space. Unfortunately, it was not trivial.</div>
<div>
<br /></div>
<div>
At first, I replaced one of the drives and powered on in the degraded state. I used the Synology GUI to repair the RAID and thus sync all data to the new 4 TB drive. That worked and took some hours.</div>
<div>
<br /></div>
<div>
Then I removed the second 1 TB drive to keep the RAID in a degraded state with the new larger drive. I powered on my NAS, but it didn't offer to expand the space. In fact, all the options in Storage Manager were greyed out. Bad luck.</div>
<div>
<br /></div>
<div>
So I went the command-line way.</div>
<div>
<br /></div>
<div>
Now some theory: Synology Hybrid Raid (SHR) actually uses LVM (Linux logical volume management) on top of RAID1 (managed by mdadm). Unfortunately, this additional layer of LVM between the RAID and the filesystem complicated things for me.</div>
<div>
<br /></div>
<div>
So, in order to extend my volume, I need to:</div>
<div>
<ol>
<li>Resize the physical partition on the drive to fill all available space (e.g. /dev/sda3)</li>
<li>Resize the RAID1 array on top of /dev/sda3 (e.g. /dev/md2)</li>
<li>Resize the LVM physical volume on top of /dev/md2</li>
<li>Resize the LVM volume group built from the physical volume (e.g. /dev/vg1000)</li>
<li>Resize the LVM logical volume spanning /dev/vg1000 (e.g. /dev/vg1000/lv)</li>
<li>Resize the ext4 filesystem on top of /dev/vg1000/lv</li>
</ol>
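The whole chain can be summarized as a command sketch. The device names (/dev/sda3, /dev/md2, /dev/vg1000/lv) are the ones from my setup and may well differ on yours, so verify each before running anything:

```shell
# Sketch of the whole resize chain, from the partition up to the
# filesystem. Device names match my setup -- verify yours first.
parted /dev/sda                    # 1. recreate the partition, bigger (interactive)
mdadm --grow /dev/md2 --size max   # 2. grow the RAID1 array
pvresize /dev/md2                  # 3. grow the LVM physical volume
vgdisplay -v                       # 4. check free space in the volume group
lvextend -L+100G /dev/vg1000/lv    # 5. grow the logical volume (repeat as needed)
e2fsck -f /dev/vg1000/lv           # 6. mandatory check...
resize2fs /dev/vg1000/lv           #    ...then grow the ext4 filesystem
```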
<div>
<br /></div>
<div>
Quite a number of steps. Here is how I did it:</div>
</div>
<div>
<br /></div>
<div>
1. Resizing of physical partitions is possible with parted. Note that fdisk cannot handle drives as big as 4 TB. However, newer parted has the resize command removed (bastards), so you actually need to delete the partition and recreate it in its place. Scary, but it works.</div>
<div>
<br /></div>
<div>
For that, start <b>parted /dev/sda</b>, then issue commands</div>
<div>
<ul>
<li><b>unit s</b> - this makes parted use units of sectors instead of MB/GB/etc - this is crucial for being exact when recreating your new partition</li>
<li><b>print free </b>- will list the current partition table along with the free space at the end</li>
<li><b>rm 3</b> - will delete the 3rd partition (check that it is the correct number - mine was 5 for some odd reason)</li>
<li><b>mkpart ext4</b> - will create a new partition in its place; make sure to specify the same start sector as was printed, and the last sector of the free space, so that you use the whole disk. If it complains about alignment, press Ignore - it will still be minimally aligned.</li>
<li>Now reboot - I didn't find a working method of forcing Synology kernel to reread the partition table. Even though my data partition was /dev/sda5 and after recreation became /dev/sda3, Linux RAID was still able to detect it (probably using UUID) after reboot and assemble RAID array correctly.</li>
</ul>
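Put together, the parted session looks roughly like this. This is a transcript sketch: the partition number and sector values are whatever print free reports on your disk, not literal values to copy:

```
parted /dev/sda
(parted) unit s          # work in sectors, not MB/GB
(parted) print free      # note the start sector of the old partition
                         # and the last sector of the trailing free space
(parted) rm 3            # delete the old data partition (check the number!)
(parted) mkpart ext4     # recreate it; enter the noted start and end sectors
(parted) quit
```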
<div>
<br /></div>
<div>
2. After reboot you should see lots of space in /proc/partitions, but <b>mdadm -D /dev/md2</b> will say you still use only a fraction of it.</div>
</div>
<div>
<br /></div>
<div>
Run <b>mdadm --grow /dev/md2 --size max</b> - this should do the trick. But it didn't work for me. In fact, that was the trickiest part to figure out. If it actually increased the size of your RAID array, proceed to step 3, but if it kept the old size, read on.</div>
<div>
<br /></div>
<div>
Now you need to reassemble the RAID array, asking it to update the device size. Unfortunately, you need to unmount the filesystem and disable LVM for it to work.</div>
<div>
<ul>
<li>Stop all Synology apps in the GUI, like Download manager and others.</li>
<li>Stop other services accessing the filesystem (/volume1), I had to stop these:<br />/usr/syno/etc/rc.d/S20pgsql.sh stop<br />/usr/syno/etc/rc.d/S78iscsitrg.sh stop<br />/usr/syno/etc/rc.d/S81atalk.sh stop<br />/usr/syno/etc/rc.d/S83nfsd.sh stop<br />/usr/syno/etc/rc.d/S84rsyncd.sh stop<br />/usr/syno/etc/rc.d/S85synonetbkpd.sh stop<br />/usr/syno/etc/rc.d/S88synomkflvd.sh stop<br />/usr/syno/etc/rc.d/S66S2S.sh stop<br />/usr/syno/etc/rc.d/S66fileindexd.sh stop<br />/usr/syno/etc/rc.d/S66synoindexd.sh stop<br />/usr/syno/etc/rc.d/S77synomkthumbd.sh stop<br />/usr/syno/etc/rc.d/S80samba.sh stop</li>
<li><b>umount /volume1</b></li>
<li><b>umount -f /volume1/@optware </b>- this one was trickier, thus -f. After that you will probably lose your SSH session. Go to the web GUI and enable SSH again in Control Panel/Terminal.</li>
<li><b>umount /volume1/@optware</b> - after reconnecting, you need to do this once more</li>
<li><b>vgchange -a n </b>- this will deactivate your LVM volume group, and with it the logical volume. This will work only if both /volume1 and /volume1/@optware are unmounted.</li>
<li><b>mdadm -S /dev/md2</b> - only now you can stop the RAID array</li>
<li><b>mdadm -A /dev/md2 -U devicesize /dev/sda3</b> - this will reassemble the RAID array, updating the device size</li>
<li><b>mdadm --grow /dev/md2 -z max </b>- finally, growing of the array will work!</li>
</ul>
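Condensed, the reassembly sequence looks like this. The rc.d script names are from my DSM version and the list here is abbreviated; stop whatever still holds /volume1 open on your box:

```shell
# Everything below assumes /volume1 can actually be released --
# stop the remaining Synology services first (list abbreviated).
/usr/syno/etc/rc.d/S80samba.sh stop           # ...plus the other S* scripts above
umount /volume1/@optware                      # may need -f and an SSH reconnect
umount /volume1
vgchange -a n                                 # deactivate the LVM volume group
mdadm -S /dev/md2                             # stop the RAID array
mdadm -A /dev/md2 -U devicesize /dev/sda3     # reassemble with updated device size
mdadm --grow /dev/md2 -z max                  # now the grow succeeds
```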
<div>
<br /></div>
<div>
3. Resizing of LVM physical volume is then easy.</div>
</div>
<div>
<ul>
<li><b>vgdisplay -v </b>- will show you everything, use it to check your resizing steps.</li>
<li><b>pvresize /dev/md2</b> - will use all available space on /dev/md2</li>
</ul>
<div>
<br /></div>
<div>
4. Check vgdisplay - at this point it will show how much free space you have in the LVM volume group.</div>
<div>
<br /></div>
<div>
5. The free space in the VG should now be allocated to the logical volume (LV). I didn't find a way of telling it to use all available space, so I did it in steps:</div>
</div>
<div>
<ul>
<li><b>vgdisplay -v</b> - will show you how much free space you have in your PV</li>
<li><b>lvextend -L+100G /dev/vg1000/lv </b>- will extend the LV by 100 Gb. You can specify the exact amount of free space reported by vgdisplay here, or just do it several times until no free space is available.</li>
</ul>
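For what it's worth, LVM also has an option to allocate all remaining free extents in one go, which would avoid the repeated -L+100G calls; I haven't verified that the LVM tools shipped with this DSM version support it:

```shell
# Extend the logical volume by all free extents in the volume group
# (untested on the DSM version discussed here).
lvextend -l +100%FREE /dev/vg1000/lv
```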
<div>
<br /></div>
<div>
6. Congratulations! Now we have a bigger underlying storage, but we still need to resize the ext4 filesystem on top of LVM.</div>
</div>
<div>
<ul>
<li><b>e2fsck -f /dev/vg1000/lv</b> - this is the required step before doing the actual resize.</li>
<li><b>resize2fs /dev/vg1000/lv </b>- this will take a long time again</li>
</ul>
<div>
<br /></div>
<div>
But once the resize is completed you can reboot the NAS and enjoy your newly available free space!</div>
</div>
<div>
<br /></div>
<div>
Too bad Synology's own tools could not do this, so I had to spend many hours researching the setup and doing it manually. Hopefully this blog post will help someone save time.</div>
<div>
<br /></div>
<div>
Good luck!</div>
<b>How I fixed my new YN565EX flash</b> (2012-02-18)<br />
<br />
Yesterday I received my brand new <a href="http://www.hkyongnuo.com/e-detail.php?ID=288">YN565EX flash</a> by Yongnuo from Hong Kong. It took almost a month to get to Estonia, but shipping was free.<br />
<br />
The flash is very similar in specs to the much more expensive Canon 580EX II, with minor exceptions, e.g. no Hi-Speed sync or ETTL Master mode. However, the flash supports ETTL Slave mode for both Canon and Nikon systems, which was the primary reason for buying it: use as a powerful (guide number 58) slave flash unit in automatic TTL mode. As a bonus it can be a slave in manual mode as well - overall a better option than buying a 430EX from Canon for the same purpose, which is even slightly more expensive. The reviews on the net were very good, and I was impressed with the build quality, which matches flashes made by Canon.<br />
<br />
<b>But all would be cool if it worked...</b> There is <a href="http://strobist.blogspot.com/2011/04/what-china-doesnt-understand.html">much scepticism</a> about buying Chinese flashes, and there seems to be a reason for that. The flash powered on for the first time and then quickly shut down. It completely ignored any presses of the On button afterwards, and soon I noticed that the batteries got very hot while inserted into the flash unit. Bad, bad, bad. My alternative would have been to pay for return shipping and wait 2 more months until a new unit was delivered. Not a sexy option.<br />
<br />
So I decided to try to fix it myself. The hot batteries suggested there was a short circuit somewhere, and if it was not in a major component, it should be fixable.<br />
<br />
The flash is very similar in design to Canon 580EX, so searching for <a href="http://www.strappe.com/pics/manuals/photography/Speedlite%20580EX.pdf">its schematics</a> helped to understand how to disassemble it.<br />
<div>
<br /></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTnV4Bk5wPOUMUEV8ueOtu8tAHEEuu-eImvtMSWeAw1S1BbAeEJvc8fpdtT5dSZk6aOXh2frPYDo2beNDplUXzCzy2T1Uu7TygO5UtUOu-xYqpT2Ljy1J3ySBAeN09U747W9XiGy1bpx7o/s1600/warranty-sticker.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTnV4Bk5wPOUMUEV8ueOtu8tAHEEuu-eImvtMSWeAw1S1BbAeEJvc8fpdtT5dSZk6aOXh2frPYDo2beNDplUXzCzy2T1Uu7TygO5UtUOu-xYqpT2Ljy1J3ySBAeN09U747W9XiGy1bpx7o/s200/warranty-sticker.jpg" width="149" /></a>1. Unscrew the 4 screws on the bottom of the flash unit to remove hotshoe connectors (contacts can be easily disconnected once the bottom panel is open)<br />
<br />
<div>
2. Turn the flash head 90 degrees to reveal 2 more screws at the upper part of the body. One is under a sticker which looks like a 'warranty void' one, but as there are no such words on it, it felt safe to remove. I suggest taking pictures of the screws to be able to reassemble everything later - they are all different.</div>
<div>
<br /></div>
<div>
3. Now comes the most difficult part, which I struggled with for some time. By now you should be able to remove the front cover of the body, but it feels like there is still something holding it at the top - actually there are no more screws, so just keep gently pulling until it comes off. No sliding is necessary; just pull the front panel off the back panel - the position of the head is not important at this point. Once you succeed, disconnect the two connectors and put the front cover away.<br />
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgaCNkfBe5iPzK9RI91ZCV5SoZUIHBvcl_vzvNCFt2I1mU4d9Nc775ehTsxaFRkhCAgGTTdNo8yGnqult5ZWhKC8w5i0JMB4-On7rcT2ZXXwsVTOhEr9nvLiuJanLPbJvnIJKPt6JxPSsa/s1600/front-cover-open.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgaCNkfBe5iPzK9RI91ZCV5SoZUIHBvcl_vzvNCFt2I1mU4d9Nc775ehTsxaFRkhCAgGTTdNo8yGnqult5ZWhKC8w5i0JMB4-On7rcT2ZXXwsVTOhEr9nvLiuJanLPbJvnIJKPt6JxPSsa/s320/front-cover-open.jpg" width="320" /></a>At this point I put the batteries back in several times and used a voltmeter to find where the voltage drop happened. After some tries I was lucky to see some smoke. This might be scary, but I quickly removed the PCB at the bottom of the flash, on which the coil and the 1000uF capacitor are mounted (see the picture). <b>Be careful when touching the parts - you may get an electric shock</b> even with the batteries removed, due to the giant 300V capacitor hiding at the base of the flash head.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiea5xqqUo8DotMBoLMf4eFsf327qPOgcYT4nzBNABwsmpiYTLMo0hcpWQs3v78yqv7D0k96eSXfQEPbXkneZzLScobuY3DKk-AHtdNCASZW09B1Wm24pSSkvtoMOUyjz2cI_6UE7gCMv2c/s1600/problem-spot.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiea5xqqUo8DotMBoLMf4eFsf327qPOgcYT4nzBNABwsmpiYTLMo0hcpWQs3v78yqv7D0k96eSXfQEPbXkneZzLScobuY3DKk-AHtdNCASZW09B1Wm24pSSkvtoMOUyjz2cI_6UE7gCMv2c/s320/problem-spot.jpg" width="239" /></a></div>
<div>
The cause of the smoke was most probably residual soldering flux on the back of the board - see the picture to the right. It had melted and was probably the reason for the short circuit. I used a small screwdriver to clean it off between the two large contacts. Note that checking for this with a multimeter is difficult, because there are capacitors mounted here, so probing for resistance can still show some value until the capacitors are charged (and there is a big one there!). <b>After inserting the batteries the flash turned on! </b>The problem was identified, so I spent some more time cleaning the contacts by gently scratching them and wiping them off with alcohol.</div>
<br />
Now the flash works perfectly, both in the hotshoe and as a slave. There are no problems with the balance of light with my Canon 580EX II - at least I haven't noticed any difference so far. The only minor problem I have noticed is that the modelling light (when the depth-of-field button is pressed) gets darker towards the end of the 1-second interval it is supposed to be visible for, but this may depend on how well the batteries are charged.<br />
<br />
Overall, I am pretty happy now. Buy yourself one if you have basic disassembly and electronics skills to protect yourself against minor manufacturing defects like this. I really hope the QC at Yongnuo will improve.<br />
<br /></div>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com212tag:blogger.com,1999:blog-6458362026107583183.post-20457676910221308962012-02-01T23:25:00.000+02:002012-02-01T23:34:23.924+02:00Beautiful front-end for Picasa web galleries<div class="separator" style="clear: both; text-align: left;">
Ever wanted to make your Picasaweb gallery look better?</div>
<br />
Although I don't use the Picasa desktop application, I upload many of <a href="http://photos.azib.net/">my photos</a> to Picasaweb. Partly this is for historical reasons; partly I like using my existing Google account and the very cheap storage (20 GB for only $5 per year - hard to beat).<br />
<br />
However, I am really not happy about how Google's gallery looks - both the original Picasaweb interface, which has a distracting white background, and the newer Google Plus-integrated interface, which doesn't display my album maps and hides my photo captions too quickly. And the URLs have recently become uglier - Google no longer displays your username in the URL, but has replaced it with a bunch of numbers. Oh well...<br />
<br />
So, because there wasn't any better solution, I decided to quickly write a new, more beautiful front-end for Picasaweb during the Christmas holidays. I still like the storage space and the direct uploads to Picasaweb from most open-source and commercial photo management software, so there was no need to reinvent the back-end part.<br />
<br />
Here is what I came up with:<br />
<br />
<a href="http://photos.azib.net/"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqlbXdo2atLq5wJEEC6yiJw_zeR9ILZwP_27zvK-jIQoS4MoBgkBteLl_yKKWs7Y06XfsGfAt-2xzxbNX0jxDKuYXEnNKqcy1mG0byWUKa1Rhej25cjhRZ8CzobK9HoLVghctmT-js2OuG/s640/photos-screen.jpg" style="width: 100%;" /></a>
<div style="text-align: center;"><a href="http://photos.azib.net/">Anton Keks photos</a> rendered by <a href="https://github.com/angryziber/picasa-gallery">picasa-gallery</a></div>
<br />
As Google provides a Java API for accessing its services, I decided to write a simple Java web app and host it on Google App Engine (for free!).<br />
<br />
The web app fetches all the data from Picasaweb, but displays the albums and photos completely in its own interface, which is (surprisingly) quicker than Google's own and looks more beautiful, with minimal distraction from the photos themselves. See <a href="http://photos.azib.net/">my example photo gallery</a> to check out the transition effects, the full-screen photo viewer with copy-pasteable URLs, cross-fading between photos, and search within your photo metadata (captions, labels, etc). It is optimized for mobile browsers as well.<br />
<br />
<a href="https://github.com/angryziber/picasa-gallery">The source code is available on GitHub</a> and is licensed under the GPL. You can install it to your own <a href="http://code.google.com/appengine/">Google App Engine</a> account for free (see the instructions on GitHub) or use the link at the bottom of <a href="http://photos.azib.net/">my photo gallery</a> to display your own photos - the link will be permanent, so you can even send it to your friends!<br />
<br />Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com9tag:blogger.com,1999:blog-6458362026107583183.post-41817167313429947742012-01-24T01:15:00.003+02:002012-01-24T01:18:17.983+02:00How I resurrected my TomTom GO 920This stuff probably applies to any hardware TomTom device, including GO 520, 720, 920, ONE, XL, etc<br />
<br />
I have owned my TomTom GO 920 since 2007. It worked pretty well, but during the last trip to Spain it started failing.<br />
<br />
Symptoms included:<br />
<ul>
<li>Doesn't turn on after being put to sleep</li>
<li>Reset button doesn't work (the hole at the bottom of the device)</li>
<li>Hangs during operation; reset turns the screen off, and it can't be turned on anymore</li>
</ul>
<div>
<br />
I thought my TomTom was bricked for good - it seemed like a hardware problem to me, and no solutions on the net helped.</div>
<div>
<br /></div>
<div>
However, after the device had been gathering dust for some months, I decided to give it another try: I plugged it into USB and the device turned on again! But only until it hung again and reset stopped working again.</div>
<div>
<br /></div>
<div>
So, here are the steps I used to resurrect it:</div>
<div>
<ol>
<li>Leave the device lying around without a power source until the battery completely drains</li>
<li>Plug it into USB, turn it on by holding the power button for a few seconds (it should turn on by now if the battery was indeed empty - I have tested this several times)</li>
<li>If it asks whether to connect to the computer - choose YES</li>
<li>Start TomTom HOME (yes, that stupid Windoze-only program - I used VirtualBox for running it; it doesn't work under Wine)</li>
<li>Go to manage my TomTom</li>
<li>Check The Application (the TomTom GUI interface) and choose to delete it from the device</li>
<li>Close TomTom HOME, restart it</li>
<li>HOME will detect that you don't have the Application installed and will offer to install it</li>
<li>Let it reinstall the Application</li>
<li>Delete and copy the map as well, either with TomTom HOME or just manually</li>
</ol>
<div>
<br />
That's it! The TomTom will now turn off and on again. However, it occasionally still needs the reset button to be pushed after being put to sleep - but at least the reset button works now.</div>
</div>
<div>
<br /></div>
<div>
I guess the problem was data corruption: when you turn on or reset the device, the bootloader takes control and tries to run the Application, but it is mysteriously corrupted and fails. If the above steps don't help, you can also try updating the bootloader - there are instructions on the net on how to do it, but you still need working USB mass storage, so let the battery drain first.</div>
<div>
<br /></div>
<div>
Hopefully this solution is not just temporary; it has been working for 3 days already!</div>
<br />
<b>Java course IAG0040 in Tallinn Technical University</b> (2012-01-12)<br />
<br />
Now is the time to make an official announcement: I am no longer teaching Java to Master's students at Tallinn Technical University (TTÜ).<br />
<br />
I have really enjoyed doing it during the past 6 years, and I definitely learned a lot during this time.<br />
Teaching at a university is not something you do for the money, and it also eats up a lot of your spare time. I did it because I know I am good at it and I wanted to share my knowledge and experience in the field, especially considering that most university professors and lecturers have little understanding of how software development really works, or of how to teach it. I did it for those students each year whose eyes were really shining. Because of them, it was worth it.<br />
<br />
However, Java is no longer the coolest programming language out there; it is slowly dying due to very slow development. Its features are outdated, and it is not as productive as I would like it to be. It's time to move on. Java probably still is one of the best programming languages to teach in universities, but students should understand that the software development field is evolving quickly, and they need to keep an eye on what is happening in the industry.<br />
<br />
I am still a big fan of the JVM - it is well tuned for performance and it is cross-platform. .NET/C# is not really an option. I really like <a href="http://www.scala-lang.org/">Scala</a>; <a href="http://clojure.org/">Clojure</a> seems very interesting, as do <a href="http://en.wikipedia.org/wiki/Vala_(programming_language)">Vala</a>, <a href="http://golang.org/">Go</a> and <a href="http://www.dartlang.org/">Dart</a>. <a href="http://confluence.jetbrains.net/display/Kotlin/Welcome">Kotlin</a> seems very promising as well. Sooner or later there will be another popular language on the JVM that most Java developers will be able to shift to. Hopefully there won't be too many such languages, and hopefully they will be <a href="http://en.wikipedia.org/wiki/Type_system">statically typed</a>. I recommend every Java developer check out the <a href="http://www.playframework.org/">Play framework</a> for a fresh look at Java. Anyway, the current trend in software development is to merge academic computer science and industrial programming together again - you should now pay a lot more attention to <a href="http://en.wikipedia.org/wiki/Functional_programming">functional programming</a> in addition to the good old object-oriented programming (OOP). The concept of <a href="http://en.wikipedia.org/wiki/DevOps">DevOps</a> is also becoming more and more important, meaning that soon you will not be able to survive as a developer if you don't know how to create a software product from scratch to the end, deploying it to the production system yourself.<br />
<br />
However, I am not going to leave the teaching 'business'. I will still give talks at conferences, organize trainings, and hopefully contribute to making IT education better with Codeborne Academia - but more on that later. I am still open to offers from universities and colleges as well; feel free to contact me.<br />
<br />
<span style="font-size: large;"><b>Now the important part</b></span><br />
<br />
All the code written during the course by me and the students over the past 6 years is now available on GitHub:<br />
<a href="https://github.com/angryziber/java-course"><span style="font-size: large;">https://github.com/angryziber/java-course</span></a><br />
<br />
The latest (2011) lecture slides are available on Slideshare:<br />
<a href="http://www.slideshare.net/antonkeks/presentations"><span style="font-size: large;">http://www.slideshare.net/antonkeks/presentations</span></a><br />
<br />
Or go to specific lectures using the links below:<br />
<ol><li><a href="http://www.slideshare.net/antonkeks/1-introduction-10973556">Introduction</a></li>
<li><a href="http://www.slideshare.net/antonkeks/2-basics">Java basics, program flow</a></li>
<li><a href="http://www.slideshare.net/antonkeks/3-objects">OOP in Java</a></li>
<li><a href="http://www.slideshare.net/antonkeks/4-collections">Exceptions and Collections</a></li>
<li><a href="http://www.slideshare.net/antonkeks/5-generics">Generics, Enums, Assertions</a></li>
<li><a href="http://www.slideshare.net/antonkeks/6-agile">Unit testing and Agile software development</a></li>
<li><a href="http://www.slideshare.net/antonkeks/7-text">Text processing, Charsets & Encodings</a></li>
<li><a href="http://www.slideshare.net/antonkeks/8-io">I/O, Files, Streams</a></li>
<li><a href="http://www.slideshare.net/antonkeks/9-networking">Networking, Reflection</a></li>
<li><a href="http://www.slideshare.net/antonkeks/10-threads">Threads and Concurrency</a></li>
<li><a href="http://www.slideshare.net/antonkeks/11-patterns">Design Patterns</a></li>
<li><a href="http://www.slideshare.net/antonkeks/12-xml">Web, Servlets, XML</a></li>
<li><a href="http://www.slideshare.net/antonkeks/java-course-13-jdbc-db-access">JDBC, Logging</a></li>
<li><a href="http://www.slideshare.net/antonkeks/java-course-14-beans-applets-gui">Java Beans, Applets, GUI</a></li>
<li><a href="http://www.slideshare.net/antonkeks/15-advanced">Advanced: Ant, Scripting, Spring & Hibernate</a></li>
</ol>
<br />
<b>jQuery filters out script elements</b> (2012-01-04)<br />
<br />
Sometimes you want to fetch new HTML content with AJAX, parse the incoming HTML, and insert it into the document:<br />
<br />
<pre>$.get('/some-content', function(html) {
$(html).find('#content').appendTo('body');
});
</pre><br />
or just use the load() function as a short-cut:<br />
<br />
<pre>$('body').load('/some-content #content');
</pre><br />
The problem is, this doesn't execute any embedded script tags. <br />
Actually, doing this:<br />
<br />
<pre>$('<div><script>alert(1)</script></div>')</pre><br />
you will get a jQuery set containing two elements, not one DOMElement as most people would expect!<br />
The set will contain the original div as a DOMElement (with all its content except the script) and the script separately, also as a DOMElement. If there were more scripts in the original unparsed string, you would get all of the script elements separately in the parsed set.<br />
<br />
The workaround is to execute all script elements manually whenever you do DOM manipulation with the incoming HTML:<br />
<br />
<pre>$.get('/some-content', function(html) {
$(html).find('#content').appendTo('body');
$(html).filter('script').appendTo('body');
});
</pre><div><br />
</div><div>Sad, but true...<br />
<br />
Some more background info from comments on jQuery site:<br />
<blockquote class="tr_bq"><em>All of jQuery's insertion methods use a domManip function internally to clean/process elements before and after they are inserted into the DOM. One of the things the domManip function does is pull out any script elements about to be inserted and run them through an "evalScript routine" rather than inject them with the rest of the DOM fragment. It inserts the scripts separately, evaluates them, and then removes them from the DOM.</em></blockquote></div><div><br />
</div>
<br />
<b>How to use XFS on a Synology NAS device</b> (2011-12-07)<br />
<br />
After returning from a trip to Israel and Jordan last month, I discovered that my home server's motherboard was dead. After considering different options, I realized that I really only need the server to host my RAID1 hard disks, where I store all my photos and other content.<br />
<br />
So that's how I got my <b>Synology DS212J NAS</b>, a Linux-powered device with low-level access in case it is needed. Now the problem was how to get my 2 HDDs running in that box while preserving all the data.<br />
<br />
Unfortunately, the first install of the box requires wiping the data on the HDD. Fortunately, I had a RAID1 setup with two exact copies of all the data, so I decided to sacrifice one of the disks to install the firmware, and later insert the second one and copy the data over.<br />
<br />
Now, the problem was that my data was on a <b>XFS</b> partition, which DS212J doesn't support by default - it uses <b>EXT4</b> for its partitions.<br />
<br />
Fortunately, Synology is kind enough to provide an SDK which makes it possible to build missing kernel modules and enable more features on the device.<br />
<br />
Here is how to do it:<br />
<ol><li>Enable terminal access in the web interface of the device - then you can login to the box using <b>ssh root@diskstation.local </b>or whatever name/IP you have given it</li>
<li>Go to <a href="http://sourceforge.net/projects/dsgpl/">http://sourceforge.net/projects/dsgpl/</a> - click Files</li>
<li>Download the <b>DSM</b> <b>tool chains</b> (gcc ARM cross compiler) for your firmware (currently <b>3.2</b>) - for DS212 it is in the <b>Marvell 88F628x</b>, if you have another model use either /proc/cpuinfo or dmesg to find out your CPU type</li>
<li>Download the <b>Synology NAS GPL Source </b>also from sourceforge - this is a huge archive that includes source code of all the kernels that Synology uses (check your kernel version with <b>uname -a</b>, mine uses 2.6.32)</li>
<li>Extract both archives to <b>/usr/local </b>(or at least symlink them there) - this is important, because the kernel Makefile already points to a cross compiler from the tool chain located there</li>
<li><b>cd /usr/local/source/linux-2.6.32 </b>(from the GPL source archive)</li>
<li>Copy the correct kernel config for your device - Synology kernel configs are located in <b>synoconfigs</b>, so for the DS212J do this: <b>cp synoconfigs/88f6281 .config</b></li>
<li><b>make menuconfig </b>to use kernel's menu-based configuration utility (ensure that you have libncurses installed for it to work)</li>
<li>In the menu, locate the needed additional drivers (File Systems -> XFS in my case), and press <b>M </b>to enable module compilation</li>
<li>Exit and <b>make modules</b></li>
<li>This will compile the required .ko files. XFS driver was in <b>fs/xfs/xfs.ko </b>after compilation was completed, but <b>modinfo </b><b>fs/xfs/xfs.ko </b>also told me that xfs.ko depends on the <b>exportfs.ko</b> as well.</li>
<li>I needed to copy both modules, <b>xfs.ko</b> and <b>fs/exportfs/exportfs.ko</b>, to the device to make it work; otherwise the kernel refused to load the xfs module. To copy the files, use either FTP (enabled from the web interface) or the web interface itself. It doesn't matter where you put them.</li>
<li>Login to the device with <b>ssh root@diskstation.local</b> again (note, you need to be root, not admin)</li>
<li>Go to the directory where you uploaded the .ko files (<b>cd /volume1/public</b> in my case)</li>
<li>Load the modules: <b>insmod exportfs.ko</b>, then <b>insmod xfs.ko </b>- if it doesn't complain, then you have XFS support enabled (or whatever other driver you needed to load)</li>
<li>Then create a directory anywhere and mount your second HDD to it, then copy the files with <b>cp</b> or <b>rsync</b>. Example: <b>mount -t xfs /dev/hda storage </b>- check which device name your HDD received. Mine was hda, because I had installed the firmware to hdb before. Run <b>mount</b> without arguments to see where your firmware is located. Also, my disk didn't have a partition table, only a single partition starting in the MBR - that's why I used /dev/hda there and not /dev/hda1 or similar. Use <b>parted /dev/hda</b> for more info on which partitions you have.</li>
<li>Rsync is a great way to copy the files then, eg <b>rsync -arv storage/ /volume1/</b> - this will preserve all file attributes</li>
<li>When copying is complete, add the second HDD to the Synology RAID using the web interface</li>
</ol><div><br />
Note: fortunately I didn't have to fully replace the stock kernel on the device - loading these two new modules was enough. At first, though, when loading only <b>xfs.ko </b>failed due to missing symbols, I thought I would need to do that, and I even tried it without success (before discovering that I actually needed to load exportfs.ko). </div><div><br />
</div><div>FYI: Synology's kernel is located in the flash memory, not on the HDD like the rest of the system. The flash partition devices are <b>/dev/mtd*</b>, with /dev/mtd0 being the boot loader (don't touch this one - it provides the ability to reinstall the rest of the firmware with Synology Assistant over the network) and <b>/dev/mtd1</b> being the <b>uImage</b> of the kernel.</div><div><br />
</div><div>If you still need to replace the kernel, you may try to <b>make uImage </b>(make sure you have <b>uboot </b>installed for this to work - this is what Synology uses), copy the <b>uImage </b>file to the device and then <b>cat uImage > /dev/mtd1 </b>- but do it at your own risk: I am not sure whether Synology Assistant will still work if you flash the wrong kernel and reboot. I guess it should, but I haven't tested it :-)</div><br />
Hopefully this will be useful for someone - in the same way you can add support for the Apple/Mac HFS filesystem, ReiserFS and others.Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com1tag:blogger.com,1999:blog-6458362026107583183.post-14848166714180426772011-11-27T17:04:00.002+02:002011-11-27T17:14:53.279+02:00Reusing Shotwell thumbnails in NautilusAs I have lots of photos on my machine, thumbnails start to consume a considerable amount of disk space.<br />
<br />
Another problem is that gnome-raw-thumbnailer isn't enabled in Ubuntu (Natty, Oneiric) by default anymore, so my raw photos don't get thumbnailed in Nautilus. And if I enable it manually, thumbnails of vertical photos don't show with the correct orientation.<br />
<br />
So, I researched the freedesktop thumbnail spec, the <a href="https://live.gnome.org/ThumbnailerSpec">gnome thumbnailer spec</a> and how Shotwell stores its thumbnails, and came up with a shell script that reuses Shotwell thumbnails for Nautilus.<br />
<br />
Save the script below as <b>/usr/bin/shotwell-raw-thumbnailer</b><br />
<pre class="brush: java">#!/bin/bash
input=$1
output=$2
if [ -z "$output" ]; then
  echo "Usage: $0 input output"
  exit 1
fi
# strip the file:// scheme and decode %xx escapes to get the real path
file=`echo -n "${input##file://}" | perl -pe 's/%([0-9a-f]{2})/sprintf("%s", pack("H2",$1))/eig'`
# thumbnail names are the MD5 of the original URI (freedesktop spec)
md5=`echo -n "$input" | md5sum | awk '{print $1}'`
shotwell_id=`sqlite3 ~/.shotwell/data/photo.db "select id from PhotoTable where filename = '$file'"`
if [ -z "$shotwell_id" ]; then
  gnome-raw-thumbnailer "$input" "$output"
  exit
fi
thumb=`printf ~/.shotwell/thumbs/thumbs128/thumb%016x.jpg $shotwell_id`
if [ ! -e "$thumb" ]; then
  gnome-raw-thumbnailer "$input" "$output"
  exit
fi
replaceWithLink() {
  sleep 1
  ln -sf "$thumb" ~/.thumbnails/normal/"$md5".png
}
# gnome-thumbnail-factory doesn't support links
cp "$thumb" "$output"
# however, linked thumbnails work, so replace them after a delay
replaceWithLink &
</pre><br />
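The md5 computed by the script is what Gnome will look up in ~/.thumbnails - per the freedesktop spec it is the MD5 hash of the URI string itself, not of the image contents. A quick way to check by hand what name will be used (the path below is a made-up example):

```shell
# Thumbnail cache entries are named after the MD5 of the full file:// URI.
uri="file:///home/user/Photos/example.jpg"
name=$(echo -n "$uri" | md5sum | awk '{print $1}').png
echo "$name"    # 32 hex digits followed by .png
```

Gnome then looks this name up under ~/.thumbnails/normal (or /large for the 256x256 size).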
To make it work, you then need to register it as a thumbnailer in Gnome - put this into <b>/usr/share/thumbnailers/shotwell.thumbnailer</b>:<br />
<pre class="brush: java">[Thumbnailer Entry]
Exec=/usr/bin/shotwell-raw-thumbnailer %u %o
MimeType=image/x-3fr;image/x-adobe-dng;image/x-arw;image/x-bay;image/x-canon-cr2;image/x-canon-crw;image/x-cap;image/x-cr2;image/x-crw;image/x-dcr;image/x-dcraw;image/x-dcs;image/x-dng;image/x-drf;image/x-eip;image/x-erf;image/x-fff;image/x-fuji-raf;image/x-iiq;image/x-k25;image/x-kdc;image/x-mef;image/x-minolta-mrw;image/x-mos;image/x-mrw;image/x-nef;image/x-nikon-nef;image/x-nrw;image/x-olympus-orf;image/x-orf;image/x-panasonic-raw;image/x-pef;image/x-pentax-pef;image/x-ptx;image/x-pxn;image/x-r3d;image/x-raf;image/x-raw;image/x-rw2;image/x-rwl;image/x-rwz;image/x-sigma-x3f;image/x-sony-arw;image/x-sony-sr2;image/x-sony-srf;image/x-sr2;image/x-srf;image/x-x3f;
</pre><br />
So, what does this script do?<br />
<ul><li>When Gnome (or Nautilus) needs a thumbnail, it runs this script</li>
<li>The script checks if the image has an entry in the Shotwell database (~/.shotwell/data/photo.db)</li>
<li>Then it checks if Shotwell has a thumbnail for it (in ~/.shotwell/thumbs)</li>
<li>If yes, the script returns the already generated thumbnail to Gnome - no generation needed, so it works much faster</li>
<li>If Shotwell doesn't have the thumbnail, the call is delegated to gnome-raw-thumbnailer that generates a new thumbnail, the old-fashioned way</li>
<li>If Shotwell's thumbnail was used, the script will asynchronously replace the thumbnail in ~/.thumbnails with the link to Shotwell's file, avoiding a copy on the disk</li>
</ul><div><br />
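The database lookup and thumbnail-name construction described above can be sketched in isolation against a throwaway database (the schema here is reduced to the two columns the script uses; Shotwell's real PhotoTable has many more, so this is only an illustration):

```shell
# Build a tiny stand-in for ~/.shotwell/data/photo.db and repeat the lookup.
db=$(mktemp)
sqlite3 "$db" "create table PhotoTable (id integer, filename text);
               insert into PhotoTable values (42, '/home/user/Photos/a.cr2');"
id=$(sqlite3 "$db" "select id from PhotoTable where filename = '/home/user/Photos/a.cr2'")
# Shotwell names its thumbnails after the zero-padded hex id:
printf 'thumb%016x.jpg\n' "$id"   # prints thumb000000000000002a.jpg (42 = 0x2a)
rm -f "$db"
```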
The last step is the one that saves disk space. Unfortunately, it is not possible to return a link right away to Gnome - it can't read it for some reason. However, putting a link directly under ~/.thumbnails later works perfectly, even if we put a .jpg file under a .png name (as required by the spec). PNG is actually a worse choice for thumbnails of photos due to its lossless compression, so the disk savings are more than twofold with this script.</div><div><br />
</div><div>The next step would be to rewrite this in C or Vala to make it even faster, and maybe even make Shotwell create these links right away when it generates the thumbnails.</div>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com4tag:blogger.com,1999:blog-6458362026107583183.post-22280194218105137332010-10-19T01:43:00.001+03:002010-10-19T01:44:26.968+03:00Simple Rsync GUI: easy backups from NautilusMost often, making backups of your important files is a manual process, especially if you are dealing with large collections of photos.<br />
<br />
In the meantime I have written a small and convenient Nautilus script (for Gnome users) for doing exactly that.<br />
<br />
Features:<br />
<ul><li>Syncs to any mounted location or over SSH (everything that rsync supports)</li>
<li>Remembers previously used locations</li>
<li>Preview of changes (any deletions are shown first, but performed the last)</li>
<li>Nice progress bar with upload speed display</li>
</ul><div><br />
</div><div>Everything is written as a simple bash script using Zenity for the GTK GUI - just drop it into the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">~/.gnome2/nautilus-scripts</span> directory, and it will appear in the <b>Nautilus right-click menu</b>, under Scripts.</div><div class="separator" style="clear: both; text-align: -webkit-auto;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0791wLYnukxr8j-0oG20ELs_qdWDAjDnsQ8tMjqcKyfRcyeTyI7Pk9ZySFBH0_HVHnR4RoJL0uEmxhqBJJqyoyoLPMccmIWsRm3ZlrcLEtKbHBJPuW_Jj-j7Ymyap2z1zz3IjQ4j842q/s1600/Sync2.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><br />
</a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiowheY1ITNX8k_7Gb2tB2lNHyo2YpG_Ex2YhGA5xqdZgeJqbmna-WCtFOJwCFsO5YPfi0um8AIXT5uSWsuwbd_7xgGhGG7JYzf0aUuL22as8-SNjNx6ntoeAvR6gRzKykmEoSKIeRC2Ikl/s1600/Sync.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="204" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiowheY1ITNX8k_7Gb2tB2lNHyo2YpG_Ex2YhGA5xqdZgeJqbmna-WCtFOJwCFsO5YPfi0um8AIXT5uSWsuwbd_7xgGhGG7JYzf0aUuL22as8-SNjNx6ntoeAvR6gRzKykmEoSKIeRC2Ikl/s400/Sync.png" width="400" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0791wLYnukxr8j-0oG20ELs_qdWDAjDnsQ8tMjqcKyfRcyeTyI7Pk9ZySFBH0_HVHnR4RoJL0uEmxhqBJJqyoyoLPMccmIWsRm3ZlrcLEtKbHBJPuW_Jj-j7Ymyap2z1zz3IjQ4j842q/s1600/Sync2.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0791wLYnukxr8j-0oG20ELs_qdWDAjDnsQ8tMjqcKyfRcyeTyI7Pk9ZySFBH0_HVHnR4RoJL0uEmxhqBJJqyoyoLPMccmIWsRm3ZlrcLEtKbHBJPuW_Jj-j7Ymyap2z1zz3IjQ4j842q/s320/Sync2.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp0791wLYnukxr8j-0oG20ELs_qdWDAjDnsQ8tMjqcKyfRcyeTyI7Pk9ZySFBH0_HVHnR4RoJL0uEmxhqBJJqyoyoLPMccmIWsRm3ZlrcLEtKbHBJPuW_Jj-j7Ymyap2z1zz3IjQ4j842q/s1600/Sync2.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: 1em;"></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqoL0WRN7NMvxxSXNvIdnkSefb9J4Az4Mrr3ggnG81QSopVELctooUpjtG1O6ccgpV8EifmV8bURm1jX-4R5ZfS9OObWF9N5ITzZ-UKPPFmrKGlFF2K98iZuEvO0UMNNuNbMqnpi8RMWFl/s1600/Sync3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="203" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqoL0WRN7NMvxxSXNvIdnkSefb9J4Az4Mrr3ggnG81QSopVELctooUpjtG1O6ccgpV8EifmV8bURm1jX-4R5ZfS9OObWF9N5ITzZ-UKPPFmrKGlFF2K98iZuEvO0UMNNuNbMqnpi8RMWFl/s320/Sync3.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDNJySkL9PxYK3UbwZ5Ws5AtF3Lw3zzwlh-Bif_aULf1HQIMV3M7vpFIOVjH2HPfChYmOpb7vPzZgKwBCmqiq0Q2SBzwFnsQVTUxN8kFd-JTU4wpQlYm6IdG2LwH9MJwdAtE6w1pVabhgd/s1600/Sync4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDNJySkL9PxYK3UbwZ5Ws5AtF3Lw3zzwlh-Bif_aULf1HQIMV3M7vpFIOVjH2HPfChYmOpb7vPzZgKwBCmqiq0Q2SBzwFnsQVTUxN8kFd-JTU4wpQlYm6IdG2LwH9MJwdAtE6w1pVabhgd/s400/Sync4.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: left;">Don't forget - all this is just a frontend for rsync (which you are too lazy to run from the command line).</div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;"><b>Dependencies</b>: nautilus, zenity, rsync, bash</div><div class="separator" style="clear: both; text-align: left;"><br />
</div><div class="separator" style="clear: both; text-align: left;">And now, here is the source (save to ~/.gnome2/nautilus-scripts/Sync):</div><blockquote><pre>#!/bin/bash
# Nautilus script to sync specified folder to another destination via rsync.
# Put this to ~/.gnome2/nautilus-scripts
# Written by Anton Keks (BSD license)
paths_file=$(readlink -f "$0").paths
locations=`cat $paths_file`
sources=`cat $paths_file | awk -F'|' '{print $1}'`
if [ "$1" ]; then
source=$1
else
# add current directory also to the list
sources=`echo -e "$sources\\n$PWD" | sort -u`
# ask user to choose one of the sources
source=`zenity --list --title="Sync source" --text="No source was specified. Please choose what do you want to sync" --column=Source "$sources" Other...` || exit 1
if [ "$source" = Other... ]; then
source=`zenity --entry --title="Sync source" --text="Please enter the source path on local computer" --entry-text="$PWD"` || exit 1
fi
fi
# normalize and remove trailing /
source=`readlink -f "$source"`
source=${source%/}
if [ ! -d "$source" ]; then
zenity --error --text="$source is not a directory"; exit 2
fi
if [ "$2" ]; then
# TODO: support multiple sources
zenity --warning --text="Only one directory can be synched, using $source"
fi
# find matching destinations from stored ones
destinations=""
for s in $sources; do
if echo "$source" | fgrep -q "$s"; then
dest=`fgrep "$s" $paths_file | awk -F'|' '{print $2}'`
suffix=${source#$s}
suffix=${suffix%/*}
destinations="$destinations $dest$suffix"
fi
done
# ask user to choose one of the matching destinations or enter a new one
dest=`zenity --list --title="Sync destination" --text="Choose where to sync $source" --column=Destination $destinations New...` || exit 3
if [ "$dest" = New... ]; then
basename=`basename "$source"`
dest=`zenity --entry --title="Sync destination" --text="Please enter the destination (either local path or rsync's remote descriptor), omitting $basename" --entry-text="user@host:$(dirname $source)"` || exit 3
echo "$source|$dest" >> $paths_file
fi
# check if user is not trying to do something wrong with rsync
if [ `basename "$source"` = `basename "$dest"` ]; then
# sync contents of source to dest
source="$source/"
fi
log_file=/tmp/Sync.log
rsync_opts=-rltEorzh
echo -e "The following changes will be performed by rsync (see man rsync for info on itemize-changes):\\n$source -> $dest\\n" > $log_file
# the subshell piped to zenity cannot set variables in this shell,
# so pass rsync's exit code out via a status file
( echo x; rsync -ni $rsync_opts --delete "$source" "$dest" 2>&1 >> $log_file; echo $? > $log_file.status ) | zenity --progress --pulsate --auto-close --width=350 --title="Retrieving sync information"
rsync_result=`cat $log_file.status`
if [ $rsync_result -ne 0 ]; then
zenity --error --title="Sync" --text="Rsync failed: `cat $log_file`"; exit 4
fi
num_files=`cat $log_file | wc -l`
num_files=$((num_files-3))
if [ $num_files -le 0 ]; then
zenity --info --title="Sync" --text="All files are up to date on $dest"; exit
fi
zenity --text-info --title="Sync review ($num_files changes)" --filename=$log_file --width=500 --height=500 || exit 4
num_deleted=`fgrep delet $log_file | wc -l`
if [ $num_deleted -ge 100 ]; then
zenity --question --title="Sync" --text="$num_deleted files are going to be deleted from $dest, do you still want to continue?" --ok-label="Continue" || exit 4
fi
rsync_progress_awk="{
if (\$0 ~ /to-check/) {
last_speed=\$(NF-3)
}
else {
print \"#\" \$0 \" - \" files \"/\" $num_files \" - \" last_speed;
files++;
print files/$num_files*100 \"%\";
}
fflush();
}
END {
print \"#Done, \" files \" changes, \" last_speed
}"
# note: delete-delay below means that any files will be deleted only as a last step
rsync $rsync_opts --delete-delay --progress "$source" "$dest" | awk "$rsync_progress_awk" | zenity --progress --width=350 --title="Synchronizing $source" || exit 4
</pre></blockquote>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com28tag:blogger.com,1999:blog-6458362026107583183.post-46242045388890185162009-12-17T00:04:00.005+02:002009-12-17T00:43:06.831+02:00Deleting thumbnails for nonexistent photos<a href="http://freedesktop.org/">Freedesktop</a> has had a <a href="http://jens.triq.net/thumbnail-spec/">spec on how applications should manage image thumbnails</a> for some years already (use the Next link there). The spec is now followed by the majority of Gnome and KDE applications, including F-Spot, which is one of the very few applications that uses large 256x256 thumbnails under <span style="font-family:courier new;">~/.thumbnails/large</span>.<br /><br />The spec requires storing thumbnails in PNG format, naming the files after the MD5 sum of the original files' URLs, eg 81347ce6c37f75513c5e517e5b1895b8.png.<br /><br />The problem with the spec is that if you delete or move image files, the thumbnails stay there and take up space (for my 20000+ photos I have 1.4Gb of large thumbnails).<br /><br />Fortunately, you can clean them from time to time using simple command-line tricks, as the original URLs are stored inside the thumbnail files as <span style="font-weight: bold;">Thumb::URI</span> attributes. 
I don't recommend erasing all of your thumbnails, because regeneration takes time.<br /><br />In order to create a list of matching <span style="font-style: italic;">thumbnail-original URL</span> pairs, you can run the following in a terminal inside either the <span style="font-weight: bold;">.thumbnails/large</span> or <span style="font-weight: bold;">.thumbnails/normal</span> directory (it will take some time):<br /><blockquote style="font-family: courier new;">for i in `ls *.png`; do<br /> identify -verbose "$i" | \<br /> fgrep Thumb::URI | sed "s@.*Thumb::URI:@$i@" >> uris.txt;<br />done</blockquote>This will get you a uris.txt file, where each line looks like the following:<br /><blockquote>f78c63184b17981fddce24741c7ebd06.png <span style="font-style: italic;">file:///home/user/Photos/2009/IMG_5887.CR2</span></blockquote>Note that the thumbnail filenames (first tokens) can also be generated from the URLs (second tokens) using MD5 hashes:<br /><blockquote>echo -n <span style="font-style: italic;">file:///home/user/Photos/2009/IMG_5887.CR2</span> | md5sum</blockquote>After you have your <span style="font-style: italic;">uris.txt</span> file, it can be easily processed with any familiar command-line tools, like <span style="font-weight: bold;">grep</span>, <span style="font-weight: bold;">sed</span>, <span style="font-weight: bold;">awk</span>, etc.<br /><br />For example, in order to delete all thumbnails matching '<span style="font-weight: bold;">Africa</span>', use the following (awk picks out just the thumbnail filename from each matching line):<br /><blockquote>for i in `fgrep <span style="font-weight: bold;">Africa</span> uris.txt | awk '{print $1}'`; do rm $i; done</blockquote>So, as you can see, it is pretty simple to free a few hundred megabytes (depending on the number of thumbnails you are deleting).<br /><br />With this kind of trick you can even rename the thumbnails of moved files if you use md5sum to generate the new 
filenames from the URLs, as shown above. This will save you regeneration time.Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com147tag:blogger.com,1999:blog-6458362026107583183.post-72234185007934014092009-08-26T22:55:00.007+03:002009-08-27T18:19:02.976+03:00Announcing F-Spot Live Web Gallery extensionI am happy to announce a new extension for <a href="http://f-spot.org/">F-Spot</a>, the popular Linux photo management application - <span style="font-weight: bold;">LiveWebGallery</span>. Once installed, invoke it from the Tools menu in F-Spot's main window.<br /><div class="proposal"><p>The extension contains a minimal web server implementation that serves the user's gallery over HTTP and can be viewed with any web browser, even on Mac and Windows. So now you are able to easily share your photos with family, friends, colleagues no matter what operating system and software they use by doing just a few mouse clicks in F-Spot. The only requirement is that they have to be on the same network, or be able to access your machine's IP address in some other way.<br /></p><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIqijCbLWZE2vQJCltPlhqcePU2MRZIv4vLjs6cWE4zTuTPRH07yL84xtGMjEoD3UsEwuUNL00TciMr3QuftpPeh63W4XJvILoJKmaahotEYkdXmv57Nwhuld2wt7FbRGhIVVH8Q0Ga4ET/s1600-h/LiveWebGalleryUI.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 290px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIqijCbLWZE2vQJCltPlhqcePU2MRZIv4vLjs6cWE4zTuTPRH07yL84xtGMjEoD3UsEwuUNL00TciMr3QuftpPeh63W4XJvILoJKmaahotEYkdXmv57Nwhuld2wt7FbRGhIVVH8Q0Ga4ET/s400/LiveWebGalleryUI.png" alt="" id="BLOGGER_PHOTO_ID_5374370971733364498" border="0" /></a></p> <p>As you can see in the screenshot, you can choose whether to share photos with a particular tag, current view in F-Spot (allows you to create an arbitrary 
query) or the currently selected photos. </p><p>To activate the gallery (start the embedded lightweight web server), just click the activate button in the top-right corner. On activation, the URL of the web gallery will appear, allowing you either to open it yourself or to copy the link and give it to other viewers.</p><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7hLpXc7rIfkBfiHc6r75MgsocCcNsslYlpPDeth5ppSj6Mc5rxqhSSIOxs6H7kll-Qw22LyUcyDp6GZySgDSb3T1R2mF4784QtLXXs3gcDfVSz-YWf6eQNX3_z3efS8O6M9yy9WYRJQjr/s1600-h/LiveWebGalleryFirefox.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 354px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7hLpXc7rIfkBfiHc6r75MgsocCcNsslYlpPDeth5ppSj6Mc5rxqhSSIOxs6H7kll-Qw22LyUcyDp6GZySgDSb3T1R2mF4784QtLXXs3gcDfVSz-YWf6eQNX3_z3efS8O6M9yy9WYRJQjr/s400/LiveWebGalleryFirefox.png" alt="" id="BLOGGER_PHOTO_ID_5374377361381169970" border="0" /></a></p><p>After that, all the options can still be changed in the dialog and will affect all new viewers or those pressing the browser's reload button.</p><p>Most of us already know that many pictures are rarely viewed after they are made (<a href="http://www.wired.com/wired/archive/12.10/photo.html">Point, Shoot, Kiss It Goodbye</a>). F-Spot tries to fix this with its very powerful tagging features - tags make it much easier to find photos made long ago. This, however, is no magic - the chances of finding the right photos when needed depend on how well you tag. Now, this extension makes tagging even more useful, because other people can help you with the most difficult part - proper tagging can sometimes be a lot of work. With this extension, you can delegate some of it to other people! 
The gallery is not read-only - if you choose so, an editable tag can be selected, and viewers can add/remove this tag from photos (currently only in the full photo view). This is especially useful to let other people tag themselves in your library. For security reasons, editing is disabled by default.</p><p>As time goes by, a lot more features can be added to the Live Web Gallery extension, especially related to editing photo metadata (tagging, editing descriptions, flagging for deletion).</p> <p>As far as I know, being able to share your photos on the local network without any software or OS requirements is currently a unique feature of F-Spot. No other photo management application can do this to date.</p><p><span style="font-weight: bold;">Downloading</span><br /></p><p>The source code is on Gitorious, in the <a href="http://gitorious.org/%7Eangryziber/f-spot/antons-clone/commits/live_web_gallery/">live_web_gallery</a> branch (until it has been merged into the mainline).</p><p>To install, use the <span style="font-style: italic;">Edit->Manage Extensions</span> menu in F-Spot, click on <span style="font-style: italic;">Install Add-ins</span> and then <span style="font-style: italic;">Refresh</span>. After that, LiveWebGallery should be available under the Tools category.<br /></p><p>Or, alternatively, you can <a href="http://srv2.azib.net/%7Eanton/LiveWebGallery.dll">download the precompiled binary</a> and put it into:<br /><span style="font-family:courier new;">~/.config/f-spot/addins </span>or<span style="font-family:courier new;"> /usr/lib/f-spot/extensions</span></p><p>Note: <span style="font-weight: bold;">F-Spot 0.6</span> is required for it to work. You can already find deb/rpm packages of F-Spot 0.6 or 0.6.1 for most distributions, and it will be included in the upcoming distro releases this autumn.<br /></p><p>Hopefully, the extension will later be distributed with newer versions of F-Spot by default.<br /></p><p>Enjoy! 
Comments are welcome!<br /></p></div>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com45tag:blogger.com,1999:blog-6458362026107583183.post-45850371512854122392009-06-01T22:51:00.008+03:002009-06-01T23:42:53.174+03:00Database RefactoringA couple of months ago I have made a short keynote titled <span style="font-weight: bold;">Dinosaur Strategies: How Can Data Professionals Still Prosper in Modern Organisations</span>, inspired by <a href="http://www.ambysoft.com/scottAmbler.html">Scott Ambler</a>'s joke on the fictional <a href="http://www.waterfall2006.com/">Waterfall 2006 conference website</a>.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://java.azib.net/2009/dinosaur_strategies.pdf"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 299px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg47_mwAoB006Bgvv2IZ2ARik3pjUi7Hxgv4KvdKqa38Yj0Cvzc9pgsAQAwa5I9sFI7NOEt8d20AALwA1nCtJS7PBThhv_4V8Ri5mSI-jkMA2bWUxhK0ZCWmPpVseYi9jplw-jP5Lq7EsY3/s400/dinosaur_strategies.png" alt="Dinosaur Strategies" id="BLOGGER_PHOTO_ID_5342455824025948898" border="0" /><span style="display: block;" id="formatbar_Buttons"></span></a><a href="http://java.azib.net/2009/dinosaur_strategies.pdf">(see the slides)</a><br /></div><br />I primarily deal with 'application' aspects of software development using Agile practices, so I have a hard time understanding how some Data Professionals can be so behind in their evolution, and not doing some basic things like iterative development, unit tests, continuous integration, etc.<br /><br />Last week I was asked to give a talk on <span style="font-weight: bold;">Database Refactoring</span>. The topic seemed challenging enough and as no Database Professionals cared to lead the topic, I decided to give it a try. 
The result is a motivational speech for both database developers as well as others in the software development process.<br /><br />I have discussed the cultural conflict of database and OOP developers, the problem of refactoring tools available to relational database developers lagging behind, and some solutions to these problems that can help before these tools become available:<br /><br />(1) Development Sandboxes<br />(2) Regression Testing<br />(3) Automatic Changelog, Delta scripts<br />(4) Proper Versioning<br />(5) Continuous integration<br />(6) Teamwork & Cultural Changes<br /><br />Other discussed topics include Refactoring of Stored Code vs Database Schema, Agile Reality, Overspecialization (016n), Database not being under control, Database Smells, Fear of Change, Scenarios, Dealing with Coupling, Dealing with unknown applications, Proper versioning, Continuous Integration using sandboxes, and Delta Scripts (Migrations), which make evolutionary database schema possible.<br /><br />The dinosaurs below are the reminder of my previous keynote available above. 
They come from the very nice <a href="http://www.youtube.com/watch?v=W0FOZ0-VpcU">Dinosaurs Song</a>, available on YouTube, which I have actually played after the keynote itself.<br /><br />Below are full slides of the Database Refactoring talk.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://java.azib.net/2009/db_refactoring.pdf"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 301px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbU8ugl0aYV01u4XQUit4GtuolwLm8XO2tDjWl0ZEjCBn-6JOgRQBS_EXihNiMNPq_B4aWt82MVJThXOXpNCdO3wBenIlN6IbL93S3cfgevJ8Vu8cJgNRDxv5HgYjNRMrJ5N0eelpno30u/s400/db_refactoring.png" alt="" id="BLOGGER_PHOTO_ID_5342454693556211346" border="0" /></a><a href="http://java.azib.net/2009/db_refactoring.pdf">(click for PDF slides)</a><br /></div>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com3tag:blogger.com,1999:blog-6458362026107583183.post-15670642094378027152009-05-17T17:02:00.006+03:002009-05-17T18:21:16.242+03:00Versioning your home directory or documents with Git<a style="font-weight: bold;" href="http://git-scm.com/">Git</a> is a relatively new Version Control System, initially <a href="http://www.youtube.com/watch?v=4XpnKHJAok8">started by Linus Torvalds</a> in order to manage the source code of Linux Kernel.<br /><br />Although <a href="http://www.youtube.com/watch?v=8dhZ9BXQgc4">Randal Schwartz has stated</a> that Git was not designed to version your home directory, it seems that many people are now trying to do so :-)<br /><br />Some people have used <span style="font-weight: bold;">CVS</span> or <span style="font-weight: bold;">Subversion</span> for this purpose in the past, but to my mind, Git is suited better for this task for several reasons:<br /><ul><li>Git is <span style="font-style: italic;">grep-friendly</span> (only stores it's metadata in a single 
.git directory at the root of working copy)</li><li>It is very easy to work with a local repository (just do <span style="font-style: italic;">git init</span> and you're ready)</li><li>Git stores changes very efficiently (even binary files), so not much disk space is wasted, but don't forget to call <span style="font-style: italic;">git gc</span> from time to time</li><li>A Git repository is always available on your computer, even when you are <span style="font-style: italic;">offline</span>, but on the other hand it is very easy to <span style="font-style: italic;">push</span> your changes to a remote repository as well<br /></li></ul>All these things are much worse with CVS, which spams all versioned directories with <span style="font-style: italic;">CVS</span> subdirs and stores each version of binary files fully. Subversion also requires more effort to set up, is less storage-efficient, and puts .svn subdirs everywhere.<br /><br />Having said that, my setup is <span style="font-weight: bold;">ultra-simple</span> compared to others on the net!<br /><br />To start versioning your home directory, just run this in the root of your home:<br /><blockquote>git init<br /></blockquote>This will initialize an empty local Git repository in <span style="font-weight: bold;">~/.git</span><span style="font-weight: bold;">/</span> - this is the location that you can use when doing backups, but otherwise you shouldn't care about it anymore.<br /><br />Then you need to tell Git to track your important files:<br /><blockquote>git add Documents<br />git add bin<br />git add <span style="font-style: italic;">whatever else you want to version</span><br />git commit -m "Adding initial files"<br /></blockquote>Then you can work normally with your tracked files and occasionally commit your changes to the repository with<br /><blockquote>git commit -a -m "<span style="font-style: italic;">description of changes you have done</span>"<br /></blockquote>Note the <span style="font-style: italic;">"-a"</span> above, which means to commit any changes made to any previously tracked files, so you don't have to use <span style="font-style: italic;">git add</span> again. But don't forget to <span style="font-style: italic;">git add</span> any new files you create before committing.<br /><br />Use <span style="font-style: italic;">git status</span> to show which files were changed since your last commit. Unfortunately, it will also list all untracked files in your home directory, so you may need to create a <span style="font-style: italic;">.gitignore</span> file. You can get the initial version of this file using this command:<blockquote>git status | awk '/#/ {sub("/$", ""); print $2}' > .gitignore</blockquote>then edit it and possibly replace some full names partly with '*'. Don't forget to <span style="font-style: italic;">git add</span> and <span style="font-style: italic;">git commit</span> this file as well!<br /><br />That's, basically, it! You may also try some GUI tools provided by git, eg <span style="font-style: italic;">gitk </span>or <span style="font-style: italic;">git gui</span>, to browse your changes and do some changes if you can't remember the commands.<br /><br />Moreover, I have some more ideas on how to make all this more automatic that I am going to try later:<br /><ul><li>Put <span style="font-style: italic;">git commit -a</span> in the user's crontab in order to commit changes automatically, eg daily</li><li>Create a couple of nautilus scripts (located in ~/.gnome2/nautilus-scripts) to make adding, committing and other actions available directly from the Nautilus file manager in Gnome.</li></ul>Happy versioning! 
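The whole workflow can be rehearsed safely in a throwaway directory before doing it in your real home (the identity flags are only there so the example commits without touching your global git config):

```shell
# Rehearse home-directory versioning in a temp dir first.
dir=$(mktemp -d); cd "$dir"
git init -q
mkdir Documents
echo "draft" > Documents/notes.txt
git add Documents
git -c user.name=test -c user.email=test@example.com commit -qm "Adding initial files"
echo "more" >> Documents/notes.txt             # edit an already-tracked file
git -c user.name=test -c user.email=test@example.com commit -qam "daily snapshot"
git log --oneline | wc -l                      # two commits recorded so far
```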
And read the Git tutorial, either with <span style="font-style: italic;">man gittutorial</span> or <a href="http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html">on the official site</a>.<br />Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com55tag:blogger.com,1999:blog-6458362026107583183.post-37148410528329180652009-04-26T15:00:00.004+03:002009-04-26T15:50:49.917+03:00Excessive memory usage by Oracle driver solvedOn my day job I deal with Internet banking. The <a href="http://www.swedbank.ee/">Internet bank</a> is a relatively large and high-load Java/Spring/Hibernate web application, which uses Oracle databases.<br /><br />During our recent transition from a centralized data accessor (<a href="http://vjdbc.sourceforge.net/">VJDBC</a>) to local JDBC connection pools to reduce data roundtrip times, we have started having issues with memory usage in our application servers: some requests started to allocate tens to hundreds of megabytes of memory. While the Garbage Collector was successfully reclaiming all this memory afterwards (no memory leaks), it still posed a problem of high peak memory usage as well as too frequent collections, also affecting the overall performance.<br /><br />While profiling memory allocations with JProfiler, I have discovered that <span style="font-weight: bold;">OracleStatement.prepareAccessors()</span> is responsible for these monstrous allocations (up to 600 Mb at once, mostly in giant <span style="font-style: italic;">char</span> or <span style="font-style: italic;">byte</span> arrays). Google has pointed to this <a href="http://blog.palantirtech.com/2007/02/23/oracles-jdbc-driver-garbage/">nice article on reducing the default prefetch size</a>, describing a very similar situation; however, those guys had problems with queries returning LOBs. 
We haven't used any LOBs in our problematic queries and haven't modified the <span style="font-weight: bold;">defaultRowPrefetch</span> connection property knowingly.<br /><br />Further investigation led to the way we were using <span style="font-weight: bold;">Hibernate</span>: for some queries that are expected to return large result sets, we were using the <span style="font-weight: bold;">Query.setFetchSize()</span> or <span style="font-weight: bold;">Criteria.setFetchSize()</span> methods with rather high values (eg 5000). This seemed reasonable, because we were also using the <span style="font-style: italic;">setMaxResults()</span> method with the same value to reduce the maximum length of the returned <span style="font-style: italic;">ResultSet</span>. However, after doing some upgrades of Java, Hibernate, and the Oracle driver, this started having these memory allocation side-effects. It seems that now Hibernate translates this <span style="font-style: italic;">fetchSize</span> parameter directly to OracleStatement's <span style="font-style: italic;">rowPrefetch</span> value, forcing it to instantly allocate a <span style="font-style: italic;">rowPrefetch</span><span style="font-style: italic;">*expectedRowSize</span> sized array even before it runs the actual query. This array can be ridiculously large, even if the actual query returns only a few rows afterwards. Later investigation showed that having the <span style="font-weight: bold;">batch-size</span> attribute in the Hibernate mapping files (hbm.xml) has exactly the same effect and also results in giant pre-allocations.<br /><br />As a result, we had to review all <span style="font-weight: bold;">batch-size</span> and <span style="font-weight: bold;">setFetchSize()</span> values that we were using with our Hibernate queries and mappings, in most cases reducing them significantly. 
This would reduce the worst-case performance of some long queries (they would require more roundtrips to the database), but would also reduce the overall amount of garbage accumulating in the heap and thus reduce the frequency of garbage collections, having a positive impact on CPU load. Shorter results would run equally fast, so it actually makes sense to rely on average statistics of the actual responses when choosing optimal <span style="font-style: italic;">rowPrefetch </span>values. The default value is 10, which is hardcoded in the Oracle driver.<br /><br />For longer queries, the above-mentioned article proposed the idea of geometrically increasing the <span style="font-style: italic;">rowPrefetch </span>(setting it twice as big for each subsequent fetch manually). This is a nice idea, but I wonder why the Oracle driver can't do this automatically? This is how Java collections behave when they resize themselves. I haven't tried doing this with Hibernate yet, but I think it should be possible, especially if you use <span style="font-style: italic;">Query.scroll()</span> instead of <span style="font-style: italic;">Query.list()</span>.Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com210tag:blogger.com,1999:blog-6458362026107583183.post-16221767491615534212009-03-08T19:20:00.004+02:002009-03-08T20:35:13.789+02:00Exchange calendar sync to iPhone through EvolutionFinally, I have found an easy and transparent way of syncing my corporate calendar (on the evil MS Exchange server, of course) to my iPhone over-the-air, without involving any manual work. 
The same recipe can actually work for other mobile phones as well, read on!<br /><br />Current syncing path is as follows:<br /><span style="font-weight: bold;">Exchange → Evolution → </span>(gcaldaemon)<span style="font-weight: bold;"> → Google Calendar → </span>(Google Sync)<span style="font-weight: bold;"> </span><span style="font-weight: bold;">→ </span><span style="font-weight: bold;">iPhone</span><br /><br />If it looks long, don't worry - it is not so difficult in reality.<br /><ol><li>Exchange - this is where the calendar is stored, which very often is located behind a firewall, where it cannot be talked to directly from a mobile phone.<br /></li><li>Exchange is accessed using the Evolution Exchange connector (poor Outlook users may find <a href="http://www.google.com/support/calendar/bin/answer.py?hl=en&answer=89955">this helpful</a>).</li><li><a href="http://gcaldaemon.sourceforge.net/">gcaldaemon</a> is an open-source tool for doing various interesting tricks with Google Calendar, including syncing it with Evolution; see below for details.</li><li><a href="http://calendar.google.com/">Google Calendar</a> is a full-featured web-based calendar, where you can store several calendars. It is especially convenient together with GMail, but doesn't require it.</li><li><a href="http://www.google.com/sync">Google Sync</a> is a new service from Google that can sync your Google Calendar (and contacts from GMail) to various programs and mobile devices.</li><li>iPhone is where you get both your personal and corporate calendars, always with you :-)</li></ol>So, actually the keyword here is Google Calendar. As it is quite popular, it is already supported by a lot of software, so you can use it as a middle man in various syncing situations, not only the one described here.<br /><br />I have started testing the <a href="http://www.google.com/mobile/apple/sync.html">Google Sync from Google Calendar to my iPhone</a>. 
Google had a nice idea to implement the Exchange ActiveSync protocol, which is already supported by iPhone (probably there are lots of Google employees using iPhones). Now you just need to set up an Exchange account in your phone and configure it to talk to <span style="font-style: italic;">m.google.com</span> instead of an Exchange server. This is another brilliant implementation after GMail started talking <span style="font-style: italic;">IMAP</span> natively. Follow the instructions on the Google website linked above. As a bonus, you will get 2-way syncing of GMail contacts as well if you want. And everything will work via <span style="font-style: italic;">Push</span>, so you will get almost instant updates when you change something on either the phone or on the web. <span style="font-style: italic;"><br /><br />Push</span> syncing works on the iPhone by keeping <span style="font-style: italic;">HTTP</span> connections open for as long as possible by sending a request and waiting for a response as long as your mobile operator's infrastructure permits (can get up to several hours). During this time no traffic is moving between iPhone and Google, so unless something changes, there is no need to pay for any data, which is actually better than polling the server every 10 minutes or so. When changes are available, the server stops blocking the connection and immediately <span style="font-style: italic;">pushes</span> data to the phone, hence the name.<br /><br />Tip: to select which calendars to sync with your phone, navigate to <a href="http://www.google.com/sync">m.google.com/sync</a> with your phone and select what is needed. I have at least 2 calendars there: a personal and a corporate one - you can conveniently see appointments in different colors.<br /><br />Now that this works perfectly, all you need to do is get your Exchange (or any other) events to Google Calendar. This is very easy using the <a href="http://gcaldaemon.sourceforge.net/usage.html">gcaldaemon</a>. 
See their website for lots of usage scenarios. We are currently interested in file-based synchronization with Evolution. Note that Evolution now has native Google Calendar support as well, but that only lets you view your existing Google Calendar in Evolution, not sync your corporate Exchange calendar to Google.<br /><br />Evolution, while talking to Exchange, caches your calendar data in a file called <span style="font-style: italic;">cache.ics</span>. You can find it in:<br /><blockquote>~/.evolution/exchange/exchange___<span style="font-style: italic;">username</span>;auth=Basic@<span style="font-style: italic;">server</span>_;personal_Calendar/cache.ics<br /></blockquote>substitute your own <span style="font-style: italic;">username</span> and <span style="font-style: italic;">server</span> there.<br /><br />All you need is to <a href="http://gcaldaemon.sourceforge.net/usage16.html#top">configure <span style="font-style: italic;">gcaldaemon</span> to monitor this file</a> and send updates to Google Calendar, totally automatically. This way you will get one-way sync from Exchange to Google, but this should be enough to not miss your all-important meetings at work, because then your phone will alert you wherever you are. I run <span style="font-style: italic;">gcaldaemon</span> right after Evolution from the same launcher, so I don't have to worry about syncing anymore. 
For that, I have created the <span style="font-style: italic;">~/bin/evolution</span> file (local <span style="font-style: italic;">bin</span> has priority in PATH, at least on Ubuntu), and this script on execution first runs <span style="font-style: italic;">/usr/bin/evolution</span>, sleeps several seconds and then starts <span style="font-style: italic;">gcaldaemon.</span><br /><br />Google Sync actually supports syncing with many mobile devices, including the <span style="font-weight: bold;">iPhone</span>, <span style="font-weight: bold;">Android</span> phones, Nokia Series60 phones with <span style="font-weight: bold;">Symbian</span> (contacts only for now), <span style="font-weight: bold;">Blackberry</span>, and the awkward <span style="font-weight: bold;">Windows Mobile</span>. But even <span style="font-weight: bold;">if you cannot directly sync</span> the calendar to your phone, you can ask Google to alert you with <span style="font-style: italic; font-weight: bold;">SMS</span> before each meeting starts, which is almost as good. <span style="font-style: italic;">gcaldaemon </span>will ask Google to do this by default for each event it syncs, provided that Google knows your mobile number. 
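The <span style="font-style: italic;">~/bin/evolution</span> wrapper described above can be sketched like this. The 10-second delay and the gcaldaemon start command are my assumptions (the post only says "sleeps several seconds"), so adjust them to your installation:

```shell
#!/bin/sh
# Install a ~/bin/evolution wrapper that shadows /usr/bin/evolution
# (on Ubuntu, ~/bin comes first in PATH after the next login).
mkdir -p "$HOME/bin"
cat > "$HOME/bin/evolution" <<'EOF'
#!/bin/sh
/usr/bin/evolution "$@" &                 # start the real Evolution
sleep 10                                  # let it settle (assumed delay)
exec "$HOME/gcaldaemon/bin/sync-now.sh"   # hypothetical gcaldaemon launcher
EOF
chmod a+x "$HOME/bin/evolution"
sh -n "$HOME/bin/evolution" && echo "wrapper installed"
```

Because the heredoc is quoted, `$@` and `$HOME` are written literally and only expand when the wrapper itself runs.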
Give this a try - lots of operators are supported worldwide, it's not only the US anymore.<br /><br />The only thing that worries me now is that this is just another step towards Google taking over the World :-)Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com47tag:blogger.com,1999:blog-6458362026107583183.post-88996354395133369972008-12-29T23:55:00.005+02:002008-12-30T00:58:17.745+02:00Human Interface Guidelines<div style="text-align: center;"><br /></div>This is a recent talk I gave at an internal Training Day for developers of <a href="http://www.swedbank.ee/">Swedbank</a>.<br /><br />Although Swedbank Estonia (formerly Hansabank) has the best Internet bank in the region <a href="http://www.gfmag.com/index.php?idPage=743">according to Global Finance Magazine</a>, we still strive to develop our usability and user interface skills.<br /><br />The talk was well received and was accompanied by some very nice slides, outlining the history of user interfaces, publicly available HIG documents, usability factors, common design principles and other important points. 
Therefore it is worth reading for getting a compact introduction to the topic.<br /><br /><a href="http://java.azib.net/2008/human_interfaces.pdf">Human Interface Guidelines talk slides (PDF)</a><br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7N4qzphrlkmLJ5DCmVukCVlOPonjovhcn3KxZVk3cNzMMU-M9O_JVX3WAxED5ZbKd1KA9AO1e3svmR3CAqVZFVHBaOtW1bFpGhWA9__KNQf1PoS_bj-UDnC82vlOInv9f9bYEXQ_tmvYn/s1600-h/usability.png"><img style="cursor: pointer; width: 400px; height: 302px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7N4qzphrlkmLJ5DCmVukCVlOPonjovhcn3KxZVk3cNzMMU-M9O_JVX3WAxED5ZbKd1KA9AO1e3svmR3CAqVZFVHBaOtW1bFpGhWA9__KNQf1PoS_bj-UDnC82vlOInv9f9bYEXQ_tmvYn/s400/usability.png" alt="" id="BLOGGER_PHOTO_ID_5285349422952414178" border="0" /></a><br /></div><br />This time I tried using a minimalistic style for the slides without any (possibly) distracting backgrounds, inspired by <a href="http://www.nealford.com/mypastconferences.htm">Neal Ford</a>'s excellent talk "Ancient Philosophers & Blowhard Jamborees" from the <a href="http://www.agile2008.org/">Agile 2008 conference</a>. Neal's slides are mostly black with some nice stock pictures and maybe a few words; they help him <span style="font-weight: bold;">talk</span> and not just read what is written on the screen. This helps the listeners to concentrate on the performance of the speaker instead of being distracted by reading, and pictures help to visualise the concept, greatly increasing the influence on the audience. Of course, this kind of presentation needs lots of skill and rehearsal from the speaker.<br /><br />I haven't got as far with eliminating text (read: waste) from the slides, but to my mind this is still a great application of the simplicity principle I have talked about. 
Another principle I also tried to use in the slides (apart from the most important one - consistency) is aesthetics - people like things that are visually appealing, so consistent style and reasonable animations can make a lot of sense. And please don't use these bundled presentation templates ever again :-)<br /><br />Anyway, I hope that my slides will be useful and can at least spark some interest in researching this ultra-important topic further. There are too many crappy user interfaces out there, so developers, keep this in mind!Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com3tag:blogger.com,1999:blog-6458362026107583183.post-19527877113507498502008-12-29T22:49:00.008+02:002008-12-30T00:58:02.949+02:00Is the scanning of computer networks dangerous?Looking through some presentations I have done during the last year, I have come upon a talk I gave at the<span style="text-decoration: underline;"> </span><a href="http://cs.ioc.ee/balt2008/">Baltic DB&IS 2008 Conference</a> in <a href="http://www.tourism.tallinn.ee/eng">Tallinn, Estonia</a>.<br /><br />This bi-annual academic conference usually takes place in either Estonia, Latvia or Lithuania, but often involves speakers from other countries as well. This year it was convenient for me to participate and talk a little about my open-source networking tool <a href="http://www.azib.net/">Angry IP Scanner</a>.<br /><br />The talk was quite successful, although not very academic, but it makes sense to post the slides to give some interesting information on Angry IP Scanner.<br /><br /><div style="text-align: left;"><a style="font-weight: bold;" href="http://java.azib.net/2008/ipscan_balt2008.pdf">Baltic DB&IS 2008 - Is the scanning of computer networks dangerous? 
(PDF slides)</a><br /></div><br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://java.azib.net/2008/ipscan_balt2008.pdf"><img style="cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvnsc_miTHeFq64qhN2vWWQbivxxnGWV-_5EWqlkshRh8CMnGhMHE1yWIyO5eEeC0K4UTWKdKB5DiimczvYOgxJU1pFpVzYxt5c8Ovpl_wLVbm5xItlTCVmzU4NU9vrDP1t7BHXjYRpYt-/s400/ipscan_db&is.png" alt="" id="BLOGGER_PHOTO_ID_5285322267828920834" border="0" /></a><br /><br />(a nice picture of Tallinn's silhouette is a bonus)<br /><br /></div>Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com10tag:blogger.com,1999:blog-6458362026107583183.post-19120301043723497382008-11-26T22:49:00.007+02:002008-11-30T17:54:54.050+02:00Use LightZone from F-Spot as external editorIn a <a href="http://blog.azib.net/2008/11/opening-files-with-lightzone-from.html">previous post</a> I have shown how to teach <a href="http://www.lightcrafts.com/">LightZone</a>, a non-destructive photo editor, to open files passed from the command-line (for some strange reason it doesn't do it out-of-the-box).<br /><br />Now, I want to be able to use LightZone as an external editor from <a href="http://www.f-spot.org/">F-Spot</a>, which I use for my photo workflow.<br /><br />F-Spot has a convenient <i>Open With</i> menu when right-clicking on a photo; we just need to add LightZone to this menu. After some research, it appears that F-Spot uses the standard <a href="http://standards.freedesktop.org/desktop-entry-spec/latest/">desktop entry specification</a> files to populate this menu. These files can be located either in <tt>/usr/share/applications</tt> or in the user's home, <tt>~/.local/share/applications</tt>. 
You can use either location, but I prefer the latter one, because I have unpacked LightZone into my home as well.<br /><br />Here is the working <tt>lightzone.desktop</tt> file:<blockquote><pre>[Desktop Entry]<br />Version=1.0<br />Type=Application<br />Name=LightZone Photo Editor<br />Exec=LightZone %u<br />TryExec=LightZone<br />Icon=/home/USERNAME/LightZone/LightZone_32.png<br />Terminal=false<br />Categories=Graphics;2DGraphics;Photography;RasterGraphics;GTK;<br />StartupNotify=true<br />MimeType=image/tiff;image/jpeg;image/x-canon-cr2;image/x-canon-crw;image/x-nikon-nef;image/x-pentax-pef;</pre></blockquote><br />This file expects that you have followed the <a href="http://blog.azib.net/2008/11/opening-files-with-lightzone-from.html">previous post</a> and already have <tt>LightZone</tt> in the <tt>PATH</tt>, which accepts a filename to open on the command-line.<br /><ul><li><b>Exec</b> - this line specifies what command to run; <tt>%u</tt> means to pass the selected file's URL on the command-line. Through trial and error, I found that F-Spot is only able to pass URLs like this; specifying <tt>%f</tt> doesn't work with F-Spot. But if you look at the LightZoneOpener.java source, you will see that it supports URLs as well as filenames.</li><li><b>Icon</b> - change this one to the full path of LightZone's icon; it is supplied in the original archive.</li><li><b>Categories</b> - this specifies where LightZone will appear in the <i>Applications</i> menu.</li><li><b>MimeType</b> - here you must list all image mime types that you want LightZone to open. This is especially important for RAW files. As I own a Canon camera, I have most of my photos in CR2 format, so I need to be sure that <b>image/x-canon-cr2</b> is in the list. I have also specified CRW, NEF and PEF mime types for older Canon, Nikon and Pentax cameras, respectively. These mime types are already registered in Ubuntu Intrepid (not sure about other distributions). 
Here is some info on <a href="http://nathanrobertson.blogspot.com/2008/04/fix-f-spot-opening-gimp-for-raw-files.html">how to register new mime types in Gnome</a>, in case your camera's format is not registered yet.<br /></li></ul><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4Ihhnc4u8LMZbLAE-6rXxvImmcKkdy9qd5UYQy0aohHncVv0J200VCgjgEtzzUkK-bTbSmEKPWQgTdUZEfFlLZ_Sf9xDz1MIHreOpLyMtVxcHfcwyglQn3IH8SlTczrE5MiIxpOsVL3Go/s1600-h/OpenWithLightZone.png"><img style="cursor: pointer; width: 400px; height: 295px; text-align: center;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4Ihhnc4u8LMZbLAE-6rXxvImmcKkdy9qd5UYQy0aohHncVv0J200VCgjgEtzzUkK-bTbSmEKPWQgTdUZEfFlLZ_Sf9xDz1MIHreOpLyMtVxcHfcwyglQn3IH8SlTczrE5MiIxpOsVL3Go/s400/OpenWithLightZone.png" alt="" id="BLOGGER_PHOTO_ID_5273082005125315474" border="0" /></a><br /></div><br />After choosing <i>Open With->LightZone</i>, F-Spot will ask whether to create a new version for the file. Select 'No' - this won't work with RAW files and LightZone anyway, because LightZone saves files automatically with the <i>_lzn.jpg</i> suffix and F-Spot doesn't know about it.<br /><br />Getting this right requires patching F-Spot (I am going to do that later). For now, you will have to import the saved file manually if you want it to appear in F-Spot.Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com183tag:blogger.com,1999:blog-6458362026107583183.post-30135842319654987352008-11-24T19:56:00.011+02:002009-09-25T00:15:39.971+03:00Enable HTTP proxy in Gnome automaticallyI have a laptop with Linux (currently Ubuntu) which I use both at home and at work. 
The corporate security policy requires everyone to use the <b>HTTP proxy server</b> with authentication for web access, so when I came to work I had to manually enable it, and then disable it again at home - not very convenient.<br /><br />As a side note, <i>Firefox 3</i>+ is great at respecting the global or system-wide proxy configuration (<i>System->Preferences->Network proxy</i> or <i>gnome-network-preferences</i>), and <i>gnome-terminal</i> nicely sets the <b>http_proxy</b> environment variable automatically when a proxy is configured, making most command-line tools respect the global proxy setting as well, which is very cool.<br /><br />So, until network profiles arrive in <b>Gnome</b> or <b>NetworkManager</b> (I have seen some related commits in Gnome SVN), I still want to enable the proxy automatically depending on my location. Thankfully, <b>NetworkManager</b> supports execution of scripts when it brings interfaces up or down, so this is not difficult at all.<br /><br />At least on Ubuntu, <b>NetworkManager</b> executes the scripts that are located in <tt>/etc/NetworkManager/dispatcher.d/</tt> when it brings interfaces up. Inside the script I can detect whether I am at work by checking the domain name in /etc/resolv.conf provided by the corporate <span style="font-style: italic;">DHCP server</span>, or the beginning of the assigned <span style="font-style: italic;">IP address</span> if the domain can't be used for any reason.<br /><br />OK, here is the working script for <b>Ubuntu Karmic, Jaunty </b>and<b> Intrepid</b> (Gnome 2.24+), see notes below for older versions. 
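To show the location-detection idea in isolation before the full script, here is a stripped-down, purely illustrative sketch; <tt>example.com</tt> stands in for the corporate domain, and a sample resolv.conf is inlined so the logic is visible on its own:

```shell
#!/bin/sh
# NetworkManager invokes dispatcher scripts as: <script> <interface> <up|down>.
# This sketch only shows the domain detection; the full script does much more.
WORK_DOMAIN="example.com"            # hypothetical domain handed out at work

# The real script reads /etc/resolv.conf; a sample is inlined here:
RESOLV_CONF="domain example.com
nameserver 10.0.0.1"

DOMAIN=$(printf '%s\n' "$RESOLV_CONF" | sed -n 's/^domain  *//p')
ACTION="${2:-up}"                    # NetworkManager passes up/down as $2

if [ "$ACTION" = "up" ] && [ "$DOMAIN" = "$WORK_DOMAIN" ]; then
  echo "at work ($DOMAIN): enable proxy"   # full script sets gconf keys here
else
  echo "not at work: disable proxy"
fi
```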
I have this script in <tt>/etc/NetworkManager/dispatcher.d/02proxy</tt>, because <tt>01ifupdown</tt> already exists there.<br /><br />It is an updated version, attempting to make the script suitable for more general use, eg in our company we now provide it in a <span style="font-style: italic;">.deb</span> package for all Ubuntu-based laptops.<br /><br /><blockquote><pre>#!/bin/bash<br /># The script for automatically setting the proxy server depending on location.<br /># Put it under <span style="font-weight: bold;">/etc/NetworkManager/dispatcher.d/02proxy</span><br /># Create also the <span style="font-weight: bold;">/etc/NetworkManager/proxy_domains.conf</span>, specifying the mapping of<br /># DHCP domains to proxy server addresses, eg "example.com proxy.example.com:3128"<br /># Written by Anton Keks<br /><br />PROXY_DOMAINS="/etc/NetworkManager/proxy_domains.conf"<br /><br /># provided by NetworkManager<br />INTERFACE=$1<br />COMMAND=$2<br /><br />function gconf() {<br /> sudo -E -u $USER gconftool-2 "$@"<br />}<br /><br />function saveUserConfFile() {<br /> echo "DOMAIN_USER=$DOMAIN_USER" > $CONF_FILE;<br /> echo "DOMAIN_PWD_BASE64="`echo $DOMAIN_PWD | base64` >> $CONF_FILE;<br /> echo "PROXY_HOST=$PROXY_HOST" >> $CONF_FILE;<br /> echo "PROXY_PORT=$PROXY_PORT" >> $CONF_FILE;<br />}<br /><br />function enableProxy() {<br /> PROXY_HOST=`cat $PROXY_DOMAINS | grep $DOMAIN | sed 's/.* \+//' | sed 's/:.*//'`<br /> PROXY_PORT=`cat $PROXY_DOMAINS | grep $DOMAIN | sed 's/.*://'`<br /><br /> # check if authentication is required<br /> http_proxy=http://$PROXY_HOST:$PROXY_PORT/ wget com 2>&1 | grep "ERROR 407"<br /> if [ $? -eq 0 ]; then<br /> AUTH_REQUIRED="true"<br /> CONF_FILE=$HOME/.proxy:$DOMAIN<br /><br /> if [ ! 
-e $CONF_FILE ]; then<br /> DOMAIN_USER=`sudo -E -u $USER zenity --entry --text "Login name for domain $DOMAIN"`<br /> DOMAIN_PWD=`sudo -E -u $USER zenity --entry --text "Password for domain $DOMAIN" --hide-text`<br /> saveUserConfFile<br /> fi<br /><br /> # load user proxy settings<br /> . $CONF_FILE<br /> # decode password<br /> DOMAIN_PWD=`echo $DOMAIN_PWD_BASE64 | base64 -d`<br /> <br /> # get Kerberos ticket (if it's configured)<br /> if echo $DOMAIN_PWD | sudo -E -u $USER kinit $DOMAIN_USER; then<br /> KLIST_INFO=`sudo -E -u $USER klist | fgrep Default`<br /> sudo -E -u $USER notify-send -i gtk-info "Domain login" "Kerberos ticket retrieved successfully: $KLIST_INFO"<br /> fi<br /> else <br /> AUTH_REQUIRED="false"<br /> fi<br /><br /> # setup proxy<br /> gconf --type string --set /system/proxy/mode "manual"<br /> gconf --type bool --set /system/http_proxy/use_http_proxy "true"<br /> gconf --type string --set /system/http_proxy/host $PROXY_HOST<br /> gconf --type int --set /system/http_proxy/port $PROXY_PORT<br /> gconf --type bool --set /system/http_proxy/use_same_proxy "true" <br /> gconf --type bool --set /system/http_proxy/use_authentication $AUTH_REQUIRED<br /> gconf --type string --set /system/http_proxy/authentication_user $DOMAIN_USER<br /> gconf --type string --set /system/http_proxy/authentication_password $DOMAIN_PWD<br /><br /> # notify<br /> sudo -E -u $USER notify-send -i gtk-info "Proxy configuration" "Your proxy settings have been set to: $DOMAIN_USER@$PROXY_HOST:$PROXY_PORT"<br />}<br /><br />function disableProxy() {<br /> gconf --type string --set /system/proxy/mode "none"<br /> gconf --type bool --set /system/http_proxy/use_http_proxy "false"<br /> gconf --type string --set /system/http_proxy/host ""<br /> gconf --type bool --set /system/http_proxy/use_authentication "false"<br /> gconf --type string --set /system/http_proxy/authentication_user ""<br /> gconf --type string --set /system/http_proxy/authentication_password ""<br />}<br /><br 
/># wait for gnome-settings-daemon to appear, ie until user logs in<br />for i in {1..100}; do<br /> if [ ! `pidof gnome-settings-daemon` ]; then<br /> sleep 5;<br /> echo "Waiting for gnome-settings-daemon to appear..."<br /> else<br /> break <br /> fi<br />done<br />if [ ! `pidof gnome-settings-daemon` ]; then<br /> echo "gnome-settings-daemon is not running. exiting."<br /> exit 1<br />fi<br /><br /># steal environment from the current non-root user<br />XENV=`xargs -n 1 -0 echo </proc/$(pidof gnome-settings-daemon)/environ`<br /># init DBUS connection string in order to reach gconfd<br />eval export `echo "$XENV" | fgrep DBUS_SESSION_BUS_ADDRESS=`<br />eval export `echo "$XENV" | fgrep USER=`<br />eval export `echo "$XENV" | fgrep HOME=`<br />eval export `echo "$XENV" | fgrep DISPLAY=`<br />eval export `echo "$XENV" | fgrep XAUTHORITY=`<br /><br />if [ $COMMAND != 'up' ]; then<br /> disableProxy;<br /> exit<br />fi<br /><br />DOMAIN=`cat /etc/resolv.conf | grep domain | sed 's/domain \+//'`<br /># check if we need to set proxy settings for this domain<br />if [[ -e $PROXY_DOMAINS && ! `cat $PROXY_DOMAINS | grep $DOMAIN` ]]; then<br /> echo "Proxy is not required for domain $DOMAIN"<br /> disableProxy<br />else<br /> echo "Setting proxy for domain $DOMAIN"<br /> enableProxy<br />fi<br /></pre></blockquote><br />Don't forget to:<ul><li>give this script execute permissions</li><li>have <i>gconftool-2</i>, <span style="font-style: italic;">zenity</span> and <span style="font-style: italic;">kinit</span> installed (<span style="font-weight: bold;">gconf2</span>, <span style="font-weight: bold;">zenity</span>, <span style="font-weight: bold;">krb5-user</span> packages in Ubuntu). Install <i>gconf-editor</i> as well for a graphical config editor.</li><li>create <span style="font-weight: bold;">/etc/NetworkManager/proxy_domains.conf</span>, specifying the mapping of DHCP domains to proxy server addresses, eg "example.com proxy.example.com:3128". 
Specify each domain on a new line.<br /></li></ul>The script doesn't need you to hardcode your username and the proxy password anymore - it will ask you for these values on first run and then store them in the <span style="font-style: italic;">$HOME/.proxy:$DOMAIN</span> file, so the script is now perfectly usable on multiuser machines and doesn't bug you in case of 'unknown' domains.<br /><br />For more functionality, it even tries to retrieve the Kerberos ticket for you, if Kerberos is configured properly in <span style="font-style: italic;">/etc/krb5.conf</span>. You can check if this is the case by running this on the command-line:<br /><blockquote><pre>kinit your-user-name; klist</pre></blockquote>This works very well for me and saves several mouse clicks every morning :-)<br /><br />Note to <b>Gnome 2.22</b> and older users (<b>Ubuntu Hardy</b>, etc): I had this script initially done in <b>Hardy</b>, but after upgrading to <b>Intrepid (Gnome 2.24)</b> it stopped working. The reason was that starting from <b>Gnome 2.24</b>, the gconf setting of <tt>/system/http_proxy/use_http_proxy</tt> is not the primary one and has been replaced by <tt>/system/proxy/mode</tt>, which takes one of three values: 'auto', 'manual' and 'none'. In <b>Intrepid</b>, if you set only <tt>/system/http_proxy/use_http_proxy</tt> as before, it has no effect; you need to set <tt>/system/proxy/mode</tt> to <i>manual</i>, and this will set the value of the old setting to 'true' automatically.<br /><br />Another thing introduced with <b>Intrepid</b> is the need to set the <tt>DBUS_SESSION_BUS_ADDRESS</tt> environment variable (the script steals it from the <tt>gnome-settings-daemon</tt> process) - this is because gconfd has switched to DBUS from CORBA for a communication protocol. 
If you have older Gnome, then you may omit these 2 lines involving DBUS.<br /><br />Enjoy!Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com200tag:blogger.com,1999:blog-6458362026107583183.post-17853637091574498732008-11-12T23:29:00.005+02:002008-11-26T22:44:19.014+02:00Opening files with LightZone from command-line<a href="http://www.lightcrafts.com/">LightZone</a> is a very useful commercial photo editor with some unique features like non-destructive and layer-based editing. To my mind, the developers have taken a very clever approach of saving resulting edits inside smaller-size JPEG files (thumbnails), so that any program can be used for previewing the resulting image, but opening the file in LightZone will load the original image and show all the edits again with the possibility to make any changes and export a full-resolution image. The cool thing here is that edited files are very small, especially when compared to 10Mb+ source RAW photos, contain all the editing history and can be previewed quickly with any software. And all this runs on Linux (thanks to Java - write once, run almost anywhere).<br /><br />The only problem with LightZone (at least the Linux version) is that it doesn't accept filenames from the command-line! You have to start the program and select the file manually using the embedded file browser. 
Of course, this is not an option if you want to run LightZone from another application as an external editor, e.g. from <a href="http://www.f-spot.org/">F-Spot</a> (more on this in a later post).<br /><br />To make a long story short, I have written a small Java program that takes a filename on the command-line and then modifies the LightZone preferences, so that the next time you run LightZone it will directly open the specified image.<br /><br />Here it is:<blockquote><pre>import java.io.*;<br />import java.net.*;<br />import java.util.prefs.*;<br /><br />/**<br /> * LightZoneOpener - will modify LightZone preferences to open the <br /> * specified file (image) on next startup.<br /> * This is useful to force LightZone to open a particular file from <br /> * the command-line; just run this code before starting LightZone. <br /> *<br /> * @author Anton Keks<br /> */<br />public class LightZoneOpener {<br /><br /> public static void main(String[] args) throws Exception {<br /> if (args.length != 1) {<br /> System.err.println("Please specify filename to open in LightZone");<br /> System.exit(1);<br /> }<br /> String filename = args[0];<br /> if (filename.startsWith("file:"))<br /> filename = new URI(filename).getPath();<br /> File file = new File(filename).getCanonicalFile();<br /> if (!file.exists()) {<br /> System.err.println(file + " doesn't exist!");<br /> System.exit(2);<br /> }<br /> File fileDir = file.getParentFile();<br /> <br /> // set image folder as current one<br /> Preferences folderPrefs = Preferences.userRoot().node("com/lightcrafts/ui/browser/folders");<br /> int i = 0; File dir = file;<br /> while ((dir = dir.getParentFile()) != null) {<br /> folderPrefs.put("BrowserTreePath" + i++, dir.getName().isEmpty() ? 
"/" : dir.getName());<br /> }<br /> folderPrefs.remove("BrowserTreePath" + i);<br /> <br /> // set selected image in the current folder<br /> Preferences appPrefs = Preferences.userRoot().node("com/lightcrafts/app");<br /> appPrefs.put("BrowserSelectionMemory" + fileDir.getPath().hashCode(), file.getPath());<br /> // tell LightZone that last startup was OK just in case<br /> appPrefs.put("StartupSuccessful", "true");<br /> <br /> System.out.println("LightZone is now ready to open " + file + " on next start");<br /> }<br />}</pre></blockquote><br />Here is what to do:<br /><ol><li>save it to LightZoneOpener.java<br /></li><li>compile with <span style="color: rgb(153, 153, 153);">javac LightZoneOpener.java</span></li><li>run with <span style="color: rgb(153, 153, 153);">java LightZoneOpener filename</span></li></ol><br />Then you can create a small script that will automate this for you (save it to ~/bin/LightZone):<blockquote><pre>#!/bin/bash<br />java -cp ~/bin LightZoneOpener "$@"<br />~/LightZone/LightZone</pre></blockquote><br />This assumes that you have extracted LightZone to <tt>~/LightZone</tt> (in your home dir) and have the following two files in the <tt>~/bin</tt> dir: the compiled <tt>LightZoneOpener.class</tt> and the script file LightZone (don't forget to set the execute permission with <tt>chmod a+x ~/bin/LightZone</tt>).<br /><br /><a href="http://srv2.azib.net/~anton/LightZoneOpener.tar">Here is all this pre-compiled</a>. Just extract the file directly into your home directory and it will put all the needed files into the bin directory. After your next login, your local bin will be in $PATH, so you will be able to use it.<br /><br />Now you can run <tt>LightZone filename</tt> on the command-line in Linux (or using <i>Alt+F2</i>)! 
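If you want to see what the opener actually wrote, note that on Linux <tt>java.util.prefs</tt> stores user preference nodes as <tt>prefs.xml</tt> files under <tt>~/.java/.userPrefs</tt> (so the node used above lands in <tt>~/.java/.userPrefs/com/lightcrafts/app/prefs.xml</tt>). The sketch below only simulates that layout under <tt>/tmp</tt> for illustration:

```shell
# Simulate the directory layout java.util.prefs uses on Linux: each
# preferences node becomes a directory containing a prefs.xml file.
demo=/tmp/userPrefs-demo/com/lightcrafts/app
mkdir -p "$demo"
cat > "$demo/prefs.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<map>
  <entry key="StartupSuccessful" value="true"/>
</map>
EOF
# A quick check that the flag LightZoneOpener sets is present:
grep -c 'StartupSuccessful' "$demo/prefs.xml"
```

On a real system you would grep the file under <tt>~/.java/.userPrefs</tt> after running LightZoneOpener.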
Have fun!Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com28tag:blogger.com,1999:blog-6458362026107583183.post-91609265611058860722007-06-24T17:17:00.001+03:002008-11-25T23:39:53.456+02:00Anti-antivirus, someone?Many years ago, in the good old DOS days, I thought that antivirus software was great. In those days it was common for antiviruses to start on system startup and scan the drives (thankfully, the drives weren't that large).<br /><br />Back then, most viruses could be considered great works - most of them were technically difficult to write and introduced very interesting infection and stealth techniques. However, after Windows OSs and the Internet became popular, a whole lot of new ways to break into computers appeared. These times brought trojans and worms to the attention of the public, which have mostly overshadowed viruses. Now we receive a lot of email worms written by script-kiddies and call them 'viruses'.<br /><br />Of course, as <a href="http://en.wikipedia.org/wiki/Malware">malware</a> transformed, antivirus software needed to follow: it started to detect a lot more things that can be potentially harmful to poor computer users. Moreover, as these computer users don't usually understand what they are doing, or what is dangerous and what is not, antivirus software vendors have taken on a new mission: to protect from everything.<br /><br />This is arguably a good mission, but IMHO it has started getting in the way too much. Nowadays, antivirus scanners have their hooks everywhere in the system: they monitor network traffic, they scan every opened (and yet-unopened) file, and they slow the computer down a whole lot. 
Sometimes it feels that we have gone back a few years in terms of computer performance - as if <a href="http://en.wikipedia.org/wiki/Moores_law">Moore's law</a> were broken.<br /><br />However, as everybody knows, nowadays the world is controlled by money - and so are the antivirus scanners. In order to make more profit, vendors tend to frighten users by 'detecting' all kinds of stuff they may have on their computers and declaring it dangerous. Moreover, this also has the side-effect of increasing the size of their 'virus' databases: "We detect 75,000 viruses, we are the greatest!". I wonder how many of these are REAL viruses! Even worse, antivirus makers are now close to controlling the world in some way - they tell users what they can use and what they cannot. They delete software without warning.<br /><br />Recently, McAfee and then Symantec started 'detecting' my open-source tool <a href="http://www.azib.net/ipscan/">Angry IP Scanner</a>. McAfee was the first, and they didn't even give any explanation. After long email discussions they told me that it is a 'potentially unwanted program' for their users and therefore must be deleted. No matter that it is open-source, no matter that it has no installer, is never distributed automatically (the only way to get it is to download it manually), and does not abuse the system in any way. It is just a tiny little exe file. If a user doesn't want it, they can just hit the Delete button - and it's gone! Later, the trend was followed by Symantec. They at least provided some information and classified the program as a 'hacktool'. See <a href="http://securityresponse.symantec.com/security_response/writeup.jsp?docid=2003-092618-3023-99&tabid=1">their description here</a>.<br /><br />In doing so, they both have hurt a lot of their own customers. There are thousands of thankful users of Angry IP Scanner around the world, especially among network administrators. 
I have received a lot of emails asking me to 'fix' this problem, but unfortunately I can't: antivirus makers, chasing their profits, just don't listen to me.<br /><br />So, if you are a user of antivirus software, please help stop the evil: tell your vendors that they have gone too far. They are taking our freedom and ruining our computers.<br /><br />I hope that antiviruses will never become popular on Linux. The near-complete absence of malware dangerous to regular users, together with its freedom, makes it an unbeatable choice. Happy switching, and happy fighting for freedom in this imperfect, cruel world! :-)Anonymoushttp://www.blogger.com/profile/10287795376177313666noreply@blogger.com6