Showing posts with label Gnome.

Sunday, November 27, 2011

Reusing Shotwell thumbnails in Nautilus

As I have lots of photos on my machine, thumbnails have started to consume a considerable amount of disk space.

Another problem is that gnome-raw-thumbnailer is no longer enabled by default in Ubuntu (Natty, Oneiric), so my raw photos don't get thumbnailed in Nautilus. And if I enable it manually, thumbnails of vertical photos aren't shown with the correct orientation.

So, I have researched the freedesktop thumbnail spec, the Gnome thumbnailer spec, and how Shotwell stores its thumbnails, and came up with a shell script that reuses Shotwell's thumbnails for Nautilus.

Save the script below as /usr/bin/shotwell-raw-thumbnailer and give it execute permissions:

#!/bin/bash
input=$1
output=$2

if [ -z "$output" ]; then
    echo "Usage: $0 input output"
    exit 1
fi

# strip the file:// prefix and percent-decode the URI passed by Gnome
file=`echo -n "${input##file://}" | perl -pe 's/%([0-9a-f]{2})/sprintf("%s", pack("H2",$1))/eig'`
# freedesktop thumbnails are named after the MD5 of the original URI
md5=`echo -n "$input" | md5sum | awk '{print $1}'`

shotwell_id=`sqlite3 ~/.shotwell/data/photo.db "select id from PhotoTable where filename = '$file'"`
if [ -z "$shotwell_id" ]; then
    gnome-raw-thumbnailer "$input" "$output"
    exit
fi

# Shotwell names its thumbnails after the photo's database id, in hex
thumb=`printf ~/.shotwell/thumbs/thumbs128/thumb%016x.jpg $shotwell_id`
if [ ! -e "$thumb" ]; then
    gnome-raw-thumbnailer "$input" "$output"
    exit
fi

replaceWithLink() {
    sleep 1
    ln -sf "$thumb" ~/.thumbnails/normal/"$md5".png
}

# gnome-thumbnail-factory doesn't support links
cp "$thumb" "$output"

# however, linked thumbnails work, so replace the copy with a link after a delay
replaceWithLink &

To make it work, you then need to register it as a thumbnailer in Gnome. Put this into /usr/share/thumbnailers/shotwell.thumbnailer:
[Thumbnailer Entry]
Exec=/usr/bin/shotwell-raw-thumbnailer %u %o
MimeType=image/x-3fr;image/x-adobe-dng;image/x-arw;image/x-bay;image/x-canon-cr2;image/x-canon-crw;image/x-cap;image/x-cr2;image/x-crw;image/x-dcr;image/x-dcraw;image/x-dcs;image/x-dng;image/x-drf;image/x-eip;image/x-erf;image/x-fff;image/x-fuji-raf;image/x-iiq;image/x-k25;image/x-kdc;image/x-mef;image/x-minolta-mrw;image/x-mos;image/x-mrw;image/x-nef;image/x-nikon-nef;image/x-nrw;image/x-olympus-orf;image/x-orf;image/x-panasonic-raw;image/x-pef;image/x-pentax-pef;image/x-ptx;image/x-pxn;image/x-r3d;image/x-raf;image/x-raw;image/x-rw2;image/x-rwl;image/x-rwz;image/x-sigma-x3f;image/x-sony-arw;image/x-sony-sr2;image/x-sony-srf;image/x-sr2;image/x-srf;image/x-x3f;

So, what does this script do?
  • When Gnome (or Nautilus) needs a thumbnail, it runs this script
  • The script checks if the image has an entry in the Shotwell database (~/.shotwell/data/photo.db)
  • Then it checks if Shotwell has a thumbnail for it (in ~/.shotwell/thumbs)
  • If yes, the script returns the already generated thumbnail to Gnome - no generation needed, so it works much faster
  • If Shotwell doesn't have the thumbnail, the call is delegated to gnome-raw-thumbnailer, which generates a new thumbnail the old-fashioned way
  • If Shotwell's thumbnail was used, the script asynchronously replaces the thumbnail in ~/.thumbnails with a link to Shotwell's file, avoiding a duplicate copy on disk

The last step is the one that saves disk space. Unfortunately, it is not possible to return a link to Gnome right away - it can't read it for some reason. However, putting a link directly under ~/.thumbnails later works perfectly, even though we put a .jpg file under a .png name (as the spec requires). PNG is actually a worse choice for photo thumbnails due to its lossless compression, so the disk savings with this script are more than twofold.
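The naming scheme used in ~/.thumbnails is easy to reproduce by hand - per the freedesktop spec, the thumbnail name is just the MD5 of the original URI (the URI below is a made-up example):

```shell
uri='file:///home/user/Photos/IMG_0001.CR2'   # hypothetical photo URI
md5=$(printf '%s' "$uri" | md5sum | awk '{print $1}')
echo ~/.thumbnails/normal/$md5.png
```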

The next step would be to rewrite this in C or Vala to make it even faster, and maybe even make Shotwell create these links right away when it generates its thumbnails.


Tuesday, October 19, 2010

Simple Rsync GUI: easy backups from Nautilus

Most often, making backups of your important files is a manual process - especially if you are dealing with large collections of photos.

So I have written a small and convenient Nautilus script (for Gnome users) for doing exactly that.

Features:

  • Syncs to any mounted location or over SSH (everything that rsync supports)
  • Remembers previously used locations
  • Preview of changes (any deletions are shown first, but performed the last)
  • Nice progress bar with upload speed display

Everything is written as a simple bash script using Zenity for the GTK GUI - just drop it into the ~/.gnome2/nautilus-scripts directory, and it will appear in the Nautilus right-click menu, under Scripts.




Don't forget - this is all just a frontend for rsync (which you are too lazy to run from the command line).

Dependencies: nautilus, zenity, rsync, bash

And now, here is the source (save to ~/.gnome2/nautilus-scripts/Sync):
#!/bin/bash
# Nautilus script to sync specified folder to another destination via rsync.
# Put this to ~/.gnome2/nautilus-scripts
# Written by Anton Keks (BSD license)

paths_file=$(readlink -f "$0").paths
touch "$paths_file"
sources=`awk -F'|' '{print $1}' $paths_file`

if [ "$1" ]; then
  source=$1 
else
  # add current directory also to the list
  sources=`echo -e "$sources\\n$PWD" | sort -u`
  # ask the user to choose one of the sources
  source=`zenity --list --title="Sync source" --text="No source was specified. Please choose what you want to sync" --column=Source "$sources" Other...` || exit 1
  if [ "$source" = Other... ]; then
    source=`zenity --entry --title="Sync source" --text="Please enter the source path on local computer" --entry-text="$PWD"` || exit 1
  fi
fi

# normalize and remove trailing /
source=`readlink -f "$source"`
source=${source%/}

if [ ! -d "$source" ]; then
  zenity --error --text="$source is not a directory"; exit 2
fi

if [ "$2" ]; then
  # TODO: support multiple sources
  zenity --warning --text="Only one directory can be synced, using $source"
fi

# find matching destinations from stored ones
destinations=""
for s in $sources; do
  if echo "$source" | fgrep -q "$s"; then
    dest=`fgrep "$s" $paths_file | awk -F'|' '{print $2}'`
    suffix=${source#$s}
    suffix=${suffix%/*}
    destinations="$destinations $dest$suffix" 
  fi
done

# ask the user to choose one of the matching destinations or enter a new one
dest=`zenity --list --title="Sync destination" --text="Choose where to sync $source" --column=Destination $destinations New...` || exit 3
if [ "$dest" = New... ]; then
  basename=`basename "$source"`
  dest=`zenity --entry --title="Sync destination" --text="Please enter the destination (either local path or rsync's remote descriptor), omitting $basename" --entry-text="user@host:$(dirname $source)"` || exit 3
  echo "$source|$dest" >> $paths_file
fi

# check if user is not trying to do something wrong with rsync
if [ `basename "$source"` = `basename "$dest"` ]; then
  # sync contents of source to dest
  source="$source/"
fi

log_file=/tmp/Sync.log
rsync_opts=-rltEorzh
echo -e "The following changes will be performed by rsync (see man rsync for info on itemize-changes):\\n$source -> $dest\\n" > $log_file
( echo x; rsync -ni $rsync_opts --delete "$source" "$dest" >> $log_file 2>&1; echo $? > $log_file.result ) | zenity --progress --pulsate --auto-close --width=350 --title="Retrieving sync information"
# the pipeline runs in a subshell, so pass rsync's exit code via a file
rsync_result=`cat $log_file.result`

if [ $rsync_result -ne 0 ]; then
  zenity --error --title="Sync" --text="Rsync failed: `cat $log_file`"; exit 4
fi

num_files=`wc -l < $log_file`
num_files=$((num_files-3))

if [ $num_files -le 0 ]; then
  zenity --info --title="Sync" --text="All files are up to date on $dest"; exit
fi

zenity --text-info --title="Sync review ($num_files changes)" --filename=$log_file --width=500 --height=500 || exit 4

num_deleted=`fgrep -c delet $log_file`
if [ $num_deleted -ge 100 ]; then
  zenity --question --title="Sync" --text="$num_deleted files are going to be deleted from $dest, do you still want to continue?" --ok-label="Continue" || exit 4
fi

rsync_progress_awk="{ 
 if (\$0 ~ /to-check/) {
  last_speed=\$(NF-3)
 }
 else {
  print \"#\" \$0 \" - \" files \"/\" $num_files \" - \" last_speed;
  files++;
  print files/$num_files*100 \"%\";
 }
 fflush();
}
END {
 print \"#Done, \" files \" changes, \" last_speed
}"

# note: delete-delay below means that any files will be deleted only as a last step
rsync $rsync_opts --delete-delay --progress "$source" "$dest" | awk "$rsync_progress_awk" | zenity --progress --width=350 --title="Synchronizing $source" || exit 4
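The basename comparison near the end of the script implements rsync's trailing-slash rule: "src/" syncs the contents of src into dest, while "src" would create dest/src. A standalone sketch of that logic, with hypothetical paths:

```shell
source="/home/user/Photos/"
dest="user@host:/mnt/backup/Photos"
source=${source%/}                   # normalize: /home/user/Photos
if [ "$(basename "$source")" = "$(basename "$dest")" ]; then
  source="$source/"                  # same leaf name: sync contents only
fi
echo "$source"
```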


Thursday, December 17, 2009

Deleting thumbnails of nonexistent photos

Freedesktop has had a spec for some years now on how applications should manage image thumbnails. The spec is followed by the majority of Gnome and KDE applications, including F-Spot, which is one of the very few applications that uses large 256x256 thumbnails under ~/.thumbnails/large.

The spec says to store thumbnails in PNG format, naming each file after the MD5 sum of the original file's URL, e.g. 81347ce6c37f75513c5e517e5b1895b8.png.

The problem with the spec is that if you delete or move image files, their thumbnails stay behind and keep taking space (for my 20000+ photos I have 1.4 GB of large thumbnails).

Fortunately, you can clean them up from time to time using simple command-line tricks, as the original URL is stored inside each thumbnail file as the Thumb::URI attribute. I don't recommend erasing all of your thumbnails, because regeneration will take time.

In order to create a list of matching thumbnail-original URL pairs, run the following in a terminal inside either the .thumbnails/large or .thumbnails/normal directory (it will take some time):

for i in *.png; do
  identify -verbose "$i" | \
    fgrep Thumb::URI | sed "s@.*Thumb::URI:@$i@" >> uris.txt
done
This will get you a uris.txt file, where each line looks like the following:
f78c63184b17981fddce24741c7ebd06.png file:///home/user/Photos/2009/IMG_5887.CR2
Note that the thumbnail filenames (first tokens) can also be generated from the URLs (second tokens) using MD5 hashes:
echo -n file:///home/user/Photos/2009/IMG_5887.CR2 | md5sum
After you have your uris.txt file, it can easily be processed with any familiar command-line tools, like grep, sed, awk, etc.
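To see, say, which folders own the most thumbnails, you can group the second tokens by directory (a few sample uris.txt lines are inlined here; in practice just read the file you generated):

```shell
stats=$(printf '%s\n' \
  'a1.png file:///home/user/Photos/2009/IMG_1.CR2' \
  'b2.png file:///home/user/Photos/2009/IMG_2.CR2' \
  'c3.png file:///home/user/Photos/2010/IMG_3.CR2' |
  awk '{print $2}' | sed 's@^file://@@; s@/[^/]*$@@' | sort | uniq -c | sort -rn)
echo "$stats"
```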

For example, in order to delete all thumbnails matching 'Africa', use the following:
for i in `fgrep Africa uris.txt | awk '{print $1}'`; do rm "$i"; done
So, as you can see, it is pretty simple to free a few hundred megabytes (depending on the number of thumbnails you are deleting).
With this kind of trick you can even rename the thumbnails of moved files, using md5sum to generate the new filenames from the new URLs, as shown above. This will save you regeneration time.
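A sketch of that renaming trick, assuming a photo was moved between the two hypothetical paths below (run it inside ~/.thumbnails/large or normal):

```shell
old_uri='file:///home/user/Photos/2009/IMG_5887.CR2'     # where the photo was
new_uri='file:///home/user/Archive/2009/IMG_5887.CR2'    # where it is now
old=$(printf '%s' "$old_uri" | md5sum | awk '{print $1}').png
new=$(printf '%s' "$new_uri" | md5sum | awk '{print $1}').png
echo mv "$old" "$new"   # drop the echo to actually rename
```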


Wednesday, August 26, 2009

Announcing F-Spot Live Web Gallery extension

I am happy to announce a new extension for F-Spot, the popular Linux photo management application - LiveWebGallery. Once installed, invoke it from the Tools menu in F-Spot's main window.

The extension contains a minimal web server implementation that serves the user's gallery over HTTP, so it can be viewed with any web browser, even on Mac and Windows. Now you can easily share your photos with family, friends and colleagues, no matter what operating system and software they use, with just a few mouse clicks in F-Spot. The only requirement is that they have to be on the same network, or be able to reach your machine's IP address in some other way.

As you can see in the screenshot, you can choose whether to share photos with a particular tag, current view in F-Spot (allows you to create an arbitrary query) or the currently selected photos.

To activate the gallery (start the embedded lightweight web server), just click the activate button in the top-right corner. On activation, the URL of the web gallery will appear, allowing you either to open it yourself or to copy the link and give it to other viewers.

After that all the options can still be changed in the dialog and will affect all new viewers or those pressing the browser's reload button.

Most of us already know that many pictures are rarely viewed after they are taken (Point, Shoot, Kiss It Goodbye). F-Spot tries to fix this with its very powerful tagging features - tags make it much easier to find photos taken long ago. This, however, is no magic - finding the right photos when needed depends on how well you tag, and proper tagging can sometimes be a lot of work. This extension makes tagging even more useful, because you can delegate some of that work to other people: the gallery is not read-only - if you choose so, an editable tag can be selected, and viewers can add/remove this tag on photos (currently only in the full photo view). This is especially useful for letting other people tag themselves in your library. For security reasons, editing is disabled by default.

As time goes by, a lot more features can be added to Live Web Gallery extension, especially related to editing photo metadata (tagging, editing descriptions, flagging for deletion).

As far as I know, being able to share your photos on the local network without any software or OS requirements is currently a unique feature of F-Spot - no other photo management application can do this to date.

Downloading

The source code is on Gitorious, in the live_web_gallery branch (until it has been merged into the mainline).

To install, use the Edit->Manage Extensions menu in F-Spot, click Install Add-ins and then Refresh. After that, LiveWebGallery should be available under the Tools category.

Or, alternatively, you can download the precompiled binary and put it to:
~/.config/f-spot/addins or /usr/lib/f-spot/extensions

Note: F-Spot 0.6 is required for the extension to work. You can already find deb/rpm packages of F-Spot 0.6 or 0.6.1 for most distributions, and it will be included in the upcoming distro releases this autumn.

Hopefully, the extension will later be distributed with newer versions of F-Spot by default.

Enjoy! Comments are welcome!


Sunday, May 17, 2009

Versioning your home directory or documents with Git

Git is a relatively new version control system, initially started by Linus Torvalds to manage the source code of the Linux kernel.

Although Randal Schwartz has stated that Git was not designed to version your home directory, it seems that many people are now trying to do so :-)

Some people have used CVS or Subversion for this purpose in the past, but to my mind, Git is better suited for the task for several reasons:

  • Git is grep-friendly (it only stores its metadata in a single .git directory at the root of the working copy)
  • It is very easy to work with a local repository (just do git init and you're ready)
  • Git stores changes very efficiently (even binary files), so not much disk space is wasted, but don't forget to call git gc from time to time
  • Git repository is always available on your computer, even when you are offline, but on the other hand it is very easy to push your changes to a remote repository as well
All these things are much worse with CVS, which spams all versioned directories with CVS subdirs and stores each version of binary files in full. Subversion also requires more effort to set up, is less storage-efficient, and puts .svn subdirs everywhere.

Having said that, my setup is ultra-simple compared to others on the net!

To start versioning your home directory, just run this in the root of your home:
git init
This will initialize an empty local Git repository in ~/.git/ - this is the location that you can use when doing backups, but otherwise you shouldn't care about it anymore.

Then you need to tell Git to track your important files:
git add Documents
git add bin
git add whatever else you want to version
git commit -m "Adding initial files"
Then you can work normally with your tracked files and occasionally commit your changes to the repository with
git commit -a -m "description of changes you have done"
Note the "-a" above - it means to commit changes made to any previously tracked files, so you don't have to use git add again. But don't forget to git add any new files you create before committing.

Use git status to show what files were changed since your last commit. Unfortunately, it will also list all untracked files in your home directory, so you may need to create a .gitignore file. You can get the initial version of this file using this command:
git status | awk '/#/ {sub("/$", ""); print $2}' > .gitignore
then edit it, possibly replacing some full names with '*' patterns. Don't forget to git add and git commit this file as well!
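The command above parses git's human-readable output, which changed in later versions; with a newer Git, the stable --porcelain format is safer. A sketch of the same extraction (run here on sample output, since the parsing only needs awk):

```shell
# sample of `git status --porcelain` output; untracked entries are marked "??"
sample='?? Music/
?? notes.txt
 M Documents/report.odt'
untracked=$(echo "$sample" | awk '$1 == "??" { sub("/$", "", $2); print $2 }')
echo "$untracked"
# in a real repo: git status --porcelain | awk '...' > .gitignore
```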

That's basically it! You may also try the GUI tools provided by git, e.g. gitk or git gui, to browse your changes and make commits if you can't remember the commands.

Moreover, I have some more ideas on how to make all this more automatic that I am going to try later:
  • Put git commit -a to user's crontab in order to commit changes automatically, eg daily
  • Create a couple of nautilus scripts (located in ~/.gnome2/nautilus-scripts) to make adding, committing and other actions available directly from the Nautilus file manager in Gnome.
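The crontab idea from the first bullet could look something like this (the schedule and commit message are just an illustration; add the line with crontab -e):

```
# m h dom mon dow  command
0 20 * * * cd "$HOME" && git commit -a -m "daily automatic commit" >/dev/null 2>&1
```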
Happy versioning! And read the Git tutorial with either man gittutorial or on the official site.


Wednesday, November 26, 2008

Use LightZone from F-Spot as external editor

In a previous post I have shown how to teach LightZone, a non-destructive photo editor, to open files passed from the command line (for some strange reason it doesn't do this out of the box).

Now, I want to be able to use LightZone as an external editor from F-Spot, which I use for my photo workflow.

F-Spot has a convenient Open With menu when right-clicking on a photo; we just need to add LightZone to this menu. After some research, it appears that F-Spot uses the standard desktop entry specification files to populate this menu. These files can be located either in /usr/share/applications or in the user's home, ~/.local/share/applications. You can use either location, but I prefer the latter, because I have unpacked LightZone into my home as well.

Here is the working lightzone.desktop file:

[Desktop Entry]
Version=1.0
Type=Application
Name=LightZone Photo Editor
Exec=LightZone %u
TryExec=LightZone
Icon=/home/USERNAME/LightZone/LightZone_32.png
Terminal=false
Categories=Graphics;2DGraphics;Photography;RasterGraphics;GTK;
StartupNotify=true
MimeType=image/tiff;image/jpeg;image/x-canon-cr2;image/x-canon-crw;image/x-nikon-nef;image/x-pentax-pef;

This file expects that you have followed the previous post and already have LightZone in the PATH which accepts a filename to open on the command-line.
  • Exec - this line specifies what command to run; %u means to pass the selected file's URL on the command line. Through trial and error, I found that F-Spot is only able to pass URLs like this - specifying %f doesn't work with F-Spot. But if you look at the LightZoneOpener.java source, you will see that it supports URLs as well as filenames.
  • Icon - change this to the full path of LightZone's icon; it is supplied in the original archive.
  • Categories - this specifies where LightZone will appear in the ''Applications'' menu.
  • MimeType - here you must list all image mime types that you want LightZone to open. This is especially important for RAW files. As I own a Canon camera, I have most of my photos in CR2 format, so I need to be sure that image/x-canon-cr2 is in the list. I have also specified CRW, NEF and PEF mime types for older Canon, Nikon and Pentax cameras, respectively. These mime types are already registered in Ubuntu Intrepid (not sure about other distributions). Here is some info on how to register new mime types in Gnome, in case your camera's format is not registered yet.



After choosing Open With->LightZone, F-Spot will ask whether to create a new version of the file. Select 'No' - this won't work with RAW files and LightZone anyway, because LightZone automatically saves files with the _lzn.jpg suffix and F-Spot doesn't know about it.

Getting this right requires patching F-Spot (I am going to do that later). For now, you will have to import the saved file manually, if you want it to appear in F-Spot.


Monday, November 24, 2008

Enable HTTP proxy in Gnome automatically

I have a laptop with Linux (currently Ubuntu) which I use both at home and at work. The corporate security policy requires everyone to use an authenticating HTTP proxy server for web access, so when I come to work I have to enable it manually, and then disable it again at home - not very convenient.

As a side note, Firefox 3+ is great at respecting the system-wide proxy configuration (System->Preferences->Network proxy, or gnome-network-preferences), and gnome-terminal is nice enough to set the http_proxy environment variable automatically when a proxy is configured, making most command-line tools respect the global proxy setting as well, which is very cool.
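What gnome-terminal effectively does for those command-line tools is (the proxy address is a made-up example):

```shell
# this is the variable most CLI tools (wget, curl, apt) look for
export http_proxy=http://proxy.example.com:3128/
echo "$http_proxy"
```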

So, until network profiles arrive in Gnome or NetworkManager (I have seen some related commits in Gnome SVN), I still want to enable the proxy automatically depending on my location. Thankfully, NetworkManager supports executing scripts when it brings interfaces up or down, so this is not difficult at all.

At least on Ubuntu, NetworkManager executes the scripts that are located in /etc/NetworkManager/dispatcher.d/ when it brings interfaces up. Inside of the script I can detect whether I am at work by checking the domain name in /etc/resolv.conf provided by the corporate DHCP server, or the beginning of the assigned IP address if domain can't be used for any reason.
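The domain check is easy to try on its own (a sample resolv.conf is inlined below; the real script reads /etc/resolv.conf):

```shell
# a sample resolv.conf, as a corporate DHCP server might provide it
resolv='nameserver 10.0.0.1
domain example.com'
DOMAIN=$(echo "$resolv" | grep '^domain' | sed 's/domain \+//')
echo "$DOMAIN"
```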

OK, here is the working script for Ubuntu Karmic, Jaunty and Intrepid (Gnome 2.24+), see notes below for older versions. I have this script in /etc/NetworkManager/dispatcher.d/02proxy, because 01ifupdown already exists there.

This is an updated version, where I have attempted to make the script suitable for more general use - e.g. in our company we now provide it in a .deb package for all Ubuntu-based laptops.

#!/bin/bash
# The script for automatically setting the proxy server depending on location.
# Put it under /etc/NetworkManager/dispatcher.d/02proxy
# Create also the /etc/NetworkManager/proxy_domains.conf, specifying the mapping of
# DHCP domains to proxy server addresses, eg "example.com proxy.example.com:3128"
# Written by Anton Keks

PROXY_DOMAINS="/etc/NetworkManager/proxy_domains.conf"

# provided by NetworkManager
INTERFACE=$1
COMMAND=$2

function gconf() {
  sudo -E -u $USER gconftool-2 "$@"
}

function saveUserConfFile() {
  echo "DOMAIN_USER=$DOMAIN_USER" > $CONF_FILE
  echo "DOMAIN_PWD_BASE64="`echo $DOMAIN_PWD | base64` >> $CONF_FILE
  echo "PROXY_HOST=$PROXY_HOST" >> $CONF_FILE
  echo "PROXY_PORT=$PROXY_PORT" >> $CONF_FILE
}

function enableProxy() {
  PROXY_HOST=`grep "$DOMAIN" $PROXY_DOMAINS | sed 's/.* \+//' | sed 's/:.*//'`
  PROXY_PORT=`grep "$DOMAIN" $PROXY_DOMAINS | sed 's/.*://'`

  # try a request through the proxy; wget reports ERROR 407 if authentication is required
  http_proxy=http://$PROXY_HOST:$PROXY_PORT/ wget com 2>&1 | grep "ERROR 407"
  if [ $? -eq 0 ]; then
    AUTH_REQUIRED="true"
    CONF_FILE=$HOME/.proxy:$DOMAIN

    if [ ! -e $CONF_FILE ]; then
      DOMAIN_USER=`sudo -E -u $USER zenity --entry --text "Login name for domain $DOMAIN"`
      DOMAIN_PWD=`sudo -E -u $USER zenity --entry --text "Password for domain $DOMAIN" --hide-text`
      saveUserConfFile
    fi

    # load user proxy settings
    . $CONF_FILE
    # decode password
    DOMAIN_PWD=`echo $DOMAIN_PWD_BASE64 | base64 -d`

    # get a Kerberos ticket (if it's configured)
    if echo $DOMAIN_PWD | sudo -E -u $USER kinit $DOMAIN_USER; then
      KLIST_INFO=`sudo -E -u $USER klist | fgrep Default`
      sudo -E -u $USER notify-send -i gtk-info "Domain login" "Kerberos ticket retrieved successfully: $KLIST_INFO"
    fi
  else
    AUTH_REQUIRED="false"
  fi

  # setup proxy
  gconf --type string --set /system/proxy/mode "manual"
  gconf --type bool --set /system/http_proxy/use_http_proxy "true"
  gconf --type string --set /system/http_proxy/host $PROXY_HOST
  gconf --type int --set /system/http_proxy/port $PROXY_PORT
  gconf --type bool --set /system/http_proxy/use_same_proxy "true"
  gconf --type bool --set /system/http_proxy/use_authentication $AUTH_REQUIRED
  gconf --type string --set /system/http_proxy/authentication_user $DOMAIN_USER
  gconf --type string --set /system/http_proxy/authentication_password $DOMAIN_PWD

  # notify
  sudo -E -u $USER notify-send -i gtk-info "Proxy configuration" "Your proxy settings have been set to: $DOMAIN_USER@$PROXY_HOST:$PROXY_PORT"
}

function disableProxy() {
  gconf --type string --set /system/proxy/mode "none"
  gconf --type bool --set /system/http_proxy/use_http_proxy "false"
  gconf --type string --set /system/http_proxy/host ""
  gconf --type bool --set /system/http_proxy/use_authentication "false"
  gconf --type string --set /system/http_proxy/authentication_user ""
  gconf --type string --set /system/http_proxy/authentication_password ""
}

# wait for gnome-settings-daemon to appear, ie until the user logs in
for i in {1..100}; do
  if [ ! `pidof gnome-settings-daemon` ]; then
    sleep 5
    echo "Waiting for gnome-settings-daemon to appear..."
  else
    break
  fi
done
if [ ! `pidof gnome-settings-daemon` ]; then
  echo "gnome-settings-daemon is not running, exiting."
  exit 1
fi

# steal the environment from the current non-root user
XENV=`xargs -n 1 -0 echo </proc/$(pidof gnome-settings-daemon)/environ`
# init the DBUS connection string in order to reach gconfd
eval export `echo "$XENV" | fgrep DBUS_SESSION_BUS_ADDRESS=`
eval export `echo "$XENV" | fgrep USER=`
eval export `echo "$XENV" | fgrep HOME=`
eval export `echo "$XENV" | fgrep DISPLAY=`
eval export `echo "$XENV" | fgrep XAUTHORITY=`

if [ "$COMMAND" != 'up' ]; then
  disableProxy
  exit
fi

DOMAIN=`grep domain /etc/resolv.conf | sed 's/domain \+//'`
# check if we need to set proxy settings for this domain
if [[ -e $PROXY_DOMAINS && ! `grep "$DOMAIN" $PROXY_DOMAINS` ]]; then
  echo "Proxy is not required for domain $DOMAIN"
  disableProxy
else
  echo "Setting proxy for domain $DOMAIN"
  enableProxy
fi

Don't forget to:
  • give this script execute permissions
  • have gconftool-2, zenity and kinit installed (gconf2, zenity, krb5-user packages in Ubuntu). Install gconf-editor as well for a graphical config editor.
  • create /etc/NetworkManager/proxy_domains.conf, specifying the mapping of DHCP domains to proxy server addresses, eg "example.com proxy.example.com:3128". Specify each domain on a new line.
The script doesn't need you to hardcode your username and the proxy password anymore - it will ask you for these values on the first run and then store them in the $HOME/.proxy:$DOMAIN file, so the script is now perfectly usable on multiuser machines and doesn't bug you in the case of 'unknown' domains.
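For reference, the generated per-domain file looks something like this (all values here are made up; note that the password is merely base64-encoded, not encrypted):

```
DOMAIN_USER=jdoe
DOMAIN_PWD_BASE64=c2VjcmV0Cg==
PROXY_HOST=proxy.example.com
PROXY_PORT=3128
```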

For more functionality, it even tries to retrieve a Kerberos ticket for you, if Kerberos is configured properly in /etc/krb5.conf. You can check whether this is the case by running this on the command line:
kinit your-user-name; klist
This works very well for me and saves several mouse clicks every morning :-)

Note to Gnome 2.22 and older users (Ubuntu Hardy, etc.): I initially wrote this script on Hardy, but after upgrading to Intrepid (Gnome 2.24) it stopped working. The reason is that starting from Gnome 2.24, the gconf setting /system/http_proxy/use_http_proxy is no longer the primary one; it has been replaced by /system/proxy/mode, which takes one of three values: 'auto', 'manual' and 'none'. In Intrepid, setting only /system/http_proxy/use_http_proxy as before has no effect - you need to set /system/proxy/mode to 'manual', which will set the old setting to 'true' automatically.

Another thing introduced with Intrepid is the need to set the DBUS_SESSION_BUS_ADDRESS environment variable (the script steals it from the gnome-settings-daemon process) - this is because gconfd has switched from CORBA to DBUS as its communication protocol. If you have an older Gnome, you may omit the lines involving DBUS.

Enjoy!