28 December 2007

Preparing photos for a digital picture frame

Challenge of the day was to fit as many photos as possible on a single flash card to stick in a digital photo frame. Here's how it's done.

The frame from Philips goes by the memorable name of 9FF2M4, and by way of a quick review it is very nice. If I were a normal person, I would probably have copied the original 2.5MB / 5 megapixel images to the frame's flash card (1GB Compact Flash in this case, though it can take others), put up with not being able to fit all the photos on, and lived with some of them showing sideways. But being a perfectionist I instead sacrificed precious sleeping time to figure out what to do. In the end I managed to trim the files down to around 200KB each, and put portrait photos on a black background the right way up, in order to save the neck ache of squinting at a sideways Eiffel Tower. This was all done by the power of OSS and bash scripting. Here I present for your convenience the methods I used, and highlight some of the useful things I picked up along the way.

The first thing that taxed me was what size the photos needed to be to display best whilst taking up minimal space. You would think the answer would be emblazoned on the product's box, but no! Philips don't seem to be too keen on promoting the resolution of the display, and even the shopkeeper struggled to give me a number. The owner's manual states: "Resolution: 800 x 480 pixels (viewing area 680 x 480)", but after some time experimenting with test images created with the GIMP I came to the conclusion that it was impossible to get the frame to display an image pixel-perfect, as it seemed to be re-scaling every picture regardless of original size. There appears to be no guidance from Philips as to what a good resolution for the photos would be, so after some experimentation I settled on 800x600, as this is slightly higher than the frame's native resolution and fills the screen nicely without losing too much off the edges when displayed.
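If you want to run the same experiment on your own frame, ImageMagick can knock up test images from the command line instead of doing it by hand in the GIMP. A rough sketch of the sort of thing I mean (assuming ImageMagick is installed; the size and file name are just examples):

# a fine chequer pattern at the frame's claimed resolution;
# any rescaling by the frame shows up as obvious banding or blur
convert -size 800x480 pattern:gray50 test_800x480.png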

The frame does not appear to read orientation from the exif data, so I looked into rotating all the portrait images to display correctly. I am using the frame in its landscape orientation as that is the format of most of the photos, even though it can be placed in portrait orientation. When a portrait photo is displayed (eg 480x600), the frame puts a fair amount of the image off the top and bottom of the display, and by default puts it on a full white background, which is a little hard on the eyes and detracts from darker photos. I therefore opted to create landscape images of 800x600 with a black background for all the portrait photos. I later discovered that you can change the background colour on this frame as follows: Main menu > Slideshow > Background colour > White / Black / Grey.

The process I have used is a little specific to my setup and needs, but hopefully will give you a good starting point. I have created 3 bash scripts that call each other to orchestrate the conversion from my raw photo collection to a new set suitable for the frame, which in turn make use of ImageMagick and exiftran to do the work.

I found out about ImageMagick through searching, and tutorials such as HowTo - Batch Image Resize on Linux. The version packaged with Ubuntu 7.10 is quite old, so I ended up building and installing the latest version (6.3.7) from source to get all the functionality I needed.
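For anyone else building it from source, it's the standard autotools routine. Roughly (assuming you've downloaded and unpacked the ImageMagick 6.3.7 tarball; the exact steps may differ slightly between releases):

# install the usual build tooling, then from inside the unpacked source directory:
sudo apt-get install build-essential
./configure
make
sudo make install

Depending on which image format libraries configure finds, you may also need the relevant -dev packages installed first (libjpeg, libpng and friends) or JPEG/PNG support won't be built in.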

exiftran is a nifty utility that reads the exif orientation information in a photo, losslessly rotates the photo to match and then updates the exif data. It is closely related to jpegtran.
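As a taster, here's exiftran run by hand over a folder of photos. This is the same -ai combination the script further down uses: -a rotates according to the exif orientation tag, -i modifies the files in place.

# losslessly auto-rotate every jpeg in the current folder, in place
exiftran -ai *.JPG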

My folder structure in my home folder (so the scripts make sense):
  • scripts (for bash scripts)
  • photos (originals)
    • 2005
      • 2005-12-31 event name
      • etc
    • 2006
    • etc
  • photos_frame (for the modified and shrunk photos which will be copied onto the flash card)
So without further ado, here are the scripts:

frame.sh - runs the processing scripts on each year folder of interest
#!/bin/bash -v
~/scripts/frame_photo_folder.sh 2005 ~/photos_frame/
~/scripts/frame_photo_folder.sh 2006 ~/photos_frame/
~/scripts/frame_photo_folder.sh 2007 ~/photos_frame/


frame_photo_folder.sh - runs the processing script on each subfolder of the year
#!/bin/bash
#arg 1 = input folder (a year folder, relative to the current directory)
#arg 2 = output folder

INPUTPATH="$1"
OUTPATH="$2"
cd "$INPUTPATH"
if [ ! -d "$OUTPATH$INPUTPATH" ]
then
  echo creating output folder \"$OUTPATH$INPUTPATH\"
  mkdir "$OUTPATH$INPUTPATH"
fi
for fname in *
do
  if [ -d "$fname" ]
  then
    if [ ! -d "$OUTPATH$INPUTPATH/$fname" ]
    then
      echo creating output folder \"$OUTPATH$INPUTPATH/$fname\"
      mkdir "$OUTPATH$INPUTPATH/$fname"
    fi
    echo searching for jpg files in \"$fname\"
    cd "$fname"
    # hand each jpeg in this folder (not subfolders) to the per-photo script
    find . -maxdepth 1 -type f -name \*.JPG | xargs -iimgfile ~/scripts/frame_photo.sh "imgfile" "$OUTPATH$INPUTPATH/$fname"
    cd ..
  fi
done

frame_photo.sh
  • creates output folder(s)
  • copies original photo into output folder
  • uses exiftran to rotate the photo to the correct orientation
  • shrinks the photo to a maximum of 800x600, and fills any remaining space with a black background
#!/bin/bash

#arg 1 = photo file name
#arg 2 = where to put result
#resizes and pads suitable for a photo frame.

INPUTFILE="$1"
OUTPATH="$2"
#pwd
echo copying \"$INPUTFILE\" into \"$OUTPATH\"
cp "$INPUTFILE" "$OUTPATH"
cd "$OUTPATH"
#pwd
#echo processing \"$INPUTFILE\"
# rotate in place to match the exif orientation tag
exiftran -ai "$INPUTFILE"
# shrink (never enlarge) to fit within 800x600, then pad to exactly 800x600 with black
convert "$INPUTFILE" -resize '800x600>' -background black -gravity center -extent 800x600 "$INPUTFILE"


I timed the whole operation using the time command, and copied all output to a log file as follows.

$ time ./frame.sh 2>&1 | tee frame.log

The conversion of around 6000 photos took around one and a half hours.

The concept of redirection of stdout & stderr was neatly explained by the article CLI magic: need redirection?, so now I know that 2>&1 means redirect output number two into output number one - in other words, send stderr to wherever stdout is currently pointing - which then allows you to pipe the whole lot into something else like "tee" (No, not tea, though it may be interesting redirecting my photos into my tea...)
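One related gotcha worth knowing (my own aside, not from the article): the order of the redirections matters, because they are applied left to right.

# both stdout and stderr end up in out.log
somecommand > out.log 2>&1
# stderr goes to the terminal (the old stdout); only stdout lands in the file
somecommand 2>&1 > out.log

Here "somecommand" is just a stand-in for whatever you happen to be running.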

Add a comment or drop me a line if you find it interesting or useful or if you have any questions or criticisms.

Update:
I've worked this script into a small python gui app, check it out at http://github.com/timabell/photo-frame-prep

09 December 2007

Enabling TV-Out on Ubuntu Linux 7.10 on a Dell Inspiron 8500

This weekend, I finally got the tv-out working under linux (Ubuntu 7.10 aka Gutsy Gibbon) on my laptop. Here's what was involved, including some of the (time consuming) red herrings along the way.

I've included the full xorg.conf files for normal display and tv output at the end of this post.

I used the composite video output as that's what I have cables for. I haven't ever tried the s-video output, and I haven't tried the digital audio output since I divorced microsoft windows and threw her things out into the rain a couple of years ago.

The quality is pretty poor, but good enough. I think there's a limit of 800x600 for the video out. I'm getting a fair amount of interference on both the video and audio when the laptop is on mains power and connected to my amplifier / tv. I'm not sure what the cause is, but it's not bad enough to be unusable.

I installed the nvidia proprietary (that's a negative word in case you don't live in my world) drivers some time ago in order to get 3D acceleration, and I think this is a prerequisite to running the tv-out.

In my initial investigation I came across nvoption, which in theory allows you to turn on the tv-out on nvidia cards. I did manage to compile and run it after several hours of trial, error and hunting down build dependencies, but when I finally got it built and running I found that it would seg fault when I hit the "apply" button. Hurrah! In the process of playing with nvoption, however, I noticed the nv-online page that this person has very generously set up. Reading this, it dawned on me that nvoption purely modifies the /etc/X11/xorg.conf file, and that I don't actually need the tool to get tv-out running. I had originally presumed (the brother of all ...) that the nvoption tool did some magical proprietary prodding of the graphics card directly. After a bit of searching to find out where the options should go (the device section), I was then able to use the documentation of options in the second frame of the nv-online page to configure my own X. After a bit of experimenting with different options and lots of restarting of the X server (ctrl+alt+backspace), I was able to get the desired result of the display mirrored/cloned on both the lcd and the television.

I tried the nvidia-settings gui tool that comes with the proprietary drivers, but it was no use for this task. This tool modifies the xorg.conf file. It did help me recently with a normal dual screen setup (using a crt monitor plugged into the vga port on the laptop), but it was no help for the tv-out, which was not even mentioned in the interface.

There is a tool called displayconfig-gtk, fairly new to Ubuntu, that allows you to save named display profiles for different configurations (including dual screen, though it didn't quite behave for me). It can be found under System > Administration > Screens and Graphics. This stores an xorg.conf file for each profile in /var/lib/displayconfig-gtk/locations/, and an index file in /var/lib/displayconfig-gtk/locations.conf. This is almost ideal, as I have created a set of xorg.conf files for my various setups; however it doesn't seem to cope with applying these custom xorg files. Additionally, nvidia seem to have a weird way of setting the screen to run at its native resolution of 1920x1600, and this tool doesn't cope with it. This was corrected by selecting the right resolution under System > Preferences > Screen Resolution.

Sadly it looks like there are no tools for easily switching X configuration files, so the process for now involves manually copying the config files. I've created multiple files in /etc/X11, one for each setup, including xorg.conf_lcd and xorg.conf_tv. The switching process is then something along the lines of "cd /etc/X11/", "sudo cp xorg.conf_tv xorg.conf", ctrl+alt+backspace (restart X server).
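To make the switching a little less error prone, a small wrapper script would do the job. This is just a sketch of my own (it assumes the xorg.conf_<name> naming convention described above):

#!/bin/bash
# usage: sudo ./switch-xorg.sh tv    (or lcd, or any other saved profile)
PROFILE="$1"
if [ ! -f "/etc/X11/xorg.conf_$PROFILE" ]
then
  echo "no such profile: /etc/X11/xorg.conf_$PROFILE"
  exit 1
fi
cp "/etc/X11/xorg.conf_$PROFILE" /etc/X11/xorg.conf
echo "switched to $PROFILE - restart X with ctrl+alt+backspace to apply"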

If it's any consolation, I recall the process in windows involved starting from scratch in a distinctly non-intuitive gui and trying to get a whole load of settings just right, so being able to save the settings is a big step up. I think it took a similar amount of time to get tv-out running under windoze. I guess that's the price we pay for allowing companies to deny us access to the hardware specs needed to integrate it properly. I bought this laptop before I knew how much control I was giving away, and I endeavour not to make such mistakes these days.

The "designed for windows xp" sticker has been moved to the equally shiny microwave oven which brings me a small piece of joy when I make porridge in the morning.



xorg.conf for just the laptop screen
# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 1.0 (buildmeister@builder3) Mon Apr 16 20:38:05 PDT 2007

Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
Inputdevice "Synaptics Touchpad"
EndSection

Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
EndSection

Section "Module"
Load "dbe"
Load "extmod"
Load "type1"
Load "freetype"
Load "glx"
EndSection

Section "ServerFlags"
Option "Xinerama" "0"
EndSection

Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
Identifier "Synaptics Touchpad"
Driver "synaptics"
Option "SendCoreEvents" "true"
Option "Device" "/dev/psaux"
Option "Protocol" "auto-dev"
Option "HorizScrollDelta" "0"
EndSection

Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection

Section "Monitor"
# HorizSync source: edid, VertRefresh source: edid
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Sharp"
HorizSync 30.0 - 75.0
VertRefresh 60.0
Option "DPMS"
EndSection

Section "Device"
Identifier "Videocard0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce4 4200 Go"
EndSection

Section "Screen"
Identifier "Screen0"
Device "Videocard0"
Monitor "Monitor0"
DefaultDepth 24
Option "metamodes" "DFP: nvidia-auto-select +0+0"
SubSection "Display"
Depth 24
Modes "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"
EndSubSection
EndSection




xorg.conf for running the tv-out at 800x600, with the laptop displaying the same
# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 1.0 (buildmeister@builder3) Mon Apr 16 20:38:05 PDT 2007

Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
Inputdevice "Synaptics Touchpad"
EndSection

Section "Files"
RgbPath "/usr/X11R6/lib/X11/rgb"
EndSection

Section "Module"
Load "dbe"
Load "extmod"
Load "type1"
Load "freetype"
Load "glx"
EndSection

Section "ServerFlags"
Option "Xinerama" "0"
EndSection

Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
Identifier "Synaptics Touchpad"
Driver "synaptics"
Option "SendCoreEvents" "true"
Option "Device" "/dev/psaux"
Option "Protocol" "auto-dev"
Option "HorizScrollDelta" "0"
EndSection

Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection

Section "Monitor"
# HorizSync source: edid, VertRefresh source: edid
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Sharp"
HorizSync 30.0 - 75.0
VertRefresh 60.0
Option "DPMS"
EndSection

Section "Device"
Identifier "Videocard0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce4 4200 Go"
Option "TwinView" "1"
Option "TwinViewOrientation" "Clone"
Option "MetaModes" "800x600, 800x600;"
Option "TVStandard" "PAL-I"
Option "ConnectedMonitor" "DFP,TV"

EndSection

Section "Screen"
Identifier "Screen0"
Device "Videocard0"
Monitor "Monitor0"
DefaultDepth 24
#Option "metamodes" "DFP: nvidia-auto-select +0+0"
SubSection "Display"
Depth 24
Modes "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"
EndSubSection
EndSection

04 November 2007

Making money with free software

Business model #1!

"Turning capital into code"

I believe there is a viable business to be had running a software company in a novel fashion. Each project to create a new or improved piece of open source software would be funded by multiple contracts with businesses who have a need for the software. The software business would be run as a partnership, a la John Lewis, where the employees own the company.

You can stop reading now, that was it. Move along, no more useful thoughts here.

Still here? Okay, if you insist I'll explain myself. I didn't have time to write you a short blog post, so I wrote you a long one.

I'm always interested in how business is done. And the more I see, the more I start to realize that business doesn't have to involve exploiting people (wanna buy a ringtone?), it is in fact more usually an agreement where both parties benefit.

Sure, I could make a motorcar myself, but by the time I've finished tightening all the bolts in the Meccano set it will probably have cost me fifty grand to make and do nought to sixty in pieces. Much better to buy a production car for fifteen grand and be happy that the manufacturer, sales person, mechanics etc take a percentage of my money as, shock horror (communists look away now) ... profit! Besides, didn't my money come from charging someone enough to make a profit anyway?...

So, having alluded to the fact that I live in a capitalist society, I shall tell you what this has to do with open source / free software in my mind.

By trade and by choice I'm a software developer, turning my hand to whatever variant of software is needed at the time, and not afraid to set up systems and databases if the need arises.

I have for some time now been of the opinion that the open source method of development and distribution is superior to the proprietary (closed source) model, both as a user and as a developer. As a user, I find that community driven software generally does the things that are important well, and is significantly better supported due to the community that gathers around it. As a developer (though I have yet to run an open source project), I think that the direct, unfiltered feedback from users is better, the collaboration between developers is more effective, and there isn't a pressure to deviate from the correct solution to a problem into making pretty things that don't work. Having said all that, all software is written by humans, so some projects fare better than others. There are limited resources on both sides of the fence, so each has its own areas of strength and weakness in terms of quality and completeness.

Okay, so I like open source software (OSS). So what?

Well, you or I can make money with proprietary software by creating something desirable, then charging people for using it, a la Fog Creek. Which fits nicely in the capitalist world, and you can run your business as anything from evil empire (mentioning no names) to tree hugging near-as-damn-it open source and still turn a healthy profit. But that's never sat right with me, because although I see the logic, you aren't really charging for the work you do, you are charging for an artificial restriction on redistribution. In other words, I get enough investment together (time or money) to create the software in the first place, and get paid nothing for it. Then I sell it with virtually no overhead and hope to make up my initial investment, followed by a healthy profit. You can then use the healthy profit to improve the software, fund new software or attempt to crush/buy the competition (maniacal laugh). Sure it works, but what if you could give your software away to the world once it has paid for itself? Wouldn't that make the world a better place? Well, that probably depends on whether you are creating restrictive crapware, but either way the answer is likely to be "Are you nuts?!! Throw away all that free profit?!".

There is another business model around that is working very nicely for some people and companies: the "you want it, you pay for creating it" approach. This comes in two flavours. Very common is hiring a contractor / employee to create exactly what you desire, even if you have no idea how to create software. This can be used to give a business an edge over its competition, or even to allow it to do business at all. This is a very expensive way of creating software (for that company) but is sometimes the only way. Some open source companies are expert at negotiating contracts to create custom software for profit; that software may then join the world's supply of open source software.

Both of the above models fund a fair amount of the open source software available today, but I would like to see more money going into the creation of open source software, as it directly enriches all the people on this planet (money being simply a representation of all available resources, and open source being one of those resources).

It strikes me that there is a gap in between the two methods of funding software outlined above. On the one hand you take a punt that lots of people need something, create it and charge for the result, and on the other hand you find one person / company that needs something and charge them a handsome sum to create it (after which you can do as you please with the result, contract permitting). Given an imaginary need such as "software for tracking individual lumps of coal", what if there are not enough companies who want to track their coal in detail to make it worth taking a punt and using the packaged software model, but equally no individual company can justify paying the extortionate rates us technical mumbo jumbo types charge for such things? Well it just won't get created, and they'll just have to carry on using illegal immigrants to write on their coal with chalk.

At risk of getting to the point: we might find with the coal tracking software scenario that if we were to divide the cost of creating a coal tracker system (CTS) by the number of companies in need of such a system, then it would be a viable project. So maybe we could approach all these companies with a proposal to create a CTS (we love acronyms in IT, we even pretend to know what they mean), and get an agreement that they will jointly fund it. Okay, so some won't go with it, but the more that do, the lower the cost per client.

What has this to do with open source? I figure that seeing as open source is a better development model anyway, it makes sense to run such a project as an open source project. This way, all clients get full control and visibility over the software they have paid for, and can benefit from others' additions. The world then becomes a slightly richer place for everyone (unless it's crapware!). This plan has the added bonus that it panders to my highly unprofitable desire not to charge for things that don't involve effort or materials.

There are many ways to approach getting contracts for such work (all of them hard, no doubt). One could start with a single contract for the full price of the work, with the possibility of paying a lower price if more clients come on board. ie, contract one for 100% of the cost, but reduced to 60% if a second contract is found, with the second contract also paying 60%, giving a 120% income, continuing on this theme as more clients are found. Going for slightly higher than an equal division of costs gives an incentive to find more contracts and reduce the price for the original client. It might prove easier to start with say four contracts that state work and billing are dependent on finding four clients. I don't imagine for a minute this would be easy to swing, even with some of the more open minded companies out there, but I reckon all the worthwhile things in life require some effort (like flying helicopters of any size, for example). Perhaps with the right marketing spin it could happen, and once it is proven to work, it might get easier to win the next contract.

So if this is a successful business in waiting, why would you want to run the business as a partnership and share all that juicy profit with mere employees?! Well, the inspiration for this one comes from my favourite software business writer Joel Spolsky and his explanation in his article "Converting Capital Into Software That Works" that a software company's most valuable assets are its programmers. The long and short of it is that if programmers have a financial buy-in to the company they will care about the company, and if you are successful then the programmers will stick with you for the money. In fact I would go as far as taking the mission statement for such a business straight from the article: "turning capital into code".

I have one last point to cover before you turn off your set, tuck the cat up in bed, post to twitter what it just did and turn out the lights.

If this is such a killer business model, why on earth would I post the intellectual property (don't use this term, or the FSF will get you), er, I mean information, for some smart banana to beat me to it? Simple. Successful business is one percent inspiration, ninety-nine percent perspiration, and it is the successful execution of an idea that makes or breaks a business, much more so than the original idea. In fact I'm sure many businesses start with one idea, then discover the harsh realities of commerce and end up doing something else anyway. And if someone does beat me to it, it will make the software world a richer place, so I say in earnest: good luck to you.

The other side to releasing such information is that I've seen what a positive effect openness has on the business of writing software, and I can see the potential positive effects for a business run in an equally open fashion. Sure, there are a few downsides, such as visibility to the competition, but I think as a rule they are massively overstated, and completely outweighed by the buy-in and good will that you could gain from customers and potential customers by being completely open with them.

I'd love to hear of any ventures in this field, and any feedback, constructive or destructive is always welcome.

Thanks as always for your time. All the best from your host Tim Abell.

xsession sold out

My web host xsession has been bought by namesco, and promptly put domain renewal prices up from £8 to £17. Time for a new web host.

27 October 2007

OSS Contribution Number One!

I've had my first ever patch to an open source project accepted! Yay! (Fanfare please... no? anyone? oh well.)

OK, it's a one-word change to a piece of documentation, but hey, it's still something.

The project is gnucash. For details take a look at the bug report I created to hold the patch. I even got a thanks :-).

03 September 2007

Creating a blogroll

Update 11th Sep 2007:
xsession responded to my support request, and the opml file is now served, complete with the correct mime type.


Update 26th Dec 2009
Now on a linux host so no mime type issues now.

Podcast list added: podcasts.opml.

Now styled with custom xslt file opml.xsl.


As people may want to see my rss and podcast subscriptions, I have created a blogroll for you.

I've started with an OPML file, created by hand and uploaded to my web host. Unfortunately my web host won't (currently) serve the ".opml" file extension so I've had to use .txt.

so http://www.timwise.co.uk/blogroll.opml became http://www.timwise.co.uk/blogroll.opml.txt
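In case you fancy hand-rolling your own, the format is simple enough. A minimal skeleton (the feed shown is a made-up example, not one of my real subscriptions):

<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.1">
  <head>
    <title>blogroll</title>
  </head>
  <body>
    <outline text="News feeds">
      <outline text="Example blog" type="rss" xmlUrl="http://example.com/feed.rss" htmlUrl="http://example.com/"/>
    </outline>
  </body>
</opml>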

I then validated the file with http://validator.opml.org/
I then added my feed to http://share.opml.org/ so you can now see the list at http://share.opml.org/viewsharedfeeds/?user_id=7189
In the opml I've separated podcasts and news feeds, but share.opml doesn't use this info.

There is some controversy over opml, but hell, it does the job. We can all upgrade when a better alternative goes mainstream.

Useful references:
http://www.kbcafe.com/rss/?guid=20051003145153
http://www.rss-tools.com/opml-generators.htm
http://www.bioneural.net/2005/10/09/iblog-opml-bloglines-reading-list/
http://nayyeri.net/archive/2007/02/17/create-a-blogroll-from-opml-files.aspx

16 August 2007

Taking a Microsoft Learning course - my experience so far

I am currently studying for a Microsoft ASP.NET qualification
MCTS: .NET Framework 2.0 Web Applications.
I have paid for and begun one of the Microsoft Learning training courses:
Collection 5160: Core Development with the Microsoft® .NET Framework 2.0 Foundation.

Here are my first impressions. I have posted most of this content on the private boards of the course provider.

I've been pretty pleased with the course so far. The coverage of the material seems good, and I have already learnt quite a few extra things even though I have been using ASP.NET in earnest for a couple of years commercially. The mix of presentation of content works well for me, with the video presentations, factual content, puzzles, quizzes and final lab sessions combining well to reinforce the new material.

The ability to use a virtual machine at the end of each module, loaded with Visual Studio 2005, is essential for those without a copy, handy for those of us who are no longer trapped in Bill's world, and a neat trick even if you do have an msdn subscription.

Most of my feedback is minor annoyances. In my experience any form of education is often imperfect, and this would appear to be on the good side of such things, though it remains to be seen if it gets me through the exams!

My comments:
- Firefox support could be improved (except for the activex stuff of course). I couldn't get into the course at all in firefox. (I don't actually run windows at all at home, so had to fire up a vm just to get in, even though most of the content is no more than html, css & flash).
- Page width and font size seem to be linked, so if I increase the text size I have to scroll side to side, which is a PITA, as I run a super hi-res screen so the default text is tiny. Particularly noticeable on the lab exercise pages.
- Providing a zip of the starter and solution files would be much better for those of us who have visual studio installed locally.
- As someone else said, why do you have to log in to the lab machines? It's trivial to get windows to automatically log a user in.
- I had a connection drop on me (while reading up), it would have been good to be able to reconnect to same session.
- It would be good to see the remaining lab time in the same window as the rdp activex control.
- The keyboard in the VM is set to US, which is a PITA as I have a UK keyboard, so " comes out as @.
- It's not clear if the forum emails you if someone replies. I'm not likely to monitor it, but the answers I get will affect my decision to buy the rest of the courses I'm planning on doing.
- The forums seem to be lacking in input from course staff in helping struggling (paying) students, and in providing technical support.

02 July 2007

Blocking web adverts

A friend asked me to write this up.

To remove all those annoying adverts from the web as you see it:


Job done. Thanks for listening.

Ubuntu screen locking

How to prevent ubuntu from locking the screen when you close the laptop lid.

Thanks to jrib in irc://freenode.net/#ubuntu for this one.

  • Run gconf-editor (with alt+F2)

  • Go to or search for /desktop/gnome/lockdown

  • Tick disable_lock_screen

  • Restart gnome (ctrl+alt+backspace - save your documents first, as it's a bit brutal!)
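If you'd rather skip the pointing and clicking, the same key can (I believe) be set from a terminal with gconftool-2:

# flip the gnome lockdown key so closing the lid no longer locks the screen
gconftool-2 --type bool --set /desktop/gnome/lockdown/disable_lock_screen true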

20 June 2007

backing up your home folder

Here I outline the solution I chose for backing up my life, er... I mean home folder. (I'm sure there's life outside /home/tim somewhere...)

My requirements were:
  • backup to dvd+rw

  • >20GB of data to back up

  • no obscure formats (in case I don't have the backup tool when I need to restore)


I looked at several solutions for backups but ended up writing scripts to meet my needs.

The main script creates a tar file of my home directory, excluding certain items, which is then split into files suitable for writing to dvd+rw discs, with tar based verification, md5sums and file list text files created at the same time.

The reason for splitting into 3 files per disc is that the ISO 9660 spec has a 2GB file size limit, and it's important that the discs are as simple as possible (ie no UDF) to aid recovery in awkward situations. This is also why I avoided compression.

backup_home.sh
#!/bin/bash -v
#DVD+R SL capacity 4,700,372,992 bytes DVD, (see wikipedia on DVD)
#ISO max file size 2GB. 4.38GB/3 = 1,566,790,997bytes = 1,494MB
#1,490MB to leave some space for listings and checksums
tar -cvv --directory /home tim --exclude-from backup_home_exclude.txt | split -b 1490m - /var/backups/tim/home/home.tar.split.
cd /var/backups/tim/home
md5sum home.tar.split.* > home.md5
cat home.tar.split.* | tar -t > home_file_list.txt
cat home.tar.split.* | tar -d --directory /home tim > home_diff.txt
ls -l home.* > home_backup_files.txt


backup_home_exclude.txt
tim/work*
tim/.Trash*
tim/.thumbnails*


This leaves me with a big pile of split files (named .aa, .ab etc) and a few text files. I proceeded to write 3 split files per disc, and put the 4 text files on every disc for convenience. I used gnome's built in DVD writing to create the discs.

I also wanted to verify the md5 checksums as the discs were created, so I wrote another little script to make life easier. This ensures the newly written disc has been remounted properly, and runs the md5 check. So long as the 3 relevant checksums come out correctly on each disc, I can be reasonably confident of recovering the data should I need it.
"eject -t" closes the cdrom, which is handy.

reload_and_verify.sh
#!/bin/bash -v
cd /media
eject
eject -t
mount /media/cdrom
cd cdrom
md5sum -c home.md5
cd /media
eject
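For completeness, restoring is just the reverse: copy the split files off the discs into one folder, check them, then join and extract. A rough sketch (run from the folder holding the copied-back pieces; extracting straight over /home is just one option, you may prefer a scratch directory):

# check the pieces survived, then reassemble and extract the archive
md5sum -c home.md5
cat home.tar.split.* | tar -xvf - --directory /home

Because the archive was created relative to /home with the tim prefix, extracting with --directory /home puts everything back where it came from.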


In addition to the above mechanism (which is a pain at best, mostly due to media limitations), I keep my machines in sync with unison, which I strongly recommend for both technical and non-technical users. I gather it also runs on microsoft (who?), so you might find it useful if you are mid-transition.

28 May 2007

My mum and her super ceramics

Thrown Handled Vase, originally uploaded by Sarah Abell.
My mum has started posting pics of her fab ceramics work on flickr. Go mum!

One of her early pieces has pride of place in my display cabinet and is admired by all. Watch this space for more funky designs as she heads towards the end of uni.

27 May 2007

Get emailed Tim's blog and photos

Can't be bothered to check here and see if I've written anything this month? Great news! You can now have my blog entries and latest flickr photos sent to you by email thanks to the feedblitz service.

You can subscribe to my blog using the email box on the bottom of the right hand menu, or by clicking here.

You can subscribe to my flickr photo feed here (public photos only - you have to be my flickr contact to see photos of friends I post).

starfighter


Whilst looking for backup packages on ubuntu via synaptic by searching for the word "tar", I stumbled across the package "starfighter". So I installed it. Then ran it. And played it. It's rather good fun. Ctrl to fire lasers, space to fire rockets, cursors to get around. Spend your time chasing enemy ships around the screen, and collecting bonuses / money. Has a decent sound track too.

Highly recommended. I love the open source world.



$ apt-cache show starfighter
Package: starfighter
Priority: optional
Section: universe/games
Installed-Size: 368
Maintainer: Debian Games Team
Architecture: i386
Version: 1.1-6
Depends: libc6 (>= 2.4-1), libgcc1 (>= 1:4.1.0), libsdl-image1.2 (>= 1.2.3), libsdl-mixer1.2 (>= 1.2.6), libsdl1.2debian (>> 1.2.7+1.2.8), libstdc++6 (>= 4.1.0), starfighter-data (= 1.1-6)
Filename: pool/universe/s/starfighter/starfighter_1.1-6_i386.deb
Size: 116320
MD5sum: 959f894e78517a3411c3c2656d61b85c
SHA1: ac7e2f458d4bd8c57056e11bb3da8609f35b528c
SHA256: 29c9adee1ee2fb52f1d790254683579e919655ad01bb806a02a59d32abcb8d58
Description: 2D scrolling shooter game
After decades of war one company, who had gained powerful supplying both
sides with weaponary, steps forwards and crushes both warring factions
in one swift movement. Using far superior weaponary and AI craft, the
company was completely unstoppable and now no one can stand in their
way. Thousands began to perish under the iron fist of the company. The
people cried out for a saviour, for someone to light this dark hour...
and someone did.
.
Features of the game:
.
o 26 missions over 4 star systems
o Primary and Secondary Weapons (including a laser cannon and a charge weapon)
o A weapon powerup system
o Wingmates
o Missions with Primary and Secondary Objectives
o A Variety of Missions (Protect, Destroy, etc)
o 13 different music tracks
o Boss battles
.
Homepage: http://www.parallelrealities.co.uk/starfighter.php
Bugs: mailto:ubuntu-users@lists.ubuntu.com
Origin: Ubuntu

10 April 2007

running partimage in batch mode

A continuation of the partimage project.

As it would appear that stdout support doesn't work due to the user interface making use of stdout, I have been figuring out how to make the program run in batch mode, with a little help from KDevelop.

My continued findings:

The help presents a fully batch mode, -B
$ ./partimage --help
===============================================================================
Partition Image (http://www.partimage.org/) version 0.6.5_beta4 [stable]
---- distributed under the GPL 2 license (GNU General Public License) ----

Supported file systems:....Ext2/3, Reiser3, FAT16/32, HPFS, JFS, XFS,
UFS(beta), HFS(beta), NTFS(experimental)

usage: partimage [options] <action> <device> <image_file>
partimage <imginfo/restmbr> <image_file>

ex: partimage -z1 -o -d save /dev/hda12 /mnt/backup/redhat-6.2.partimg.gz
ex: partimage restore /dev/hda13 /mnt/backup/suse-6.4.partimg
ex: partimage restmbr /mnt/backup/debian-potato-2.2.partimg.bz2
ex: partimage -z1 -om save /dev/hda9 /mnt/backup/win95-osr2.partimg.gz
ex: partimage imginfo /mnt/backup/debian-potato-2.2.partimg.bz2
ex: partimage -a/dev/hda6#/mnt/partimg#vfat -V 700 save /dev/hda12 /mnt/partimg/redhat-6.2.partimg.gz

Arguments:
* <action>:
- save: save the partition datas in an image file
- restore: restore the partition from an image file
- restmbr: restore a MBR of the image file to an hard disk
- imginfo: show informations about the image file
* <device>: partition to save/restore (example: /dev/hda1)
* <image_file>: file where data will be read/written. Can be very big.
For restore, <image_file> can have the value 'stdin'. This allows
for providing image files through a pipe.

Options:
* -z, --compress (image file compression level):
-z0, --compress=0 don't compress: very fast but very big image file
-z1, --compress=1 compress using gzip: fast and small image file (default)
-z2, --compress=2 (compress using bzip2: very slow and very small image file):
* -c, --nocheck don't check the partition before saving
* -o, --overwrite overwrite the existing image file without confirmation
* -d, --nodesc don't ask any description for the image file
* -V, --volume (split image into multiple volumes files)
-VX, --volume=X create volumes with a size of X MB
* -w, --waitvol wait for a confirmation after each volume change
* -e, --erase erase empty blocks on restore with zero bytes
* -m, --allowmnt don't fail if the partition is mounted. Dangerous !
* -M, --nombr don't create a backup of the MBR (Mast Boot Record) in the image file
* -h, --help show help
* -v, --version show version
* -i, --compilinfo show compilation options used
* -f, --finish (action to do if finished successfully):
-f0, --finish=0 wait: don't make anything
-f1, --finish=1 halt (power off) the computer
-f2, --finish=2 reboot (restart the computer):
-f3, --finish=3 quit
* -b, --batch batch mode: the GUI won't wait for an user action
* -BX, --fully-batch=X batch mode without GUI, X is a challenge response string
* -y, --nosync don't synchronize the disks at the end of the operation (dangerous)
* -sX, --server=X give partimaged server's ip address
* -pX, --port=X give partimaged server's listening port
* -g, --debug=X set the debug level to X (default: 1):
* -n, --nossl disable SSL in network mode
* -S, --simulate simulation of restoration mode
* -aX, --automnt=X automatic mount with X options. Read the doc for more details
* -UX --username=X username to authenticate to server
* -PX --password=X password for authentication of user to server
===============================================================================


It is not immediately obvious what "X is a challenge response string" means.
I was able to get the program to run to a limited extent after a bit of searching the internet and some trial and error with the option "-B x=y".

Having stepped through the program, it transpires that where I have put "x", the program expects a pattern to match with the title and content of any messages that would otherwise have been shown to the user, and "y" is the pre-programmed response. This is in the "interface_none" section.
"x" has to match the question in the form "message title/message content" and is compared using fnmatch which allows * as a wildcard (anyone got a good reference for fnmatch?).
If the program hits a question for the user, and cannot find a matching answer in the command arguments, "CInterfaceNone::invalid_programmed_response()" fires "exit(8)" and the program dies.
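As a rough illustration of how those patterns behave (fnmatch-style globbing works much like bash's own pattern matching, so this is only an analogy, and the message wording here is made up):

# title and text are joined as "title/text" and matched against the supplied pattern
question="Warning/You need to be root to save a partition"
if [[ "$question" == Warning*root* ]]
then
  echo "match - the programmed response would be used"
fi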

So far I have been running the program as a normal user, which will inevitably fail where it attempts to work with block devices / root owned files & folders. This produces a warning in the user interface, followed by program termination.

To bypass this first "not root" warning, I successfully used this pre-programmed answer:
./partimage -B Warning*=Continue
Alternatively the following is more specific and also works:
./partimage -B Warning*root*=continue

I haven't figured out how to pass more than one predefined answer in batch mode.

The run arguments can be set in KDevelop here:
project > options > debugger > program arguments

Side note:
The program has a base class of user interface defined, and then either instantiates interface_none or interface_newt depending on command line arguments.

If not using full batch mode it helps to set "enable separate terminal for application IO" in KDevelop (project > options > debugger) so that you can see the full user interface. However if the program exits then the console closes and any output is lost.

As part of stepping through the code, I came across a macro, which makes the program harder to follow while debugging as you can't step through it. So I figured out what it did, and wrote out the C++ code it expands to in full:

interface_none.cpp, line 103
#define MB_2(One,Other,ONE,OTHER)       \
int CInterfaceNone::msgBox##One##Other(char *title, char *text, ...) { \
char *result= lookup(title,text,"(unspecified)"); \
va_list al; \
va_start(al,text); \
message_only(#One "/" #Other, title, text, al, result); \
va_end(al); \
if (!strcasecmp(result,#One)) return MSGBOX_##ONE; \
if (!strcasecmp(result,#Other)) return MSGBOX_##OTHER; \
invalid_programmed_response(); \
return 0; \
}

MB_2(Continue,Cancel,CONTINUE,CANCEL)
MB_2(Yes,No,YES,NO)


my expanded version:
//notes: have expanded out macro so I can step through it.
int CInterfaceNone::msgBoxContinueCancel(char *title, char *text, ...) {
char *result= lookup(title,text,"(unspecified)");
va_list al;
va_start(al,text);
message_only("Continue" "/" "Cancel", title, text, al, result);
va_end(al);
if (!strcasecmp(result,"Continue")) return MSGBOX_CONTINUE;
if (!strcasecmp(result,"Cancel")) return MSGBOX_CANCEL;
invalid_programmed_response();
return 0;
}

int CInterfaceNone::msgBoxYesNo(char *title, char *text, ...) {
char *result= lookup(title,text,"(unspecified)");
va_list al;
va_start(al,text);
message_only("Yes" "/" "No", title, text, al, result);
va_end(al);
if (!strcasecmp(result,"Yes")) return MSGBOX_YES;
if (!strcasecmp(result,"No")) return MSGBOX_NO;
invalid_programmed_response();
return 0;
}


creating a ramdisk for testing.
http://www.vanemery.com/Linux/Ramdisk/ramdisk.html
(I am on ubuntu 6.10 here, details may vary)

$ ls -l /dev/ram*
brw-rw---- 1 root disk 1, 0 2007-04-08 20:10 /dev/ram0
brw-rw---- 1 root disk 1, 1 2007-04-08 20:10 /dev/ram1
brw-rw---- 1 root disk 1, 10 2007-04-08 20:10 /dev/ram10
brw-rw---- 1 root disk 1, 11 2007-04-08 20:10 /dev/ram11
brw-rw---- 1 root disk 1, 12 2007-04-08 20:10 /dev/ram12
brw-rw---- 1 root disk 1, 13 2007-04-08 20:10 /dev/ram13
brw-rw---- 1 root disk 1, 14 2007-04-08 20:10 /dev/ram14
brw-rw---- 1 root disk 1, 15 2007-04-08 20:10 /dev/ram15
brw-rw---- 1 root disk 1, 2 2007-04-08 20:10 /dev/ram2
brw-rw---- 1 root disk 1, 3 2007-04-08 20:10 /dev/ram3
brw-rw---- 1 root disk 1, 4 2007-04-08 20:10 /dev/ram4
brw-rw---- 1 root disk 1, 5 2007-04-08 20:10 /dev/ram5
brw-rw---- 1 root disk 1, 6 2007-04-08 20:10 /dev/ram6
brw-rw---- 1 root disk 1, 7 2007-04-08 20:10 /dev/ram7
brw-rw---- 1 root disk 1, 8 2007-04-08 20:10 /dev/ram8
brw-rw---- 1 root disk 1, 9 2007-04-08 20:10 /dev/ram9


create and mount test ramdisk
# mke2fs /dev/ram0
# mkdir /media/ram0
# mount /dev/ram0 /media/ram0

add a test file and unmount the disk
# echo "test data #1." >> /media/ram0/foo.txt
# umount /media/ram0


the above, as a script:
#!/bin/bash
# create and mount test ramdisk
mke2fs /dev/ram0
if [ ! -d /media/ram0 ]; then
mkdir /media/ram0
fi
mount /dev/ram0 /media/ram0
#add a test file and unmount the disk
echo "test file." >> /media/ram0/foo.txt
date >> /media/ram0/foo.txt
cat /media/ram0/foo.txt
umount /media/ram0


Create & run script (as root, because it (un)mounts a file system, and creates a dir in a root owned folder):
$ gedit mkram.sh
$ chmod ug+x mkram.sh
$ sudo ./mkram.sh


Weirdly, partimage won't run in full batch mode without a second part to the -B switch, even if it's set up not to need to ask any questions. Supplying a dummy "x=y" seems sufficient to fool it.

Running as root without asking for a partition description works:
$ sudo ./partimage -d -B x=y save /dev/ram0 ram0.img


Restore image to a different ramdisk and check file:
$ sudo ./partimage -B x=y restore /dev/ram1 ram0.img.000
$ sudo mount /dev/ram1 /media/ram1
$ cat /media/ram1/foo.txt
test file.
Mon Apr 9 12:56:59 BST 2007

Success!

Script for checking file in saved partition:
#!/bin/bash
# mount and check restored ramdisk
if [ ! -d /media/ram1 ]; then
mkdir /media/ram1
fi
mount /dev/ram1 /media/ram1
cat /media/ram1/foo.txt
umount /media/ram1


To debug in KDevelop as root (in ubuntu):
alt-F2 (run)
gksudo kdevelop
open project... (go find existing copy)

So in summary, I have made progress in understanding the ways of this useful utility, and am a step closer to making a useful contribution to the project.

The rambling nature of this post reflects the way in which one begins to understand a new program. Hopefully it's not too hard to follow, or pick out the useful pieces. All feedback gratefully appreciated.

Tim.

30 March 2007

bad geek joke: the bourne shell

Here's one I made earlier (19 Oct 2006 according to my pc):
bourneshell.png
It's modified from http://thebourneidentity.com/, which is incidentally a film I very much like.

For those who don't know, this is the bourne shell:
http://en.wikipedia.org/wiki/Bourne_shell
Which is the father of BASH (/bin/bash, the Bourne Again SHell)

partimage + stdout, existing code

On first inspection it looks like some code already exists for writing an image to stdout (standard output).

image_disk.cpp, line 558
if (strcmp(m_szImageFilename, "stdout"))
{
  //... network output code hidden for clarity ...
}
else // it's stdout
{
  m_fImageFile = stdout;
  showDebug(1, "image will be on stdout\n");
}

Unlike stdin for restore, stdout for save is not currently available in the command line options. I did do a build earlier where I enabled it (which I don't have any more due to my build problems). I managed to pipe an image to hexdump and seemed to be able to see some of the user interface info in the output.

It would seem the problem with the stdout option is that even in batch mode the program outputs interface data to stdout, which then corrupts the image.

I think I shall attempt to remove all the UI stuff, and make it act more like all the other unix tools. Might also try to create a reusable library out of it.

26 March 2007

compiling partimage

Had problems getting partimage to compile on one of my pcs from a fresh checkout.
svn co https://partimage.svn.sourceforge.net/svnroot/partimage/trunk/partimage partimage

The ./autogen.sh script was failing as follows
tim@lap:~/projects/partimage$ ./autogen.sh
Running "autoreconf -vif" ...

autoreconf: Entering directory `.'
autoreconf: running: autopoint --force
autoreconf: running: aclocal -I m4 --output=aclocal.m4t
autoreconf: `aclocal.m4' is unchanged
autoreconf: configure.ac: tracing
autoreconf: running: libtoolize --copy --force
autoreconf: running: /usr/bin/autoconf --force
autoreconf: running: /usr/bin/autoheader --force
autoreconf: running: automake --add-missing --copy
configure.ac: 16: required file `./[config.h].in' not found
Makefile.am:1: AM_GNU_GETTEXT in `configure.ac' but `intl' not in SUBDIRS
automake: Makefile.am: AM_GNU_GETTEXT in `configure.ac' but `ALL_LINGUAS' not defined
autoreconf: automake failed with exit status: 1
Done.


Barked up lots of wrong trees, including looking for missing libraries, gettext config etc.

Turned out to be an old version of automake.

Not sure how my other pc ended up with the right version, but this pc's version was:
$ automake --version
automake (GNU automake) 1.4-p6


Installing a newer version (with some help from command line auto-completion):
$ apt-get install automake[tab]
automake automake1.5 automake1.8
automake1.4 automake1.6 automake1.9
automake1.4-doc automake1.7 automaken

$ sudo apt-get install automake1.9
...
$ automake --version
automake (GNU automake) 1.9.6


After updating automake, the ./autogen.sh script ran, and I could then run ./configure and make successfully, and was left with a binary for partimage in src/client/

Hurrah.

The solution came from a post by Tibor Simko on cdsware.cern.ch:

Re: problem with autoreconf when installing from cvs

* From: Tibor Simko
* Subject: Re: problem with autoreconf when installing from cvs
* Date: Thu, 18 Jan 2007 18:12:20 +0100

Hello:

On Thu, 18 Jan 2007, robert forkel wrote:

> $ autoreconf
> Makefile.am:23: AM_GNU_GETTEXT in `configure.ac' but `intl' not in SUBDIRS
> automake: Makefile.am: AM_GNU_GETTEXT in `configure.ac' but
> ALL_LINGUAS' not defined

Which version numbers of automake, autoconf, and gettext do you have?
E.g. automake versions prior to 1.9 and gettext versions prior to 0.14
will not work.

Best regards
--
Tibor Simko

16 March 2007

Multi-room music at home

me and slimserver

Today, I wanted to play music/radio in more than one room, and since BBC Radio 4 was playing The Archers, that ruled out the FM/Radio 4 simple option!

So, not liking to do anything the simple way, I set about searching for a way to broadcast sound to multiple rooms, preferably with a UDP/multicast type setup. Didn't manage that in the end, but have got something quite cool running.

I initially came across firefly media server, via an article [pdf] in linux magazine, but was put off by its absence from the ubuntu repositories.

I have a mate with a slimdevices box, which is an awesome bit of kit. The server side of it is available for free as it is OSS, and it is in the ubuntu repo (universe), so the install was trivial:
sudo apt-get install slimserver

I could immediately connect with a web browser to http://localhost:9000/ and see the web interface (which is very good), and point any of my media players to http://localhost:9000/stream.mp3 and listen to the selected music. Nice. (Requires mp3 codec support to be installed. See easyubuntu)
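For example, with a command line player (assuming you have mplayer or similar installed, and mp3 support set up as above):

# listen to whatever is queued on the server's playlist for this stream
mplayer http://localhost:9000/stream.mp3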

Two things tripped me up connecting remotely. I had already spotted "Server Settings / Security / Allowed IP Addresses" and added my local subnet, but wasn't able to connect from another pc.
netstat showed that the server had only bound to the local ip address:
$ netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
...
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN
...


By chance I knew about the defaults files in /etc/default on ubuntu. Looking in /etc/default/slimserver, what do I find? Only bind to localhost. Duh!
...
# This limits http access to the local host.
# Comment it out for network-wide access, or change
# to enable a single interface.
HTTP_ADDR=127.0.0.1
...

So, I commented out the HTTP_ADDR line, and restarted slimserver.
sudo /etc/init.d/slimserver restart
Slim server was now listening on *:9000

The other thing that tripped me up is that slimserver doesn't multicast, it maintains an independent stream & playlist for each connected device. So when I connected remotely I hadn't added music to the right playlist. In the web based interface there is a drop down list to select which device's playlist you want to modify. Once I figured that out it all worked. Yay. :-)

Didn't solve the original problem of playing the same audio simultaneously in multiple rooms, but it's cool nonetheless.

13 March 2007

Today's project - partimage enhancement

me and partimage

I recently reorganised the partitions on my laptop, with the help of some invaluable OSS tools.

The laptop was split into OS partitions, with a data and swap partition at the end, but I'd started running out of space. I have since made ubuntu my only OS at home, so no longer require multiple partitions.
My partition table ended up looking something like this: data | OS | more data | swap, and I wanted it to look like this: OS & data | swap, but without having to rebuild (again).

With another linux box available with bags of disc space, I did something like the following:
  • from each data partition and my home folder: tar -cv datafolder | ssh otherbox "cat > laptop/datafolder.tar", which gave me a tarball of all my data
  • boot into knoppix 4
  • use partimage to save an image of the OS partition into the filesystem of another partition
  • scp osimage.img otherbox:laptop/
  • fdisk to set up new partitions
  • pipe the image back into partimage across the wire: ssh otherbox "cat laptop/osimage.img" | partimage .... plus some flags for batch writing to new partition
  • use parted (partition editor) to stretch partition image back up to full size of new partition.
  • fix grub with help from knoppix - hda(0,2) to hda(0,0) or something.
  • remove references to non existent partitions from fstab
Which was all great, but I feel there's a feature missing from partimage. Although it can read an image from stdin for writing to disc, it can't write an image to stdout from disc. This would have saved me some thinking and some hassle. So in the true spirit of OSS, I shall have a go at adding the functionality.
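To make the goal concrete, this is roughly the invocation I'd like to end up with once the feature exists - purely hypothetical at this point, since the current save code doesn't accept "stdout" as the image file (the device and paths here are just examples):

# imagined usage: stream a partition image straight to another box, no local copy needed
partimage -d -B x=y save /dev/hda2 stdout | ssh otherbox "cat > laptop/osimage.img"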

So far, I have grabbed the source from sourceforge's svn server, managed to compile the source (after being confused by a misleading error message) and installed an IDE. I started with Eclipse, as I've been using it a bit recently and really like it, but figure that perhaps the C++ devs aren't likely to be java fans and maybe they would choose something else. So I've installed KDevelop, and will be having a go with that.