12 January 2015

LibreOffice mail merge - "data source 'SOURCE' was not found"

So another year on, LibreOffice (via Linux Mint 17.1) still has a dog's breakfast of a mail merge feature. Hey ho; hopefully it might actually get fixed following the fork from OpenOffice and the change in contribution methods.


So I've moved machines, copied my files across, and for some reason my mail merge has soiled itself and now bleats "data source 'SOURCE' was not found", which is as unhelpful as it is infuriating, especially given that the "check connections" button is exactly the wrong place to look for an answer.

Turns out you actually get this error if even just a single field in your document is 'broken'. How do you tell which ones are broken? Well, you have to change them all just to be sure. Sigh.

The fix for me today was as follows (though with such a messy feature there's unlimited ways it can break):

  1. Hit F4 and check that your connection to the spreadsheet actually exists and works, and unbreak anything you find therein. While you're in there you can marvel at how it requires a whole other file (.odb) just to remember how to get to a spreadsheet. (See below for fixing this)
  2. Turn on field names with "View > Field Names" (ctrl+f9) so you can see what the f*** is actually going on. This shows the fully qualified field name, which might even be completely wrong. You can now see that for whatever reason (insanity?) it embeds more than just the field name at the field place-holder.
  3. And finally, the way to actually fix the broken fields it's failing to tell you about lies under the menu item "Edit > Fields", where you can change all the broken references, one at a time, to point at the correct place.
  4. For bonus points, if the field looks right but is silently broken somehow, then you have to change the field to something else, hit okay, and then change it back again for anything to actually change, which is annoying if you have a lot of fields.
Fragile much?

Another fix I've just discovered is you can rename your data source to match the name defined in the fields (assuming they're all the same) and it'll start working again.
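The renaming can also be done from the field side, since an .odt is just a zip archive: per the ODF spec, each merge field in content.xml is a text:database-display element carrying a text:database-name attribute. A hypothetical sketch of patching the fields to match the data source in one pass (the function name and the plain text substitution are mine, not any LibreOffice tool, and it assumes a simple document):

```python
import zipfile

def rename_field_data_source(odt_path, out_path, old_name, new_name):
    """Copy an .odt, rewriting the data source name in every merge field.

    Assumes merge fields live in content.xml as text:database-display
    elements with a text:database-name attribute (the ODF convention);
    a plain text substitution on the attribute value is enough for a
    simple document.
    """
    old_attr = 'text:database-name="{}"'.format(old_name).encode()
    new_attr = 'text:database-name="{}"'.format(new_name).encode()
    with zipfile.ZipFile(odt_path) as src, \
            zipfile.ZipFile(out_path, "w") as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "content.xml":
                data = data.replace(old_attr, new_attr)
            # Re-using the original ZipInfo keeps each entry's
            # compression settings (the ODF mimetype entry must stay
            # uncompressed).
            dst.writestr(item, data)
```

Work on a copy; if the merge still fails afterwards, the field's table or column references may be broken too.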

Fixing the .odb file

If you're stuck on point 1, here's how you fix it. This too is completely non-obvious and full of apparent dead-ends and dubious information.

  1. Give up on trying to do this in Writer; it doesn't seem possible. In spite of false hope from the data sources tool, it only allows you to select .odb (database) files, not spreadsheets.
  2. Open up "libreoffice base", which pops open the database wizard
  3. Choose "connect to an existing database"
  4. In the dropdown choose "Spreadsheet"
  5. Next
  6. Browse for your spreadsheet
  7. Next
  8. Leave "register database for me" selected
  9. Leave "open the database for editing" checked
  10. Finish
  11. It prompts you to save the new database (.odb); I suggest saving it in the same folder as the spreadsheet to save future confusion.
  12. You now have the database open in Base, and you should see your spreadsheet's sheets listed as tables
  13. Open a table (i.e. a sheet) and check you can see the spreadsheet contents
  14. Close "base", saving changes
  15. Return to your writer document
  16. Open the data sources again (F4), you should now be able to browse your spreadsheet via your newly created database.

Simpler than getting planning permission out of a Vogon. :-/

Hope that helps some other poor open source die-hard who has work to do.


10 May 2014

throw vs throw ex vs wrap and throw in c-sharp

I've come across the throw vs throw ex 'debate' a few times, even as an interview question, and it's always bugged me because it's never something I've worried about in my own c# code.


So here's a typical example of the throw vs throw ex thing: https://stackoverflow.com/questions/730250/is-there-a-difference-between-throw-and-throw-ex

Basically it revolves around either messing up the line numbers in your stack trace (throw ex;) or losing a chunk of your stack entirely (throw;) - exceptions 1 and 2 respectively in this nice clear answer: http://stackoverflow.com/a/776756/10245

the third option

I've just figured out why it's never bothered me.

In my own code, whenever I catch and re-throw, I always wrap the original in a new exception to add more context before rethrowing, and this means you get neither of the above problems. For example:

private static void ThrowException3() {
    try {
        DivByZero(); // line 43
    } catch (Exception ex) {
        throw new Exception("doh", ex); // line 45
    }
}

Exception 3:
System.Exception: doh ---> System.DivideByZeroException: Division by zero
  at puke.DivByZero () [0x00002] in /home/tim/repo/puker/puke.cs:51 
  at puke.ThrowException3 () [0x00000] in /home/tim/repo/puker/puke.cs:43 
  --- End of inner exception stack trace ---
  at puke.ThrowException3 () [0x0000b] in /home/tim/repo/puker/puke.cs:45 
  at puke.Main (System.String[] args) [0x00040] in /home/tim/repo/puker/puke.cs:18 

Obviously 'doh' would be something meaningful about the state of that function ThrowException3 in the real world.

Full example with output at https://gist.github.com/timabell/78610f588961bd0a0b95

This makes life much easier when tracking down bugs / state problems later on, particularly if you string.Format() the new message and add some useful state info.
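As an aside of mine (not part of the original C# discussion): Python bakes the same wrap-and-rethrow pattern into the language as exception chaining, where `raise ... from ...` preserves the inner traceback much like passing the inner exception to the constructor does in C#. A minimal sketch:

```python
def div_by_zero():
    return 1 / 0

def throw_exception3():
    try:
        div_by_zero()
    except ZeroDivisionError as ex:
        # Wrap with context before rethrowing; the original traceback
        # is kept on __cause__ and printed as "The above exception was
        # the direct cause of the following exception".
        raise RuntimeError("doh: failed while dividing") from ex
```

Calling throw_exception3() then prints both stack traces, inner first, with no line numbers lost.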

08 March 2014

Why publish open source when you are commercial?

Why open source your commercial projects?
  • Forces you to decouple them from other internal systems.
  • Encourages thinking in terms of reusable modules, which is better for internal reuse just as much as public reuse.
  • Possibility of contributions to systems useful to your business by others.
  • Easier reuse within your organisation (the public internet is a better search and sharing system than any internal systems).
  • Reputation advantages, the best coders often like to work in open and forward-thinking companies, and having public shared code is a great sign of such an organisation.
Do it early
  • Preferably push your very first commit straight to github.
  • Do it before it has a chance to become tightly coupled to internal systems; otherwise you'll have to unpick it, it will be less decoupled than if it had been public from day one, and inertia might mean that in spite of the best intentions you never publish it.
  • You'll have it in mind that every commit is public from day one, which stops you hard-coding internal details and forces you to factor them out into config, which is all round a good thing.
  • Don't wait for your code to be perfect, there are compromises in all code and sharing something imperfect is better than sharing nothing.

Worried about the brand?
  • Commit under personal email addresses and push to personal github accounts. You can always set up a corporate github account later when you are feeling more confident.

Of course I'm not saying you should open source everything, for example your core product's codebase should probably not go on github if you are a product company!


Be brave, be open.
Props to Tom Loosemoore

10 February 2014

Bash command line editing cheat sheet

  • ctrl-a/e start/end of line
  • alt-f/b forward/back a word
  • ctrl-w/alt-d delete to start/end of word
  • ctrl-shift-_ undo (i.e. ctrl-underscore)
  • ctrl-y paste (yank) deleted text
    • alt-y paste older deleted text instead
  • prefix with alt+digit (0-9) to repeat a command, e.g. alt-2 alt-d deletes the next two words
    • start with alt-minus to make the count negative, i.e. operate backwards

Just a few notes I threw together for my own benefit. I finally got around to learning a bit more about editing commands on the Linux shell / terminal.

03 February 2014

Converting kml to gpx with python

Today I wanted to geo-code some of my photos.

I have a digital SLR camera (no GPS of course) and an Android phone. I recorded a track with Google's My Tracks on the phone (not entirely recommended, but it works). I then fired up digiKam to run the geo-correlation and add lat-long to the EXIF of the files, only to discover digiKam doesn't know how to read kml. Fooey.


I looked to gpsbabel, but apparently it can't handle this style of kml file, recognisable by the coordinates being in the following style of markup:

<gx:coord>-1.885348 50.769434</gx:coord>
<gx:coord>-1.885193 50.769328 53.20000076293945</gx:coord>

So I wrote a python script to munge it into gpx shape:
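The script was embedded as a gist which isn't reproduced here, so what follows is my reconstruction of the idea rather than the original kmlToGpx.py: it pairs each <gx:coord> (lon lat [ele]) with the matching <when> timestamp and writes a minimal single-segment gpx track to stdout. The regexes and element names are assumptions based on the sample markup above.

```python
#!/usr/bin/env python3
"""Convert a Google My Tracks kml (gx:coord style) to a minimal gpx."""
import re
import sys

WHEN = re.compile(r"<when>([^<]+)</when>")
COORD = re.compile(r"<gx:coord>([-\d.]+) ([-\d.]+)(?: ([-\d.]+))?</gx:coord>")

def kml_to_gpx(kml_text):
    whens = WHEN.findall(kml_text)
    coords = COORD.findall(kml_text)
    points = []
    # Assumes <when> and <gx:coord> elements come in matching order,
    # as they do in a My Tracks gx:Track.
    for (lon, lat, ele), when in zip(coords, whens):
        ele_tag = "<ele>%s</ele>" % ele if ele else ""
        points.append('<trkpt lat="%s" lon="%s">%s<time>%s</time></trkpt>'
                      % (lat, lon, ele_tag, when))
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<gpx version="1.1" creator="kmlToGpx" '
            'xmlns="http://www.topografix.com/GPX/1/1">\n'
            '<trk><trkseg>\n%s\n</trkseg></trk>\n</gpx>\n'
            % "\n".join(points))

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        sys.stdout.write(kml_to_gpx(f.read()))
```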


This can be run as follows:

./kmlToGpx.py "25-01 12-48.kml" > "25-01 12-48.kml.gpx"

And worked a treat for me.

After I'd done this I discovered my pet tool gpsprune can open the new-style kml. (I forked gpsprune a while ago and added a minor feature.) However, I'm glad to have a command-line tool as I have hundreds of tracks I want to convert.

Incidentally the phone can automatically sync the tracks to google drive, which is kinda handy and then you can download them from the site etc.

07 January 2014

Returning to commercial ASP.NET from Ruby on Rails

Why ASP.NET again after all the noise I made about Ruby on Rails? After a brief stint with commercial Ruby on Rails development I should explain why I've decided my next gig will be an ASP.NET project. In short: currently almost all the Rails work available is in London for digital agencies and start-ups, demanding on-site full-time presence, and I burned out doing 3 hours a day commuting in less than half a year. This is not a sustainable business plan.

The emphasis on start-ups and agencies bodes well for the commercial future of Rails as many of these projects will bloom into large systems needing continuing development. I will continue to use Rails for my own projects (such as the in-progress https://github.com/timabell/symbol-library ). But for me the market in the Reading area seems too quiet to make a business success from just Rails. The final straw was being formally offered a rare local permanent Rails job working with all my favourite open source technologies (Rails, Postgres, Linux etc) only to be handed an employment contract with less job security, rights and benefits than a contractor would have. This confirmed my growing understanding of the local market not being suitable at this time.

So my updated plan of action is to return to providing programming services to the vibrant .NET market in the local area, whilst also working on a database migration product for the same market (still in the research phase), but to keep my hand in with Ruby on Rails with personal projects.

This article is for my LinkedIn audience; if you want to become part of my network or learn more about my professional services, send me a message or invite here: http://www.linkedin.com/in/timabell

04 December 2013

Getting rails 4 up and running with rbenv on Ubuntu 13.10

Brain dump warning!

This is a follow up to http://timwise.blogspot.co.uk/2013/05/installing-ruby-2-rails-4-on-ubuntu.html and is just a list of steps needed to get a clean install of Ubuntu up to speed with an existing site.
  • get a project (includes a .ruby-version file for rbenv, and a Gemfile for bundle)
    • git clone git@github.com:timabell/symbol-library.git
  • sudo apt-get install libssl-dev libreadline-dev
  • rbenv install x.x.x-xxxx
    • autocompletes, yay!
    • ...or better still, it reads from .ruby-version (I think), so you can just run `rbenv install` from within the project folder
  • gem install bundler
    • from the right directory so done for right ruby version
    • rbenv rehash
  • bundle
    • will install all the gems for the project
  • don't sudo apt-get install rbenv ~ doesn't provide sufficiently up to date ruby
  • gem install rails --version 4.0.2 --no-ri --no-rdoc ~ don't need this when you have a gem file with rails in it, bundle will do it for you
  • sudo apt-get install nodejs
    • for javascript runtime (rails server throwing an error without this)
  • bundle exec rails server
  • bundle exec rails console
    • needs readline (see above)
Other stuff I like in my install
This is mostly for my own reference but maybe it'll help someone else out.