Hacking .deb files

This is a follow-up to my previous post about downloading the Ubuntu Flash plugin package from behind a proxy. Rather than sitting through a failed installation, editing the postinst script and re-installing, here is an alternative method: download the package, unpack it, edit the script, then rebuild the package and install it in the normal way using apt-get.

The procedure below uses flashplugin-nonfree as an example package, but this process could be used to edit any .deb with a little care – just change the package name.

Having said that, it’s a quick-and-dirty fix-up and as such only really suitable for simple changes to the control scripts (postinst, prerm, etc.) in an existing package. Major changes to a package’s structure or contents will need more care, and you should take a look at the Debian New Maintainers’ Guide or any of the other HOWTOs and FAQs available on the web for detailed information on how to do this.

Needless to say you’ll need to do much of this as root. Have a care.

  1. Download the updated debian package using apt-get -d install flashplugin-nonfree. This will place the latest version of the package in /var/cache/apt/archives without actually installing it. Note that if you have multiple updates to do, you can use apt-get -d upgrade instead; this will also download any other packages that are currently due an upgrade at the same time – this is fine, they will be installed normally at the end of the process along with the modified package.
  2. Change your working directory to /var/cache/apt/archives then make a backup: cp flashplugin-nonfree_$version_i386.deb /root/
  3. Create a temporary directory structure where you can unpack the archive: mkdir -p /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN
  4. Extract the contents of the .deb: ar -x flashplugin-nonfree_$version_i386.deb. This will result in three files: debian-binary, data.tar.gz and control.tar.gz. You can delete the debian-binary file.
  5. Move data.tar.gz into /tmp/flashplugin-nonfree_$version_i386/debian/ and the control.tar.gz file into /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN. Unpack the archives in these locations and delete the tarballs.
  6. You can now edit the postinst script in /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN to include the proxy settings as outlined in Installing the Flash plug-in on Ubuntu from behind a proxy.
  7. Now you are ready to rebuild the package. Change directory to /tmp/flashplugin-nonfree_$version_i386/ and run dpkg-deb --build debian. This should create a file called debian.deb in the current directory. You may see some warnings about the control file containing user-defined fields – these can be safely ignored.
  8. Now move the debian.deb file into /var/cache/apt/archives using the same filename as the original package: mv debian.deb /var/cache/apt/archives/flashplugin-nonfree_$version_i386.deb.
  9. You should now be able to run apt-get install flashplugin-nonfree or apt-get upgrade and the package will be installed using the new .deb file, complete with proxy information in the postinst script to enable downloading the binary. (The whole procedure is collected into a single shell sketch below.)
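
For convenience, here is the whole procedure collapsed into a rough shell sketch. The package name and architecture match the example above and the version string is passed in as an argument; treat it as a summary of the steps rather than a polished tool – I ran the steps by hand, not as a single script.

#!/bin/sh
# Rough sketch of steps 1-9 above. Run as root.
set -e
PKG=flashplugin-nonfree
VER="$1"                                      # the version apt-get fetched
DEB="${PKG}_${VER}_i386.deb"
WORK="/tmp/${PKG}_${VER}_i386"

apt-get -d install $PKG                       # download only (step 1)
cp "/var/cache/apt/archives/$DEB" /root/      # backup (step 2)

mkdir -p "$WORK/debian/DEBIAN"                # working tree (step 3)
cd "$WORK"
ar -x "/var/cache/apt/archives/$DEB"          # unpack the .deb (step 4)
rm debian-binary
tar -xzf data.tar.gz -C debian/               # package contents (step 5)
tar -xzf control.tar.gz -C debian/DEBIAN/     # control scripts (step 5)
rm data.tar.gz control.tar.gz

${EDITOR:-vi} debian/DEBIAN/postinst          # add the proxy settings (step 6)

dpkg-deb --build debian                       # rebuild; creates debian.deb (step 7)
mv debian.deb "/var/cache/apt/archives/$DEB"  # replace the original (step 8)
apt-get install $PKG                          # install as normal (step 9)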

Flickrtweeter: automatically tweet your flickr pics

A few weeks ago I decided to roll my own script to automatically twitter an update if I posted a photo onto my flickr pages with a certain tag.  I know that there are third-party services out there that can do this for you (e.g. Snaptweet, Twittergram) but I thought it’d be an interesting project to do it myself.  As well as (obviously) requiring flickr and twitter accounts, it also requires a bit.ly account and API key, as it uses this service to produce a shortened URL for the photo to include in the tweet.

The script is written in Perl and is fairly straightforward.  It pulls the Atom feed of my flickr account and checks any photos tagged “twitme” against a list of photos it has already seen and tweeted.  It then passes each new photo’s URL through bit.ly to get a shortened version and builds a tweet using a standard prefix, the photo’s title from flickr, and the bit.ly’ified URL.  It then attempts to post the tweet.

The script uses LWP::Simple for HTTP GETs to flickr and bit.ly, XML::Simple to parse the responses, Storable to maintain a cache file of seen photos, Net::Twitter to talk to twitter itself and URI::Escape to escape the photo’s URL before passing it to bit.ly.  It also uses the sysopen call from Fcntl to manage a lockfile – I run it as a cron job so this seemed a sensible precaution.
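
For what it’s worth, the cron side is nothing special – an entry along these lines, where the script and log locations are placeholders for my own setup:

# hypothetical crontab entry: poll flickr every 15 minutes, logging the output
*/15 * * * * $HOME/bin/flickrtweeter >> $HOME/log/flickrtweeter.log 2>&1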

It can be configured by setting variables at the start of the script.  All are commented (I hope) reasonably clearly.  It can be downloaded and used under the terms of the GNU General Public License.  I originally called it flickr2twitter, but as this appears to be the name of a Firefox add-on I have renamed it flickrtweeter.

Act now to protect Data Protection

[Image: Protecting your bits – Open Rights Group]

The Open Rights Group, of which I am a founder member, has announced a call to action to try and prevent the inclusion of Clause 152 in the Coroners and Justice Bill, due to go before Parliament in the near future.

This clause, should it become law, will essentially remove the protections we enjoy under the Data Protection Act and allow the Government to mandate the sharing of your personal data with no effective oversight.

This means that data you have provided to the Government for one purpose, with a guarantee under law that it would be used solely for that purpose, would become available for other purposes without the need for further consent.  The other purposes could be pretty much anything – this is not necessarily about security or terrorism or immigration control or any of the other hot-button topics Labour have used over the past few years to justify their more authoritarian and intrusive policies.

I’m not going to go over the details any further in this post, my intentions here are to flag the issue and help in a small way to raise awareness.  There is a lot of information on the Bill and this clause available on the internet, follow the link above to the Open Rights Group site or just trawl through the UK news sites for more.

If this concerns you please consider joining the campaign to get this clause removed from the Bill.  Write to your MP, visit your MP, discuss it with friends, family and colleagues – whatever you have time for.

Update 8th March 2009

Great news – it looks like the proposal has been removed from the Bill (Guardian, Telegraph).  One small victory for common sense and reason.

MPs’ Expenses

This is yesterday’s news really but I had to post anyway. The UK government planned to introduce a Statutory Instrument to Parliament today that would have exempted MPs’ expenses from the Freedom of Information Act, but performed a rather embarrassing last-minute U-turn yesterday afternoon (Guardian, BBC). Gordon Brown claimed that this was due to the Tories pulling their support for the measure, but there had also been growing momentum behind a net-based campaign of opposition led by the excellent MySociety, who have claimed at least some of the credit for the U-turn.

Whatever it was that caused Cameron to pull Tory support or really made Gordon and his cronies change their minds, the end result is the right one for Parliament and for Open Government. This was an attempt to exempt themselves from entirely reasonable scrutiny, one they no doubt hoped would slip through unnoticed – they announced their intentions on the same day as the Heathrow runway decision was published. It’s no wonder that they are regarded as cynical and self-serving! Let’s hope that Tom Steinberg is right when he says “There’s no such thing as a good day to bury bad news any more, the Internet has seen to that.”

On a more personal level, the Labour Government has been responsible for introducing a number of quite intrusive proposals and changes to the law which require the public to sacrifice their privacy and anonymity, for example the National Identity Register and the Communications Data Bill. The oft-repeated answer to their civil libertarian critics has been “if you’ve nothing to hide, then you’ve nothing to fear”, and although I think that this statement is deeply flawed I must admit to a certain satisfaction now that they’ve been forced to eat some of their own dog food.

Booking Train Tickets Online

This evening Polly and I were trying to book some tickets for a trip she’s making to see an old friend in Manchester.  First stop was the National Rail Enquiries site, which offers a gateway service to book tickets.  You fill in your preferred date and time of travel, and it lists the tickets that are available.  It then redirects you to a choice of rail operator or third-party sites where you can actually make a purchase, passing through the details of the journeys you have selected.

In theory, this sounds great – a really good use of the web. One place to go and identify what tickets and trains are available leading to a choice of commercial sites where you can make a purchase.  In practice, it turns out to be frustrating and dysfunctional.

We selected the cheapest available tickets and were duly packed off to a train operator’s site to make the purchase.  There were a few more hoops to jump through at the new site, about five screens to page through to confirm the selection, seating preferences, etc.  One of the screens let us select the train and ticket type, with unavailable tickets displayed but inaccessible.

So far so good, but having reached the end of the process the final click redirected back to the front page of the operator’s site with the message that the site was unable to complete the reservation and that we should choose again.  That was it.  There was no guidance as to which part of the process failed.  Was the whole purchasing system down?  Had the tickets we’d selected sold out while we were working our way through the site, and if so which journey was the problem (we were buying two tickets on different trains on different days)?  Something else?

Going back through the process still showed the options we’d selected as available and we were allowed to select them. Going to another operator’s site and trying to purchase the same tickets also showed them as available, but it also failed with a similar message (it looks like all these sites operate from the same back end, which renders the choice of retailer somewhat moot).

Why advertise the tickets as available and then not allow the transaction? Maybe the whole system was struggling, so to test we selected the most expensive tickets available (over £100 each way) and lo and behold, the site allowed the reservation!

By now we’d spent a good twenty minutes mucking around trying to place an order. We were left with trial and error to determine which tickets the system would allow us to buy, and which it would not. We tried numerous combinations and permutations before reaching the conclusion that the cheapest tickets we’d be allowed to purchase came in at just under double the price quoted when we’d performed the initial availability check.

Of course by now we’d invested a fair amount of time in this, plus it’s well known that the number of cheap tickets is limited, so the closer you get to the travel date the more you are likely to have to pay. Even though we’d allowed a few weeks, this is the kind of task you just want settled and out of the way, plus having to go through the process a second time… eugh. So we purchased the tickets.

Now I’m quite sure that the small print on the sites states that availability information isn’t always accurate, and perhaps a more generous soul would be prepared to grant the UK rail industry the benefit of the doubt that they can’t be expected to provide accurate real-time data on ticket availability. But if the purchasing system has the intelligence to know that the reservations aren’t available, could this not be leveraged by the availability search? It felt like we were lured in with a cheap quote, only to find the salesman upping the price at the last minute when our investment of time and energy predisposed us to just accept it. That may not be the reality, but that’s what it felt like.

Who’s been losing your data?

Only those who have had their heads in the sand for the last couple of years could have missed the steady stream of reports of personal data lost by both Government and private companies.

The Open Rights Group, of which I am a member, maintain a page on UK Privacy Debacles listing all the incidents they are aware of.  In December last year they also published a simple online survey that you can use to work out how likely it is that your data has been involved in any of these losses.

I’d recommend running through the survey.  If nothing else it gives you a sense of the amount of personal data lost and the huge range of organisations involved.  Food for thought given the current popularity of big, intrusive database projects with politicians.

Evaluating Atmail as a possible Squirrelmail replacement

I’ve been using Squirrelmail as the webmail interface for my home mail server for a few years now, but recently I thought I’d give some of the alternatives a try to see whether a switch was in order. This post is a look at the Atmail Open webmail client.

Atmail provide commercial email solutions but also make an open source version of their webmail client available under the Apache license.  It’s written in PHP and uses MySQL as a backend, so there were no additional software requirements for my Ubuntu 8.04 server. I’d read a write-up on Linux.com a while back and decided to give it a try.

Installation

The installation instructions on the web site are fairly basic, as are the instructions contained within the tarball itself.  I was slightly alarmed to see some pages recommending use of the MySQL root account to set up the system (e.g. on the Ubuntu forums, in a post that reads like an ad and copies almost verbatim from a page on the Atmail site).  It all looked fairly straightforward, so I decided to fit the install into my existing system as follows.

First I unpacked the tarball in /usr/share to make /usr/share/atmailopen.  I then chown’d this directory to www-data.
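
In commands, that was roughly the following – the tarball name and location are placeholders for whatever you downloaded:

cd /usr/share
tar -xzf /path/to/atmailopen.tgz      # tarball name/location is a placeholder
chown -R www-data:www-data atmailopen # recursive chown is an assumption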

Next, I created a new database in MySQL and a user to access it.  I granted the user all privileges on the new database.  I called both the database and the user “atmail”.
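
That setup boils down to something like this – the password is a placeholder, pick your own:

mysql -u root -p <<'EOF'
CREATE DATABASE atmail;
GRANT ALL PRIVILEGES ON atmail.* TO 'atmail'@'localhost' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;
EOF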

The next step was to add a stanza to my Apache configuration file under /etc/apache2/sites-available.  I maintain a single site definition and Alias in new applications from their homes in the filesystem as required. It’s all served over HTTPS for privacy and security.  I just added a stanza with simple auth and an AllowOverride All statement. Everything else just inherited my default settings.
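
The stanza itself was along these lines – the /usr/share/atmailopen path comes from the install above, but the /atmail alias, auth file and realm name here are examples rather than my exact values:

Alias /atmail /usr/share/atmailopen
<Directory /usr/share/atmailopen>
    AllowOverride All
    AuthType Basic
    AuthName "Webmail"
    AuthUserFile /etc/apache2/htpasswd.users
    Require valid-user
</Directory>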

The final step was to slightly modify /etc/php5/apache2/php.ini with magic_quotes_gpc = Off. I had already increased the upload_max_filesize, post_max_size and memory_limit values for use with Squirrelmail – the first two were also mentioned in the Atmail documentation.
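
For reference, the relevant php.ini lines end up looking like this – the size and memory values are examples, not recommendations:

; magic_quotes_gpc is the new change for Atmail; the other three
; values had already been raised for Squirrelmail
magic_quotes_gpc = Off
upload_max_filesize = 16M
post_max_size = 16M
memory_limit = 64M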

After restarting Apache, I visited the alias I’d set up for Atmail and followed the instructions. It was pretty straightforward: plug in the MySQL database and user details I’d prepared, select my local SMTP server and that was pretty much it. I was presented with a log-on page, and after struggling for a bit before figuring out that it wouldn’t log me in without something after the @ (even though my server just expects a username), I got into my inbox.

First Impressions

In no particular order, here’s a mixture of the things that have struck me about Atmail. I emphasize that these are just impressions, I haven’t spent much time troubleshooting or tweaking yet. I may update this list if my views change or I figure out how to solve any issues.

  • The UI looks quite nice, certainly more modern than Squirrelmail. It’s quite sluggish at times, particularly when accessed remotely.
  • My server backend is Courier IMAP. I have a number of nested folder trees already set up. Atmail displays all the folders in a long column down the left of the browser window, with nested folders labelled as “Parent.Child”. This is accurate enough, but Squirrelmail hides it and lets you expand and collapse a tree instead. It is particularly annoying when trying to drag messages to folders that are below the bottom of the window.
  • Atmail has added its own “Spam” folder alongside my existing “Junk” folder. My Junk folder always falls off the bottom of the browser window due to the point mentioned above, but the Atmail Spam folder is sticky and always appears above the alphabetized folders. I’d rather use the Junk folder, as that is what my other mail applications use. To work around this I have replaced the .Spam folder in my ~/Maildir with a symlink to the .Junk folder (see the snippet after this list).
  • Atmail also doesn’t appear to show unread mail counts for folders other than the Inbox – a cursory inspection of the configuration settings doesn’t show anything obvious to toggle this. Again this is something Squirrelmail does offer – I use Procmail on the server to filter incoming mail, so there are often new messages in various places throughout the tree.
  • I can’t find a setting to force mail to be viewed as plain text; there is one to turn off image loading in HTML mail so that’s something.
  • I can’t find any settings related to viewing folders in threads.
  • It’s not clear how committed Atmail are to the open source client. There don’t appear to have been any updates for a while, and there isn’t a large community of users. Squirrelmail seems far more established in these areas.
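
The Spam-to-Junk workaround mentioned in the list is just a symlink in the Maildir, assuming the usual Courier Maildir++ layout:

cd ~/Maildir
rm -r .Spam       # remove the folder Atmail created
ln -s .Junk .Spam # point Atmail's Spam folder at the existing Junk folder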

Summary

Atmail is OK, but I’m not overwhelmed. I’m not sure there’s a compelling reason to switch – I’m used to Squirrelmail and quite like it, it has an active development and support community, and Atmail doesn’t appear to offer anything over and above it functionally; in fact, when you consider Squirrelmail’s plugin ecosystem, it probably comes in second. I will leave it installed and continue to use both side by side for a while and we’ll see which option ends up being my preferred one. Once I’ve made a final choice, I’ll update this post.

Solaris and Linux xterm terminfo differences

This problem may well be familiar to hardcore UNIX people, but it had me scratching my head for a bit today, so I thought I’d post about it together with my workaround.

I sometimes have cause to use a terminal-based application installed on servers running Solaris 9 that uses function keys F1–F4 to navigate its internal menus. In the past I’ve accessed these from a Windows XP laptop using PuTTY, but recently I’ve started using a Linux machine running Xubuntu. Today I came to try and use this particular application from the Linux box but (leaving aside the fact that the F1 key is reserved for the Help system in Xubuntu’s terminal app) the function keys merely echoed back OQ, OR and OS rather than performing the desired menu operations.

This made me think it was a terminfo-related problem, and a bit of Googling confirmed that this was most likely the root cause. I did some comparisons of the output of infocmp on both systems, and it turns out that the terminfo definition for xterm used on Solaris 9 differs considerably from that on Linux in a number of ways. Specific to my problem, the sequences defined for F1–F4 on Solaris 9 differ from those on Linux as follows:

Key   Solaris 9   Linux
F1    \E[11~      \EOP
F2    \E[12~      \EOQ
F3    \E[13~      \EOR
F4    \E[14~      \EOS

I could see three obvious ways of dealing with this problem.

  1. Distribute an updated xterm terminfo definition to one set of machines.
  2. Create a custom terminfo definition, distribute it to all the machines and use it when ssh’ing between them.
  3. Locate an existing terminfo definition already shared by the machines that meets the immediate requirements.

The disadvantages of (1) include possibly breaking other things that rely on the existing definitions, plus the fact that system policy might prevent changing this kind of setting to meet an individual’s requirement. Option (2) might be a bit more promising, but it means spending the time creating the definition and also assumes no issues with system policy as implied above. Option (3) is the path of least resistance.

In fact (3) is the solution I use. The vt220 definitions are similar enough and match on the all-important function key definitions, so I set $TERM to this value before ssh’ing from my Linux box to the Solaris servers.
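
If you want to see the difference for yourself, compare the function-key capabilities on each system and then present yourself as a vt220 when connecting. The alias below is just a convenience, and the host name is a placeholder:

# run on both systems and compare the kf1-kf4 escape sequences
infocmp -1 xterm | grep 'kf[1-4]='

# workaround: claim to be a vt220 when ssh'ing to the Solaris boxes
alias solssh='TERM=vt220 ssh solaris-host'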

Happy New Year

Here’s the obligatory happy new year post! I would have written it yesterday but I was at work doing my bit of the holiday cover. I’m afraid to say that my resolutions this year are fairly unoriginal:

  • be more healthy, in particular:
    • stop smoking completely once and for all
    • no alcohol on weekday evenings (one just too easily leads to another 😉)
    • start swimming regularly again
  • Spend some time early in the year thinking about just what direction I want to take from here in life – career, home… we’ve often talked of living abroad while the kids are still young… maybe it’s time to start planning ahead a bit more.
  • Become more financially literate. I really need to be more clued up on stuff like mortgages, pensions and general investments, especially given the current economic climate.

And right now that’s about it. Anyway, happy new year to anyone reading this.

Installing the Flash plug-in on Ubuntu from behind a proxy

There is a package for the Adobe Flash Player plug-in for Mozilla-based browsers in the Ubuntu repositories. If you install the package it downloads the plug-in directly from Adobe and installs it on your system. I tried to do this on a machine behind a proxy that requires authentication and it failed, despite having set up the proxy details in Synaptic, as well as in an http_proxy environment variable for both my user account and the root account in the respective .bashrc files.
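
For reference, the environment variable was set along these lines in the .bashrc files – the user, password, host and port are placeholders:

# in ~/.bashrc and /root/.bashrc -- substitute real details
export http_proxy="http://user:passwd@proxy.tld:port/"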

There’s probably an easier way to do this, but to get around the problem I manually edited the flashplugin-nonfree.postinst file in /var/lib/dpkg/info following the failed installation attempt.  This file is a shell script, part of which sets up a wgetrc file for use by wget when downloading the plugin from the Adobe website.  As root, add a section for your HTTP proxy in here, something like the following:

# setting wget options
:> wgetrc
echo "noclobber = off" >> wgetrc
echo "dir_prefix = ." >> wgetrc
echo "dirstruct = off" >> wgetrc
echo "verbose = on" >> wgetrc
echo "progress = dot:default" >> wgetrc
echo "http_proxy=http://user:passwd@proxy.tld:port" >> wgetrc

Then in Synaptic you can mark the flashplugin-nonfree package for reinstallation and it should download and install without further problems.

You may be able to download the package (apt-get -d?), unpack it manually, edit the postinst file, then install to avoid having to sit through a failed attempt at installing first – I haven’t tried this myself. UPDATE – I have now – see Hacking .deb files.