BristolCon 2012

A few days have passed since this year’s BristolCon and I thought I’d best get something down. I’m on the con committee, albeit in a fairly minor role, so I spent much of the day dashing about helping keep things ticking over. I like this; I think it’s a good way to see a small, friendly con like ours. So here’s my very personal and unofficial write-up – just some things that have stuck in my befuddled mind.

The Art Room was a fantastic improvement over previous years – the display stands provided by Roundstone Framing made the place feel really open and were far more aesthetically pleasing than the slightly cobbled-together gazebo of previous years.

Anne Sudworth and Gareth L. Powell’s guest of honour interviews were interesting. Their interviewers, Ian Whates and Kim Lakin-Smith respectively, were very good and both had an excellent rapport with their interviewee. Colin Harvey’s Ghost of Honour session was poignant, and I tried my best not to screw up the projections.

As for panels, I kept finding myself focussed on practicalities – watching the time, making sure the panellists had water and clean glasses, or helping out with the sound (the PA in programme room 1 was generously supplied by Del Lakin-Smith, who was very patient with my fumbling attempts to help him set up first thing) – but I particularly remember the Colonising the Solar System and Women in Sensible Armour discussions.

Later on, Gareth’s monkey was a high point, Talis Kimberley and her band performed to their usual excellent standard (although I didn’t listen to as much of this as I should have), and the quiz was, well, too hard!

I met plenty of new people, all of whom had complimentary things to say about the con. I got Philip Reeve, due to be a Guest of Honour at BristolCon 2013, to sign a copy of his latest book for my daughters.  I’d hoped to have a quick chat with Marc Gascoigne (even brought my old copy of Titan for him to sign) but missed him after the Colin Harvey memorial – perhaps at a future event. The Colinthology was an excellent buy and contains some really top-class stories, so I can recommend this as not only a good cause but a good read as well.

The rest of the committee and everyone else who helped out did a fantastic job – most of them worked far harder than I did and often in the face of sickness and pain on the day, so well done to all.

On top of it all I didn’t end up with a bad hangover the next day and I even missed the fire and pestilence. A good day all round and I’m already looking forward to next year!

Citrix Receiver for Linux

Citrix provide a version of their Receiver software packaged for Linux. Version 12.1 is current and is available from their website.

I’m currently running Ubuntu 11.10 64-bit (yes, I know, I intend to update soon) so I downloaded the 64-bit debian package and installed it based on the CitrixICAClientHowTo. The installation failed with the following error:

$ sudo dpkg -i icaclient_12.1.0_amd64.deb
Selecting previously deselected package icaclient.
(Reading database ... 204950 files and directories currently installed.)
Unpacking icaclient (from icaclient_12.1.0_amd64.deb) ...
Setting up icaclient (12.1.0) ...
dpkg: error processing icaclient (--install):
subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
icaclient

This is because the postinst script contains a function that tries to determine whether you are using an Intel or ARM type chip, and the logic used to detect Intel hasn’t been updated to include a check for x86_64 – it runs uname -m and looks for "i[0-9]86".

You can fix this by unpacking the deb, editing the regular expression on line 2648 of the postinst script so it also matches x86_64, then rebuilding the deb and installing that. It works fine for me – although bear in mind the dependencies for the package: libc6-i386 (>= 2.7-1), ia32-libs, lib32z1, lib32asound2 and nspluginwrapper.
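
In outline, the unpack/edit/rebuild cycle looks roughly like this – a sketch only, using the stock 12.1 filename; the exact expression you need to change may differ from my copy of the postinst:

$ dpkg-deb -x icaclient_12.1.0_amd64.deb icaclient
$ dpkg-deb -e icaclient_12.1.0_amd64.deb icaclient/DEBIAN
$ $EDITOR icaclient/DEBIAN/postinst    # make the uname -m check accept x86_64 as well as i[0-9]86
$ dpkg-deb -b icaclient icaclient_12.1.0_amd64.fixed.deb
$ sudo dpkg -i icaclient_12.1.0_amd64.fixed.deb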

I’ve posted simple instructions for rebuilding debian packages before, but there’s lots of info out there on the web.

Hacking .deb files

This is a follow-up to my previous post about downloading the Ubuntu Flash plugin package from behind a proxy. Rather than having to go through a failed installation, editing the postinst script then re-installing, here is an alternative method where the package is downloaded, unpacked, the script edited then the package rebuilt and installed in the normal way using apt-get.

The procedure below uses flashplugin-nonfree as an example package, but this process could be used to edit any .deb with a little care – just change the package name.

Having said that, it’s a quick-and-dirty fix-up, and as such really only suitable when you need to make simple changes to the control scripts – postinst, prerm, etc. – in an existing package. Major changes to a package’s structure or contents will need more care, and you should take a look at the Debian Maintainers Guide or any of the other HOWTOs and FAQs available on the web for detailed information on how to do this.

Needless to say you’ll need to do much of this as root. Have a care.

  1. Download the updated debian package using apt-get -d install flashplugin-nonfree. This will place the latest version of the package in /var/cache/apt/archives without actually installing it. Note that if you have multiple updates to do, you can use apt-get -d upgrade instead; this will also download any other packages that are currently due an upgrade at the same time – this is fine, they will be installed normally at the end of the process along with the modified package.
  2. Change your working directory to /var/cache/apt/archives then make a backup: cp flashplugin-nonfree_$version_i386.deb /root/
  3. Create a temporary directory structure where you can unpack the archive: mkdir -p /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN
  4. Extract the contents of the .deb: ar -x flashplugin-nonfree_$version_i386.deb. This will result in three files, debian-binary, data.tar.gz and control.tar.gz. You can delete the debian-binary file.
  5. Move data.tar.gz into /tmp/flashplugin-nonfree_$version_i386/debian/ and the control.tar.gz file into /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN. Unpack the archives in these locations and delete the tarballs.
  6. You can now edit the postinst script in /tmp/flashplugin-nonfree_$version_i386/debian/DEBIAN to include the proxy settings as outlined in Installing the Flash plug-in on Ubuntu from behind a proxy.
  7. Now you are ready to rebuild the package. Change directory to /tmp/flashplugin-nonfree_$version_i386/ and run dpkg-deb --build debian. This should create a file called debian.deb alongside the debian/ subdirectory. You may see some warnings about the control file containing user-defined fields – these can be safely ignored.
  8. Now move the debian.deb file into /var/cache/apt/archives using the same filename as the original package: mv debian.deb /var/cache/apt/archives/flashplugin-nonfree_$version_i386.deb.
  9. You should now be able to run apt-get install flashplugin-nonfree or apt-get upgrade and the package will be installed using the new .deb file, complete with proxy information in the postinst script to enable downloading the binary. The whole procedure is consolidated in the sketch below.
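
Here’s the whole procedure as a rough shell sketch. Treat it as illustrative rather than battle-tested: $version is a placeholder you’ll need to set by hand, and the actual postinst edit is left to you.

#!/bin/sh
# Sketch of steps 1-9 above. Run as root; set version to the real string first.
set -e
version=CHANGE_ME
pkg=flashplugin-nonfree_${version}_i386.deb
work=/tmp/flashplugin-nonfree_${version}_i386

apt-get -d install flashplugin-nonfree       # step 1: download only
cp /var/cache/apt/archives/$pkg /root/       # step 2: backup
mkdir -p $work/debian/DEBIAN                 # step 3
cd $work
ar -x /var/cache/apt/archives/$pkg           # step 4: control.tar.gz, data.tar.gz, debian-binary
tar -C debian -xzf data.tar.gz               # step 5: unpack and tidy up
tar -C debian/DEBIAN -xzf control.tar.gz
rm data.tar.gz control.tar.gz debian-binary
vi debian/DEBIAN/postinst                    # step 6: add the proxy settings
dpkg-deb --build debian                      # step 7: produces ./debian.deb
mv debian.deb /var/cache/apt/archives/$pkg   # step 8
apt-get install flashplugin-nonfree          # step 9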

Flickrtweeter: automatically tweet your flickr pics

A few weeks ago I decided to roll my own script to automatically twitter an update if I posted a photo onto my flickr pages with a certain tag.  I know that there are third party services out there that can do this for you (e.g. Snaptweet, Twittergram) but I thought it’d be an interesting project to do it myself.  As well as (obviously) requiring flickr and twitter accounts, it also requires a bit.ly account and API key as it uses this service to produce a shortened URL for the photo to include in the tweet.

The script is written in Perl and is fairly straightforward.  It pulls the Atom feed of my flickr account and checks any photos tagged “twitme” against a list of photos it has already seen and tweeted.  It then passes the photo’s URL through bit.ly to get a shortened version and builds a tweet using a standard prefix, the photo’s title from flickr, and the bit.ly’ified URL.  It then attempts to post the tweet.

The script uses LWP::Simple for HTTP GETs to flickr and bit.ly, XML::Simple to parse the responses, Storable to maintain a cache file of seen photos, Net::Twitter to talk to twitter itself and URI::Escape to escape the photo’s URL before passing it to bit.ly.  It also uses the sysopen call from Fcntl to manage a lockfile – I run it as a cron job so this seemed a sensible precaution.
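
As a taste of what the script does internally, the URL-shortening step can be reproduced from the shell. This is a sketch against the bit.ly REST API as it existed at the time – the login, API key and photo URL are all placeholders:

LONG='http://www.flickr.com/photos/yourname/1234567890/'
ENC=$(perl -MURI::Escape -e 'print uri_escape(shift)' "$LONG")
curl -s "http://api.bitly.com/v3/shorten?login=LOGIN&apiKey=APIKEY&longUrl=$ENC&format=txt"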

It can be configured by setting variables at the start of the script.  All are commented (I hope) reasonably clearly.  It can be downloaded and used under the terms of the GNU General Public License.  I originally called it flickr2twitter but as this appears to be the name of a Firefox Addon I have renamed it flickrtweeter.

Evaluating Atmail as a possible Squirrelmail replacement

I’ve been using Squirrelmail as the webmail interface for my home mail server for a few years now but recently I thought I’d give some of the alternatives out there a try to see whether a switch was in order, so this post is a look at the Atmail Open webmail client.

Atmail provide commercial email solutions but also make an open source version of their webmail client available under the Apache license.  It’s written in PHP and uses MySQL as a backend, so there were no additional software requirements for my Ubuntu 8.04 server. I’d read a write-up on Linux.com a while back and decided to give it a try.

Installation

The installation instructions on the web site are fairly basic, as are the instructions contained within the tarball itself.  I was slightly alarmed to see some pages recommending use of the MySQL root account to set up the system (e.g. on the Ubuntu forums in a post that reads like an ad and copies almost verbatim from a page on the Atmail site).  It all looked fairly straightforward, so I decided to fit the install into my existing system as follows.

First I unpacked the tarball in /usr/share to make /usr/share/atmailopen.  I then chown’d this directory to www-data.

Next, I created a new database in MySQL and a user to access it.  I granted the user all privileges on the new database.  I called both the database and the user “atmail”.

The next step was to add a stanza to my apache configuration file under /etc/apache2/sites-available.  I maintain a single site definition and Alias in new sections from their homes in the filesystem as required. It’s all served over HTTPS for privacy and security.  I just added a stanza with simple auth and an AllowOverride All statement. Everything else just inherited my default settings.
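
For illustration, the stanza was along these lines – the alias name, paths and auth file here are representative guesses at a typical setup rather than a copy of my real config:

Alias /atmail /usr/share/atmailopen
<Directory /usr/share/atmailopen>
    Options FollowSymLinks
    AllowOverride All
    AuthType Basic
    AuthName "Webmail"
    AuthUserFile /etc/apache2/webmail.passwd
    Require valid-user
</Directory>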

The final step was to slightly modify /etc/php5/apache2/php.ini with magic_quotes_gpc = Off. I had already increased the upload_max_filesize, post_max_size and memory_limit values for use with Squirrelmail – the first two were also mentioned in the Atmail documentation.
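
In php.ini terms that amounts to something like the following – the sizes shown are examples rather than my actual values; pick limits to suit the attachments you expect:

magic_quotes_gpc = Off
upload_max_filesize = 16M
post_max_size = 16M
memory_limit = 64M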

After restarting Apache, I visited the alias I’d set up for Atmail and followed the instructions. It was pretty straightforward: plug in the MySQL database and user details I’d prepared, select my local SMTP server and that was pretty much it. I was presented with a log-on page and, after struggling for a bit before figuring out that the page wouldn’t log me in without something after the @ (even though my server just expects a username), I got into my inbox.

First Impressions

In no particular order, here’s a mixture of the things that have struck me about Atmail. I emphasize that these are just impressions; I haven’t spent much time troubleshooting or tweaking yet. I may update this list if my views change or I figure out how to solve any issues.

  • The UI looks quite nice, certainly more modern than Squirrelmail. It’s quite sluggish at times, particularly when accessed remotely.
  • My server backend is Courier IMAP. I have a number of nested folder trees already set up. Atmail displays all the folders in a long column down the left of the browser window; the nested folders are labelled as “Parent.Child”, which is accurate enough, but Squirrelmail hides this and allows you to expand and collapse a tree instead. This is particularly annoying when trying to drag messages to folders that are below the bottom of the window.
  • Atmail has added its own “Spam” folder alongside my existing “Junk” folder. My Junk folder always falls off the bottom of the browser window due to the point mentioned above, but the Atmail Spam folder is sticky and always appears above the alphabetized folders. I’d rather use the Junk folder, as that is what my other mail applications use. To work around this I have replaced the .Spam folder in my ~/Maildir with a symlink to the .Junk folder (see the snippet after this list).
  • Atmail also doesn’t appear to show unread mail counts for folders other than the Inbox – a cursory inspection of the configuration settings doesn’t show anything obvious to toggle this. Again this is something Squirrelmail does offer – I use Procmail on the server to filter incoming mail, so there are often new messages in various places throughout the tree.
  • I can’t find a setting to force mail to be viewed as plain text; there is one to turn off image loading in HTML mail so that’s something.
  • I can’t find any settings related to viewing folders in threads.
  • It’s not clear how committed Atmail are to the open source client. There don’t appear to have been any updates for a while, and there isn’t a large community of users. Squirrelmail seems far more established in these areas.
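
For reference, the Spam/Junk workaround mentioned above amounts to this – Courier-style Maildir dot-folders assumed, and keep the old folder around until you’re sure nothing is still in it:

$ cd ~/Maildir
$ mv .Spam .Spam.old
$ ln -s .Junk .Spam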

Summary

Atmail is OK, but I’m not overwhelmed. I’m not sure there’s a compelling reason to switch – I’m used to Squirrelmail and quite like it, it has an active development and support community, and Atmail doesn’t appear to offer anything over and above it functionally; in fact, when you consider Squirrelmail’s plugin ecosystem, it probably comes in second. I will leave it installed and continue to use both side by side for a while and we’ll see which option ends up being my preferred one. Once I’ve made a final choice, I’ll update this post.

Solaris and Linux xterm terminfo differences

This problem might well be well-known to hardcore UNIX people, but it had me scratching my head for a bit today, so I thought I’d post about it together with my workaround.

I sometimes have cause to use a terminal-based application installed on servers running Solaris 9 that uses function keys F1–F4 to navigate its internal menus. In the past I’ve accessed these from a Windows XP laptop using Putty, but recently I’ve started using a Linux machine running Xubuntu. Today I came to try and use this particular application from the Linux box but (leaving aside the fact that the F1 key is reserved for the Help system in Xubuntu’s terminal app) the function keys merely echoed back OQ, OR and OS rather than performing the desired menu operations.

This made me think it was a terminfo-related problem and a bit of Googling confirmed that this was most likely the root cause. I did some comparisons of the output of infocmp on both systems and it turns out that the terminfo definition for xterm used on Solaris 9 differs considerably from that on Linux in a number of ways. Specific to my problem, the sequences defined for F1–F4 on Solaris 9 are different from those on Linux as follows:

Key   Solaris 9   Linux
F1    \E[11~      \EOP
F2    \E[12~      \EOQ
F3    \E[13~      \EOR
F4    \E[14~      \EOS

I could see three obvious ways of dealing with this problem.

  1. Distribute an updated xterm terminfo definition to one set of machines.
  2. Create a custom terminfo definition, distribute it to all the machines and use it when ssh’ing between them.
  3. Locate an existing terminfo definition already shared by the machines that meets the immediate requirements.

The disadvantages of (1) include possibly breaking other things that rely on the existing definitions, and the fact that system policy might prevent changing this kind of setting to meet an individual’s requirement. (2) is a bit more promising, but it means spending the time creating the definition and also assumes no issues with system policy as implied above. (3) is the path of least resistance.

In fact (3) is the solution I use. The vt220 definitions are similar enough and match on the all-important function key definitions, so I’m currently setting $TERM to this value before ssh’ing from my Linux box to the Solaris servers.
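
Setting it per-session keeps the change contained; something like the following works (the host name is a placeholder), and infocmp is handy for checking what each side thinks the keys send:

$ infocmp -1 xterm | grep 'kf[1-4]='    # inspect the local function-key definitions
$ TERM=vt220 ssh user@solaris9-host     # use the vt220 definition for this session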

Installing the Flash plug-in on Ubuntu from behind a proxy

There is a package for the Adobe Flash Player plug-in for Mozilla-based browsers in the Ubuntu repositories. If you install the package it downloads the plug-in directly from Adobe and installs it on your system. I tried to do this on a machine behind a proxy that requires authentication and it failed, despite having set up the proxy details in Synaptic, as well as in an http_proxy environment variable for both my user account and the root account in the respective .bashrc files.

There’s probably an easier way to do this, but to get around the problem I manually edited the flashplugin-nonfree.postinst file in /var/lib/dpkg/info following the failed installation attempt.  This file is a shell script, part of which sets up a wgetrc file for use by wget when downloading the plugin from the Adobe website.  As root, add a line for your http proxy in here, something like the following:

# setting wget options
:> wgetrc
echo "noclobber = off" >> wgetrc
echo "dir_prefix = ." >> wgetrc
echo "dirstruct = off" >> wgetrc
echo "verbose = on" >> wgetrc
echo "progress = dot:default" >> wgetrc
echo "http_proxy=http://user:passwd@proxy.tld:port" >> wgetrc

Then in Synaptic you can mark the flashplugin-nonfree package for reinstallation and it should download and install without further problems.
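
If you prefer the command line to Synaptic, the equivalent is:

# apt-get --reinstall install flashplugin-nonfree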

You may be able to download the package (apt-get -d?), unpack it manually, edit the postinst file, then install to avoid having to sit through a failed attempt at installing first – I haven’t tried this myself. UPDATE – I have now – see Hacking .deb files.

Dual boot Windows XP and CentOS 5 with NTLDR

I wanted to install CentOS onto a spare partition on a machine with Windows XP already installed, leaving the MBR as it was and using the NTLDR bootloader. CentOS installed fine onto the partition, but the installer only seemed to offer two options for setting up GRUB: installing to the MBR or not installing at all; there wasn’t an obvious way of getting it to install onto the boot sector of the CentOS partition. There might have been some more advanced options in the installer but I didn’t have the time to experiment (I’m fairly new to CentOS/Red Hat/Fedora, being more of a Slackware and Debian-family user normally), so I elected not to install and to set things up afterwards. This leaves CentOS unbootable, but with a bootdisk like sysresccd and a basic knowledge of GRUB it’s fairly straightforward to fix. Here’s a rundown of what I did, which might be useful in a similar situation or when having trouble with a b0rked GRUB installation.

Obviously the partition details specified below are derived from the system I was working on, where the hard disk is /dev/sda, Windows is on /dev/sda1 and CentOS on /dev/sda2. There is a swap partition on /dev/sda3 but it isn’t relevant for this exercise.

Also, you follow these instructions at your peril. It is all too easy to screw up and make your entire system unbootable, lose data or just render the whole process more painful than it has to be. Back up, read relevant documentation before you begin, back up, check the syntax of the commands you type before executing them, and have I said back up?

  1. First, if you are not familiar with GRUB browse the documentation. The steps below are fairly terse and not pitched at total beginners. They also assume a familiarity with basic shell commands.
  2. Once the CentOS install is complete, reboot using sysresccd.
  3. Create a mount point for CentOS’s root partition, e.g. /mnt/centos, and mount it:

    # mkdir /mnt/centos
    # mount -t ext3 /dev/sda2 /mnt/centos
  4. Mount your Windows partition (/mnt/windows is usually present):

    # ntfs-3g /dev/sda1 /mnt/windows
  5. Copy the following files:

    # cp /mnt/centos/usr/share/grub/i386-redhat/stage1 /mnt/centos/boot/grub
    # cp /mnt/centos/usr/share/grub/i386-redhat/stage2 /mnt/centos/boot/grub
    # cp /mnt/centos/usr/share/grub/i386-redhat/*_stage1_5 /mnt/centos/boot/grub
  6. Launch GRUB. Fortunately sysresccd and CentOS 5 both use GNU GRUB 0.97, so there are no compatibility problems.

    # grub
  7. This will dump you into GRUB’s command interpreter. You need to set the root drive and then install GRUB into the partition’s boot sector as follows; output from the commands is represented by ellipses, but note that you might get warning messages – as long as the final message indicates success, you should be fine. Oh, and be careful to ensure you install this onto the correct partition – remember GRUB starts counting partitions at 0, not 1 like Linux!

    grub> root (hd0,1)
    ...
    grub> setup (hd0,1)
    ...
    grub> quit
  8. Next you need to create the file that NTLDR will use to hand off booting CentOS to GRUB, basically the first 512 bytes of the partition:

    # dd if=/dev/sda2 of=/mnt/windows/bootsect.lnx bs=512 count=1
  9. Next modify /mnt/windows/boot.ini to include a line for CentOS at the end (a complete example boot.ini is sketched after this list). The cautious may wish to reboot into Windows to modify this file.

    C:\bootsect.lnx="CentOS 5"
  10. Reboot. You should now see a boot menu offering you Windows and CentOS 5. If you don’t, check C:\boot.ini for a timeout stanza and edit accordingly. Selecting CentOS 5 will dump you back into GRUB’s shell – GRUB is installed but doesn’t have a configuration file set up. You can use the following commands to boot CentOS 5:

    grub> root (hd0,1)
    grub> kernel /boot/vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 rhgb quiet
    grub> initrd /boot/initrd-2.6.18-8.el5.img
    grub> boot
  11. CentOS 5 will then boot and you can complete the installation. Depending upon your choices, you may need to reboot again using the above procedure.
  12. Once complete you can configure GRUB to boot automatically by creating /boot/grub/device.map, /boot/grub/grub.conf and /boot/grub/menu.lst. The device.map file should just contain a single line mapping the linux hard disk device to a GRUB device notation:

    (hd0)   /dev/sda

    The grub.conf file specifies the various boot options, in our case a fairly straightforward single kernel and ramdisk image:

    default=0
    timeout=5
    hiddenmenu
    title   CentOS (2.6.18-8.el5)
            root (hd0,1)
            kernel /boot/vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 rhgb quiet
            initrd /boot/initrd-2.6.18-8.el5.img

    Then, with /boot/grub as your working directory, do:

    # ln -s grub.conf menu.lst
  13. Now when you reboot and select CentOS from the Windows boot menu GRUB should automatically start CentOS after a five second timeout. You can add additional entries to the grub.conf file – custom or testing kernels, memtest86+, etc – then view the menu within the timeout period to select them.
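
For reference, the complete boot.ini from step 9 ends up looking something like this on a typical single-disk XP install – the ARC path for the Windows entry will vary with your disk layout:

[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
C:\bootsect.lnx="CentOS 5"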

I’m more than happy to get feedback on stuff like this, even if it’s just to tell me I’m an idiot and could’ve done it in a simpler way. There’s no commenting system here at the moment, but please feel free to email me with questions or suggestions, or even if you just found this useful.

Weblog reboot

It’s remotely possible that last night someone might have dropped by this site to be greeted by an HTTP 500 or pulled a feed with some rather old content, as I finally got around to re-posting the content from between November 2002 and March 2005. I’d hoped to recategorise everything into the simpler structure I created following the domain move and rebuild, but that’s just not going to happen so I have dumped the entire structure of the old site under weblog/original. There’s an archive page; I intend to merge this with the current site’s archive, so bear that in mind. I might also whip up a sitemap type page where you can browse by topic/directory.

It wasn’t too difficult a task, but it did involve a bit of fiddling and some quick fixes once the content and metadata files were in place, as I’d overlooked the fact that I switched file extensions when redesigning the site and this initially caused me a bit of headscratching. Thanks to sed, find, xargs and rename I soon got it sorted out. If you ever need to change multiple file extensions in a directory hierarchy on a linux box (in this case .txt to .blx) try this for size:

find ./path/ -iregex ".+\.txt$" -print0 | xargs -0 rename "s/\.txt$/\.blx/"

or maybe:

find ./path/ -iregex ".+\.txt$" -exec rename "s/\.txt$/\.blx/" '{}' \;

But you’ll find the first command runs faster; either way, it’s worth doing a test run first by adding the -n switch to rename.

Unfortunately a lot of the markup in the older posts is HTML rather than XHTML, so you might get XML parser errors on some pages if you’re using a browser that can handle application/xhtml+xml (Firefox, Opera). I’m slowly working my way through the site trying to fix these problems but it might take me a while, so please bear with me – a lot of the older posts were handcoded using a variety of text editors on various platforms, often in a hurry, and markup wasn’t always my primary consideration. Also, I’ve yet to get the final batch of redirects from the old URL sorted, so there might be the odd dead link or missing image, but that should be fixed faster than the parser errors.

Ubuntu Dapper HAL update DOESN’T break USB mass storage automounting

The recent update to HAL available for Ubuntu Dapper seems to break USB mass storage device automounting. The broken version is 0.5.7-1ubuntu18.2. (Oops – no it doesn’t… see below).

I haven’t figured out why yet (I will post when/if I do) (ahem), but here’s how to downgrade if you’re affected by this and don’t want to wait on a fix. (But this bit might still be useful if you ever need to downgrade apt packages and fix them to a specific version while waiting for a fix that you really need.)

Run the following command as root:

apt-get install hal=0.5.7-1ubuntu18 \
libhal1=0.5.7-1ubuntu18 \
libhal-storage1=0.5.7-1ubuntu18

You’ll need to reboot. You can use an entry in /etc/apt/preferences to keep these packages to this version until new packages that don’t break HAL are available. Create that file if it doesn’t exist and add the following lines:

Package: hal
Pin: version 0.5.7-1ubuntu18
Pin-Priority: 1000

Package: libhal1
Pin: version 0.5.7-1ubuntu18
Pin-Priority: 1000

Package: libhal-storage1
Pin: version 0.5.7-1ubuntu18
Pin-Priority: 1000

You should really read man 5 apt_preferences, and you should monitor what updates to these packages become available. I can’t guarantee that keeping these packages at this version won’t break anything else.
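
You can check the pin has taken hold with apt-cache policy; the output (abridged here, and from memory of the exact format) should show the pinned version as both installed and candidate even when a newer version is in the archive:

$ apt-cache policy hal
hal:
  Installed: 0.5.7-1ubuntu18
  Candidate: 0.5.7-1ubuntu18
  ...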

I have to say that this has soured me a little on Ubuntu. One of the reasons I chose this distro was because I don’t have anything like as much time to myself as I once did and I don’t want to spend what time I do have troubleshooting minor config issues like this on my machine, and Ubuntu has a reputation as a very stable, well-maintained distro. USB drive automounting might not sound like a very important feature, but it’s this kind of thing that will put off non-technical users, or even technical ones with small kids and short tempers. Still, I’ll try and look into this problem and maybe file a bug report if no one else has already.

Update 11th December 2006

I now regret writing that last paragraph. To be honest I had misgivings almost immediately after posting it as I thought it a bit harsh, but I decided to leave it. Anyway, I have now found that the upgrade did not break USB automounting at all – it was the device I was using to test it. My fault. PEBKAC. The device in question is my Sony Ericsson mobile phone, a K750i. Quite a nice phone, but it is a little temperamental at times – prone to occasional crashes and lockups. Normally this gets mounted as a mass storage device when I plug it in as it contains a 128MB Memory Stick Duo, but every so often it fails for reasons unknown (syslog just says Device offlined - not ready after error recovery). Having done some reading up on the way that HAL, D-BUS, udev and gnome-volume-manager work, I upgraded HAL again ready to start troubleshooting only to find that everything was working fine; then I encountered the error with my phone and all became clear. My CF card reader and Seagate external hard disk both work exactly as they should. The phone mounts most times, but occasionally fails. I think I’ll have less luck troubleshooting that than I would HAL et al. My apologies to Ubuntu for my unwarranted harsh words above.