Archives 2009

HowTo: Quickly transfer files from an Ubuntu box to another PC over a network without installing Samba, SSH or FTP.

Let’s say you have an Ubuntu PC and a second Windows PC or Mac. You need to do a quick transfer of a file or two from the Ubuntu box, but you really don’t want to go through the hassle of installing and configuring Samba or FTP just for the sake of transferring a couple of files.

Of course you could use a USB flash drive, but it takes twice as long to copy a file that way because you have to copy it to the flash drive and then copy it again from the flash drive to the destination PC. Besides that, what if you don’t have a flash drive big enough to transfer the files you want? Is there a quick and dirty way to transfer some files over a network without the need to install additional software to bridge the compatibility divide?

Indeed there is…

NOTE: This method is not suited to transferring entire directories of files. While you can transfer multiple files this way, it is intended for small numbers of files, because each transfer must be initiated manually – you cannot multi-select files for transfer unless you archive them into a tarball first.

On the Ubuntu PC, open a terminal and type in the following at the $ prompt:

$ python -m SimpleHTTPServer

If this returns an error when you hit Enter, you are probably using an old version of Python, in which case use the following command instead:

$ python -c "import SimpleHTTPServer;SimpleHTTPServer.test()"

When you hit Enter, you should see a message similar to the following:

Serving HTTP on port 8000 ...

What we have done is start a basic mini web server using Python on port 8000, which will now happily serve files from the directory you launched the Python command in! Now open up a web browser on the other PC and, substituting your Ubuntu PC’s actual IP address, surf to a web address of the form http://<ubuntu-ip>:8000/

Voilà! A full directory listing of the Ubuntu PC is presented that you can now navigate and download files from without needing to install any other software to effect a transfer. Just right-click and save, as you would any ordinary download link on any ordinary website.

If you started the Python command from your Home directory, then the root of the site starts from your Home directory. If you change to another directory before launching the Python command, the web server will serve files from that directory instead. Standard security rules apply – whatever access your Ubuntu user has will be applied to the Python web server. It is not recommended that you run this command as root.
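Note that on newer systems where the python command is Python 3, the SimpleHTTPServer module has been renamed to http.server. A quick self-contained sketch, assuming python3 and curl are available (the port 8123 and the /tmp directory are arbitrary choices):

```shell
# Python 3 renamed SimpleHTTPServer to http.server.
# Start the server in the background, fetch the directory listing, stop it.
cd /tmp
python3 -m http.server 8123 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1                                       # give the server a moment to bind
LISTING=$(curl -s http://127.0.0.1:8123/)     # fetch the directory listing page
kill $SERVER_PID
echo "$LISTING" | grep -o "Directory listing"
```

For an interactive transfer you would of course skip the background/kill plumbing and just run `python3 -m http.server` in the directory you want to serve, exactly as with the Python 2 command above.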

When you’re done, simply press CTRL+C to stop the Python web server on the Ubuntu PC.

Happy file transfers! :)

HowTo: Migrate an Apt-Mirror-generated Ubuntu archive to another mirror source or merge a foreign Apt-Mirror archive into yours

So, you’ve gone and created your very own local Ubuntu mirror using Apt-Mirror, and you’ve come across a situation similar to:

  • You’ve decided to change where you update your Apt-Mirror archive from (eg: you’ve changed ISPs, or feel that another source is more reliable to update from than your current one), or
  • You’re adding another large repository to your Apt-Mirror archive (such as the next version of Ubuntu) and don’t have the quota to download it, so a friend is downloading it for you from their ISP’s free mirror using Apt-Mirror (eg: iiNet and Internode customers can access their respective Ubuntu mirrors for free). You now need to merge it into your own Apt-Mirror archive and have it update from your preferred source afterwards.

So how do you do this? Read on.

Migrating your Apt-Mirror archive to update from a new source

This one is really easy. Let’s say you are updating your Ubuntu mirror from Internode, but now want to get your updates from iiNet. To make this happen you need to change the following files:

  • Your /etc/apt/mirror.list file needs to be updated to point to the new source, and
  • Apt-Mirror’s record of downloaded files needs to be updated so that it doesn’t waste time re-downloading the entire mirror, not realising it already has 99% of the files. This is because Apt-Mirror tracks the files it has downloaded by source URL and filename, not just the filenames themselves.

So let’s go through this.

  1. Open a terminal and load your /etc/apt/mirror.list file into your favourite text editor. In this case I will use the Nano text editor:

    $ sudo nano /etc/apt/mirror.list
  2. In your mirror.list file, the lines for updating the Ubuntu 32-bit and 64-bit versions plus source code from Internode will look similar to this:

    # Ubuntu 9.10 Karmic Koala 32-bit
    deb-i386 karmic main restricted universe multiverse
    deb-i386 karmic-updates main restricted universe multiverse
    deb-i386 karmic-backports main restricted universe multiverse
    deb-i386 karmic-security main restricted universe multiverse
    deb-i386 karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala 64-bit
    deb-amd64 karmic main restricted universe multiverse
    deb-amd64 karmic-updates main restricted universe multiverse
    deb-amd64 karmic-backports main restricted universe multiverse
    deb-amd64 karmic-security main restricted universe multiverse
    deb-amd64 karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala Source
    deb-src karmic main restricted universe multiverse
    deb-src karmic-updates main restricted universe multiverse
    deb-src karmic-backports main restricted universe multiverse
    deb-src karmic-security main restricted universe multiverse
    deb-src karmic-proposed main restricted universe multiverse
  3. We need to change the Internode URL to the iiNet URL, so bring up Nano’s search and replace function by pressing CTRL+\ (Backslash).
  4. Now type in the text to replace, in this case:
  5. Press Enter and you’ll be prompted for the text to replace this with. In this case it’s:
  6. Press Enter and Nano will find the first occurrence of the Internode text string and highlight it for you. If the selection is correct, press “A” on the keyboard to automatically replace “all” occurrences.
  7. Once the update is done, manually go back and visually verify that all the entries were changed correctly.
  8. When you’re happy, save your changes by pressing CTRL+X, then “Y” and then Enter.
  9. Now we need to update the Apt-Mirror record of downloaded files. First, let’s take a backup of the index in case you stuff up. Type in:

    $ sudo cp /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_Backup

    NOTE: the filename “ALL” must be in uppercase
  10. Now let’s bring up the original file into the Nano text editor.

    $ sudo nano /var/spool/apt-mirror/var/ALL
  11. Depending on how large your index file is, there may be a brief delay while Nano opens it. Once it appears, do the same search and replace as you did in steps 3-6. Note: if the editor comes up blank, then you have not opened the index file – check your path spelling in Step 10 and try again.
  12. Save your changes by pressing CTRL+X, then “Y” and then Enter.
  13. Finally, we need to modify Apt-Mirror’s cache of downloaded files so that its directory structure matches that of the new source. In the case of iiNet, you’ll notice its URL has one less ubuntu word in it compared to Internode’s URL, so we’ll need to move some directories to eliminate the extra ubuntu directory.

    At the terminal, move the dists and pool directories of the mirrored files one directory back using the commands:

    $ sudo mv /var/spool/apt-mirror/mirror/ /var/spool/apt-mirror/mirror/
    $ sudo mv /var/spool/apt-mirror/mirror/ /var/spool/apt-mirror/mirror/

  14. Now rename the directory to become the name of the iiNet server:

    $ sudo mv /var/spool/apt-mirror/mirror/ /var/spool/apt-mirror/mirror/
  15. The directory structure now matches iiNet’s server and your ALL file is up to date, so now we can test your changes by launching Apt-Mirror. Launch it manually with:

    $ apt-mirror
  16. Watch the output. First Apt-Mirror will download all the repository indexes from the new location and will compare the files presented in those indexes to your local index of downloaded files (the modified ALL file). It will skip all files already listed as being present and will only download new files not listed in your local mirror. You should find Apt-Mirror advises only a small subset of data to download, perhaps only a few megabytes or no more than a gigabyte or two since your last update under the old setup. If you see that Apt-Mirror wants to download some 30GB or more, then you have made an error in changing the URL in the ALL index file or you incorrectly renamed the mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 10.

    $ apt-mirror
    Downloading 1080 index files using 5 threads...
    Begin time: Wed Dec  9 15:59:23 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:00:45 2009


    1.7 GiB will be downloaded into archive.
    Downloading 998 archive files using 5 threads...
    Begin time: Wed Dec  9 16:02:31 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:54:15 2009

    207.4 MiB in 256 files and 1 directories can be freed.
    Run /var/spool/apt-mirror/var/ for this purpose.

  17. You’re done! Pat yourself on the back. :)
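Incidentally, the Nano search-and-replace in the steps above can also be done non-interactively with sed. The hostnames below are placeholders, not the actual mirror URLs – substitute your real old and new sources. The sketch runs against a scratch file; for the real thing, run the sed line (with sudo) against /etc/apt/mirror.list and /var/spool/apt-mirror/var/ALL after taking backups:

```shell
# Placeholder hostnames -- substitute your actual old and new mirror URLs.
OLD="old.mirror.example.com/ubuntu/ubuntu"
NEW="new.mirror.example.com/ubuntu"

# Demonstrated here on a scratch copy of an ALL-style index file:
printf 'http://%s/dists/karmic/Release\n' "$OLD" > /tmp/ALL_demo
# "|" as the sed delimiter avoids having to escape the slashes in the URLs.
sed -i "s|$OLD|$NEW|g" /tmp/ALL_demo
cat /tmp/ALL_demo
```

The same `sed -i "s|$OLD|$NEW|g"` line, pointed at the real files with sudo, replaces steps 3-8 and 10-12 in one hit each.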

Inserting a foreign Apt-Mirror archive into your own archive

This one is slightly more involved, but is not difficult. In the case of a full Ubuntu Mirror, let’s say you were adding an Ubuntu Karmic mirror archive taken from iiNet’s mirror servers into your own local Apt-Mirror archive that featured only Intrepid and Jaunty, both of which you are updating from Internode’s mirror servers. There are some obstacles we need to overcome such as:

  • Continuing to perform future updates for the Karmic repository from Internode rather than iiNet.
  • The foreign iiNet Karmic archive contains lots of files that you already have in your own archive – files that are common between all releases of Ubuntu. How do you filter those ones out and only copy the new files?
  • Finally, how do you update the Apt-Mirror index file with the potentially thousands of new entries from the foreign archive? How do you avoid duplicate lines potentially confusing Apt-Mirror?

Follow these steps:

  1. First ensure that you have the full copy of the foreign Apt-Mirror archive supplied on a suitable storage medium. Aside from the mirror directory itself (usually under /var/spool/apt-mirror/mirror), you must have a copy of its /var/spool/apt-mirror/var/ALL file. It does not matter if the foreign mirror is not completely up to date, as Apt-Mirror will catch up with what is missing when you run the next update.
  2. Let’s prepare your local Apt-Mirror installation for grabbing Ubuntu Karmic from our preferred source first. We need to load up the /etc/apt/mirror.list file into your favourite text editor and add the entries relevant to our new repository that we are mirroring. I will use the Nano text editor for this, but you can use any text editor you like:

    $ sudo nano /etc/apt/mirror.list
  3. Now we add the entries relevant to Ubuntu Karmic for Apt-Mirror to use. In this case, I am going to update Ubuntu Karmic from Internode and I will be grabbing both the 32-bit and 64-bit versions plus the source code (reflecting what is already included in the foreign archive on my storage medium, or Apt-Mirror will be doing a LOT of downloading the next time you run it), so I need to add the following entries:

    # Ubuntu 9.10 Karmic Koala 32-bit
    deb-i386 karmic main restricted universe multiverse
    deb-i386 karmic-updates main restricted universe multiverse
    deb-i386 karmic-backports main restricted universe multiverse
    deb-i386 karmic-security main restricted universe multiverse
    deb-i386 karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala 64-bit
    deb-amd64 karmic main restricted universe multiverse
    deb-amd64 karmic-updates main restricted universe multiverse
    deb-amd64 karmic-backports main restricted universe multiverse
    deb-amd64 karmic-security main restricted universe multiverse
    deb-amd64 karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala Source
    deb-src karmic main restricted universe multiverse
    deb-src karmic-updates main restricted universe multiverse
    deb-src karmic-backports main restricted universe multiverse
    deb-src karmic-security main restricted universe multiverse
    deb-src karmic-proposed main restricted universe multiverse

  4. Save your changes and exit the editor using CTRL+X, then “Y” and then Enter.
  5. Make a backup copy of the foreign mirror’s /var/spool/apt-mirror/var/ALL file, so you can revert to it if you make a mistake. Call the copy something like ALL_Backup.
  6. Now open the foreign mirror’s original /var/spool/apt-mirror/var/ALL file into your favourite text editor.
  7. Use your text editor’s search and replace function (in Nano, press CTRL+\) to replace the URL of each entry in the foreign mirror’s ALL file with the URL of the mirror you will be performing your future updates from. In the case of changing iiNet URLs to Internode URLs, you would replace any occurrence of the text string:
  8. Once updated, save your changes and close your text editor.
  9. Now we need to merge the modified foreign ALL file into the ALL file from your local Apt-Mirror setup. First up, rename the modified foreign ALL file so we don’t confuse it. For this tutorial, I will assume your foreign mirror is supplied on an external USB hard-drive called “myhdd” and is simply a copy of the foreign system’s /var directory in its entirety. The following will rename the file from ALL to ALL_modified in a terminal:

    $ mv /media/myhdd/var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified
  10. Now concatenate the original ALL file and the modified foreign mirror’s ALL_modified file into one new file called ALL_new in your local Apt-Mirror’s var directory. Concatenating alone would leave duplicate lines, so we also need to sort the result so that any lines present in both the local and foreign ALL files are brought together. We can sort the concatenated content and remove duplicate lines in one hit with:

    $ cat /var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified | sort | uniq | sudo tee /var/spool/apt-mirror/var/ALL_new > /dev/null

    The cat part of the command simply joins the content of /var/spool/apt-mirror/var/ALL and /media/myhdd/var/spool/apt-mirror/var/ALL_modified into one big stream, but before it’s written to a physical file, the concatenated data is “piped” using the pipe symbol “|” into the sort command, which sorts it into alphabetical order and so groups duplicate lines together. That sorted data is then piped again into the uniq command, which automagically removes all duplicate lines, leaving one unique copy of each line. Finally, the output from uniq is piped into sudo tee, which writes it to our physical destination file at /var/spool/apt-mirror/var/ALL_new. We use sudo tee rather than a plain “>” redirection here because only the root and apt-mirror users can write to the /var/spool/apt-mirror/var directory, and a “>” redirection is performed by your own (unprivileged) shell, not by sudo.

    Alternatively, we can replace the “| sort | uniq” part with “| sort -u”, which does exactly the same thing, since the sort command has its own “unique” functionality as well. I’ll leave it up to you which way you’d like to go.
  11. Check your new /var/spool/apt-mirror/var/ALL_new file and you will find it now contains all your local and foreign mirror’s entries in alphabetical order and with no duplicate lines. If you’d like to see how this worked, re-work Step 10 without the sort and uniq commands or the pipe characters and see how it affects the output file. Try adding just the sort or just the uniq command too.
  12. Now rename your local mirror’s original ALL file because we’re about to replace it with the new one:

    $ sudo mv /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_old
  13. Now rename the new ALL_new file to take the place of the old one:

    $ sudo mv /var/spool/apt-mirror/var/ALL_new /var/spool/apt-mirror/var/ALL
  14. Right, that’s the index taken care of. We’re nearly done! Now we only have to merge the foreign mirror’s actual files into your local mirror. Once again, for the purposes of this tutorial I’m going to assume they are stored on an external USB hard-drive called “myhdd” as a copy of the foreign system’s entire /var directory, so the path to the foreign mirror’s files will be /media/myhdd/var/spool/apt-mirror/mirror – got that? Let’s change to that directory now in a terminal to save us typing so much:

    $ cd /media/myhdd/var/spool/apt-mirror/mirror
  15. Now, the observant among you may have noticed that Apt-Mirror stores its mirrored files using a directory structure that follows the path of the URL the data is obtained from, so in the case of a mirror from iiNet there is a directory here named after iiNet’s server. You can see it by using the ls command to list the directory contents:

    $ ls -l
    -rw-r--r--  1 apt-mirror apt-mirror   198599 2009-12-09 10:19 access.log
    -rw-r--r--  1 apt-mirror apt-mirror   544373 2009-12-01 06:45 access.log.1
    -rw-r--r--  1 apt-mirror apt-mirror  1863467 2009-11-03 06:44 access.log.2
    -rw-r--r--  1 apt-mirror apt-mirror  1865334 2009-10-01 06:28 access.log.3
    -rw-r--r--  1 apt-mirror apt-mirror 18152891 2009-09-01 06:42 access.log.4
    -rw-r--r--  1 apt-mirror apt-mirror     6135 2009-12-09 06:46 error.log
    -rw-r--r--  1 apt-mirror apt-mirror    33898 2009-12-01 06:45 error.log.1
    -rw-r--r--  1 apt-mirror apt-mirror   124512 2009-11-03 06:44 error.log.2
    -rw-r--r--  1 apt-mirror apt-mirror   554851 2009-10-01 06:28 error.log.3
    -rw-r--r--  1 apt-mirror apt-mirror   831227 2009-09-01 06:42 error.log.4
    drwxr-xr-x  3 apt-mirror apt-mirror     4096 2008-09-11 02:00
  16. We need to modify the foreign directory names and structure to exactly match that of the URL path your local mirror updates from. Starting with the obvious, we need to rename the directory to be with:

    $ sudo mv
  17. Next we need to create an extra subdirectory called “ubuntu” because Internode’s URL path is and iiNet’s path is only:

    $ sudo mkdir
  18. Now we need to move the “dists” and “pool” directories under the first “ubuntu” directory to be under the second “ubuntu” directory:

    $ sudo mv
    $ sudo mv

  19. With the directory structure and directory names all amended, we are now ready to merge the foreign mirror’s files into your local mirror. We will do this using rsync. This tool is traditionally used to make backups, and is indeed used to keep the official worldwide Ubuntu mirrors 1:1 with the master archive, but in our case we are using it to add the “missing” files from the foreign mirror to the local mirror while skipping the files that are already present. That means instead of copying around 60GB of data from the foreign mirror, we’ll only copy a fraction of it, saving time and drive space:

    $ sudo rsync -avz --progress /media/myhdd/var/spool/apt-mirror/mirror/ /var/spool/apt-mirror/mirror/
  20. The “--progress” parameter lets you see which file is being copied. You may see a large number of directory names whizz past where those directories don’t contain any files that differ between your current Intrepid and Jaunty mirror and the Karmic mirror you are merging. Unfortunately rsync does not provide an overall progress indicator, only the progress of the file it is currently working on. This process can take several hours to complete depending on how much data needs to be copied and the speed of the storage medium containing the foreign mirror (which, if it’s a USB HDD, can take a looooong time).
  21. Once rsync has finished, it will give a summary of what was copied. If you were to run the rsync command from Step 19 again, you would see it finish rather quickly because no data is changed or missing anymore.
  22. Now we just quickly ensure that all the merged foreign files belong to the Apt-Mirror user with:

    $ sudo chown apt-mirror:apt-mirror -R /var/spool/apt-mirror
  23. And now we are ready to try a manual update to see if it all worked. If you now execute the Apt-Mirror application manually, you should now see that it reads in the new repository entries you added into your /etc/apt/mirror.list file in Step 3 and will compare the files presented in those indexes to your local index of downloaded files (the newly modified ALL file). It will skip all files already present and will only download new files not present in your local mirror. You should find Apt-Mirror advises only a small subset of data to download, perhaps only a few megabytes or a gigabyte or two since your last update under the old setup and depending on how old the foreign archive was. If you see that Apt-Mirror wants to download about 30GB or more, then you have made an error in changing the URL in the ALL index file or the renaming of mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 5.

    $ apt-mirror
    Downloading 1080 index files using 5 threads...
    Begin time: Wed Dec  9 15:59:23 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:00:45 2009


    1.7 GiB will be downloaded into archive.
    Downloading 998 archive files using 5 threads...
    Begin time: Wed Dec  9 16:02:31 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:54:15 2009

    207.4 MiB in 256 files and 1 directories can be freed.
    Run /var/spool/apt-mirror/var/ for this purpose.

  24. If all is good, then pat yourself on the back. You’ve successfully merged the foreign repository, and it will update from your preferred ISP’s mirror from now on. :)
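The index merge from steps 10-13 can be seen in miniature on a couple of scratch files (in the real run, the inputs are your local ALL and the foreign ALL_modified, and the output replaces /var/spool/apt-mirror/var/ALL):

```shell
# Two overlapping index files standing in for the local and foreign ALL files:
printf 'line-a\nline-b\nline-c\n' > /tmp/ALL_local
printf 'line-b\nline-c\nline-d\n' > /tmp/ALL_foreign

# sort accepts multiple input files, so -u concatenates, sorts and
# de-duplicates in a single step:
sort -u /tmp/ALL_local /tmp/ALL_foreign > /tmp/ALL_new

cat /tmp/ALL_new   # prints line-a, line-b, line-c, line-d -- duplicates collapsed
```

The duplicated line-b and line-c appear only once in the output, which is exactly why Apt-Mirror won’t be confused by entries common to both indexes.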

HowTo: Fix a missing eth0 adapter after moving Ubuntu Server from one box to another.

Scenario: You have a box running Ubuntu Server. Something happens to the box and you decide to move the hard-drive to another physical machine to get the server back up and running. The hardware is identical on the other machine, so there shouldn’t be any issues at all, right?

The machine starts up fine, but when you try to hit the network, you can’t. Closer inspection using the ifconfig command reveals that there is no “eth0” adapter configured. Why?

Here’s how to fix it.

Ubuntu Server keeps tabs on the MAC address of the configured ethernet adapter. Unlike Ubuntu Desktop, you can’t simply change network cards willy-nilly – while Ubuntu Server does detect and automatically set up new cards, it won’t automatically replace an adapter already configured as eth0 with another one, so you need to tell Ubuntu Server that you no longer need the old adapter.

This problem can also appear if you have a virtual machine such as one from Virtualbox, and you move or copy it from one host to another without ensuring that the MAC address configured for that VM’s ethernet adapter is 100% identical to the previous one.

These instructions were done with Ubuntu Server 9.04 Jaunty Jackalope in mind, but should apply to just about any release.

  1. Since you can’t SSH in, you will need to login directly on the Ubuntu Server console as an appropriate user with sudo rights.
  2. Once logged in, type in the following and hit Enter:

    $ sudo nano /etc/udev/rules.d/70-persistent-net.rules
  3. You are now presented with the Nano text editor and some info that looks similar to the following:

    # This file was automatically generated by the /lib/udev/write_net_rules
    # program, run by the persistent-net-generator.rules rules file.
    # You can modify it, as long as you keep each rule on a single
    # line, and change only the value of the NAME= key.
    # PCI device 0x8086:0x1004 (e1000)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:03:27:c2:b4:eb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

  4. Delete the last two lines, or simply comment out the SUBSYSTEM line at the end. This is a rule defining which MAC address should be explicitly assigned to “eth0”. Since you no longer have an ethernet card with the specified MAC address in this machine (it’s in the old PC, remember), Ubuntu Server effectively ignores your new ethernet adapter because its MAC address does not match the defined rule for “eth0”.
  5. Once you’ve made your changes, press CTRL + X and then Y and then Enter to save your changes.
  6. Now reboot your box with:

    $ sudo reboot
  7. Upon reboot, Ubuntu Server will detect the “new” ethernet adapter in your PC and will automatically write a new rule into the /etc/udev/rules.d/70-persistent-net.rules file, thus enabling networking over eth0 for your server.
  8. To verify that the new adapter is working, type in:

    $ ifconfig

    …and you should see eth0 now listed with your defined IP address.
  9. Test remote connectivity to the server and if all is well, then pat yourself on the back. You’re done.
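If you’d rather neutralise the stale rule non-interactively, sed can comment it out for you. The sketch below works on a scratch copy of the rules file, using the sample MAC address from above; for the real file, run the sed line with sudo against /etc/udev/rules.d/70-persistent-net.rules before rebooting:

```shell
# Build a scratch copy of a persistent-net rules file:
cat > /tmp/70-persistent-net.rules <<'EOF'
# PCI device 0x8086:0x1004 (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:03:27:c2:b4:eb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
EOF

# Comment out every rule that pins a MAC address to eth0:
sed -i '/NAME="eth0"/ s/^/# /' /tmp/70-persistent-net.rules

grep '^#' /tmp/70-persistent-net.rules
```

On the real system you could equally delete the whole file – udev regenerates it on the next boot – but commenting the line out keeps a record of the old adapter’s MAC address, which can be handy.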

HowTo: Restore the Windows Master Boot Record (without using a Windows CD) using Ubuntu Karmic.

You know how it is – you take a client’s Windows based machine, do a dual-boot installation of Ubuntu (which replaces the Windows Master Boot Record, or MBR, with GRUB and sets up an option to boot Ubuntu or Windows) so the client can evaluate Ubuntu, but then later on for whatever reason, Ubuntu is no longer wanted. It’s removed and you need to restore the system’s ability to natively boot Windows directly without a GRUB menu.

You’re probably thinking “why the hell would anyone want to do that?!”… well, the fact of the matter is that you sometimes come across a client who is simply set in their ways and refuses to use anything but Windows, so yes – sometimes you need to restore the Windows MBR. But how do you do that when you don’t have a Windows CD handy?

Well, here’s how to do it using nothing but an Ubuntu 9.10 (or later) LiveCD.

It’s a little-known fact that the Windows bootloader is nothing special. In fact it contains nothing proprietary to Windows at all. All the Windows MBR code does is look for the partition marked as “bootable” or “active” and transfer control of the boot process to it.

And would you know it? The Ubuntu LiveCD has a binary image of a generic open source bootloader that does just that!

  1. Boot your soon-to-be-Windows-only machine using the Ubuntu 9.10 (or later) LiveCD. Doesn’t matter if it’s the 32-bit or 64-bit version.
  2. Once booted on the LiveCD, open a terminal by going to the Applications menu and then choose Accessories and then Terminal.
  3. Find out what the designation of the Windows drive is (generally it will be the first drive, eg: /dev/sda or /dev/hda). If you are not sure, issue the command:

    $ sudo fdisk -l

    …and review the output, looking for your NTFS Windows partition. Make note of the drive that partition resides on (not the partition itself), eg: “/dev/sda”, not “/dev/sda1”.
  4. Now type in the following (remembering to substitute the correct drive device name for your setup in place of “/dev/sda”):

    $ sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda

    …which will write the image of a standard MBR contained in the /usr/lib/syslinux directory of the LiveCD environment to the first hard-drive, overwriting GRUB.

    WARNING: Do NOT use a partition designation, eg: “sda1” or “sda2”, etc. That would overwrite the start of that partition, effectively destroying data. The MBR exists at the start of the drive only, so specify “sda” with no number on the end.
  5. Shutdown and reboot. Windows should now start “natively” without GRUB appearing at all.
  6. Normally I’d say “pat yourself on the back” here, but it’s Windows… ;-)
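One extra precaution worth taking (my own suggestion, not part of the original steps): back up the first 512 bytes of the drive – the MBR – before overwriting it, so GRUB can be restored later if needed. Demonstrated here on a scratch disk image; on the real machine you would use if=/dev/sda (with sudo) instead of the image file:

```shell
# Create a fake 2 KiB "disk" so the sketch is safe to run anywhere:
dd if=/dev/zero of=/tmp/disk.img bs=512 count=4 2>/dev/null

# Copy exactly one 512-byte sector -- the MBR -- into a backup file:
dd if=/tmp/disk.img of=/tmp/mbr_backup.bin bs=512 count=1 2>/dev/null

wc -c < /tmp/mbr_backup.bin   # prints 512
```

To restore the backup later, reverse the if= and of= arguments (again with bs=512 count=1 so only the MBR sector is touched).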

New server!

I finally got around to upgrading the server that serves this very page you’re reading to Ubuntu Jaunty today, up from Ubuntu Hardy. Yes I know, maybe I should have waited for Karmic, or even Lucid, but the biggest reason why I did this was that I’ve migrated this server from the little Pentium 4 Shuttle XPC that was in use before onto a Virtualbox 3.0.6 headless VM hosted on an Ubuntu Jaunty box running on top of an Intel E5200 CPU.

You’re probably wondering why I’d use an E5200 when it doesn’t have hardware virtualisation features built in? Well, the server consumes very little juice compared to the Pentium 4 (26% less in fact), it’s more powerful, it’s cheap, cheerful, produces less ambient heat, is a heck of a lot quieter and there’s loads of CPU time left over to do other things outside of the VM on the host side.

CPU wise, when the server gets really busy I’ve seen spikes as high as 50%, but it never exceeds that, so as far as I’m concerned, it’s fine. If I ever need this box to do anything more significant, I’ll upgrade the CPU to something that does have VT-x later on.

If you see anything unusual/missing/dead from today onwards, please let me know in a comment!

HowTo: Configure Ubuntu to be able to use and respond to NetBIOS hostname queries like Windows does

Users in the Windows world are very used to referencing PCs by their NetBIOS names instead of their IP addresses. If a PC has a dynamic (DHCP-assigned) IP address and its hostname (computer name) is “gordon”, Windows users can happily jump into a command line or an Explorer window and ping the name “gordon”, which will magically resolve to that PC’s current IP address.

If the remote host has no entry in your local Hosts file and no DNS entry associating a name with its IP address, Ubuntu can only use the IP address of that PC to communicate with it, which means you have to remember what that IP address is with the feeble grey matter in your head. Likewise, Ubuntu will not respond to a Windows PC pinging its NetBIOS name, because Ubuntu does not use NetBIOS at all by default and so will ignore such requests.

So how do we get Ubuntu to resolve NetBIOS names like Windows? And how can we allow Windows to ping Ubuntu like another Windows PC? Read on…

Let’s illustrate the problem first. You’ll need a Windows PC on your network to test this. For this article, the Ubuntu PC will be called “gordon” and the Windows PC will be called “alyx”.

On either PC, if you open a terminal or Command Line window and ping the opposing machine, eg:

$ ping alyx


C:\> ping gordon

You get an error stating that the host cannot be found. Now in the case of Windows, if you were to ping another Windows PC instead of an Ubuntu PC, you can ping its name with no problem.

Let’s sort this out, shall we?

Allowing Ubuntu to ping Windows NetBIOS names

Ubuntu is set up for Linux use, not Windows use, so we need to install a package that allows Ubuntu to mix in more readily with Windows networks and use NetBIOS. This package is called “winbind”.

  1. Open a terminal and type in the following at the terminal prompt:

    $ sudo apt-get install winbind
  2. Once installed, we need to tell Ubuntu to use WINS (as provided by winbind) to resolve host names. Type in:

    $ sudo gedit /etc/nsswitch.conf

    …which will open the file into the Gnome Editor.
  3. Scroll down to the line that starts with “hosts:”. In Ubuntu Jaunty, it looks similar to this:

    hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
  4. Add the word “wins” to the end of this line so that it now looks like:

    hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4 wins
  5. Save and exit the editor.
  6. Now let’s ping the name of our Windows box again.

    $ ping alyx

    …and it now resolves!
  7. Pat yourself on the back.
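If you prefer not to open an editor, the edit from step 4 can be done with a one-line sed command. This is just a sketch shown against a scratch copy of the file; point it at /etc/nsswitch.conf with sudo to do it for real:

```shell
# Make a scratch copy of the hosts: line to demonstrate on
# (on a real system, operate on /etc/nsswitch.conf with sudo).
printf 'hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4\n' > nsswitch.conf.demo

# Append "wins" to the hosts: line, exactly as done by hand in step 4.
sed -i '/^hosts:/ s/$/ wins/' nsswitch.conf.demo

# Confirm the change took effect.
grep '^hosts:' nsswitch.conf.demo
```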

Allowing Windows to ping Ubuntu NetBIOS names

This is just one half of the equation. We now need to allow Windows to ping the Ubuntu PC by its NetBIOS name, which requires Ubuntu to recognise and respond to that request. We need to set up a server daemon to do this. In Ubuntu, that server daemon is called Samba.

  1. Installing Samba is simplicity itself. Open a terminal and type in:

    $ sudo apt-get install samba
  2. Once that has finished, your Ubuntu PC will automagically respond to all NetBIOS queries for its hostname straight away, and that’s not just from Windows machines, but other Ubuntu machines (configured with the “winbind” package) as well.
  3. Pat yourself on the back again. Smilie: :)

HowTo: Deal with BD+ copy protection when ripping Blu-ray titles using Ubuntu

A fair while back now, I wrote an article detailing how to decode Blu-ray titles using Ubuntu and an LG GGC-H20L Blu-ray optical drive.

This article detailed how to decrypt just about every movie under the sun except for a newer type of protection called “BD+” which I never got around to supplementing my original article with.

What is “BD+” protection? Well in short, it’s the deliberate corruption of random parts of the video track of the movie (OK – that is a highly simplified definition, as BD+ protection can do a lot more than that, but the end result is the same – to prevent unauthorised playback, which includes ripping). The idea of BD+ is that when you rip the title, you can still watch the movie, but with some or all of the screen corrupt at various stages, which well and truly ruins the movie-watching experience, especially since you paid good money for the disc and should not be forced to buy a dedicated consumer Blu-ray player when you’ve got a perfectly good PC that can do the same job.

But hang on, if the movie is deliberately corrupt, then how come it plays fine in a stand-alone consumer Blu-ray player or PlayStation3 console?

Well, let me tell you about that and how to get around it yourself.

I have to give credit to the movie studios for this one. It’s a simple, and annoying, method of protection. But as with anything, it was eventually reverse-engineered and broken, and neat little tools were developed to allow us consumer types to backup, or watch in our preferred way, our movies bought with our hard-earned cash.

So what’s this BD+ thing all about? Basically, after the movie is mastered and just before it is pressed to discs, an extra step is taken whereby random parts of the movie data stream are deliberately exchanged with random data or removed altogether, thus corrupting the video stream. A record is kept, however, of what has been changed – a table listing where, when and what data needs to be put back into the movie stream in order to play it back in its original uncorrupted form. This table is called a “conversion table”, and it is processed by your Blu-ray player while you watch the movie, with the correct data substituted back into the video stream before the image hits your screen, resulting in a proper uncorrupted picture.
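To make the idea concrete, here’s a toy sketch of that repair process. This is nothing like the real BD+ format – it just illustrates “write the recorded good bytes back at the recorded offset”:

```shell
# Create a 16-byte "video stream".
printf 'GOODDATAGOODDATA' > stream.bin

# Mastering step: deliberately corrupt 4 bytes at offset 5.
printf 'XXXX' | dd of=stream.bin bs=1 seek=5 conv=notrunc 2>/dev/null

# Our one-entry "conversion table" records: at offset 5, the original bytes were "ATAG".
# Playback/ripping step: patch the good bytes back in.
printf 'ATAG' | dd of=stream.bin bs=1 seek=5 conv=notrunc 2>/dev/null

cat stream.bin   # → GOODDATAGOODDATA
```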

An example of a corrupted video stream showing the BD+ Protection in full effect.
An example of the repaired video stream using the Conversion Table.

So how do we get around BD+? Well, all we have to do is follow this conversion table ourselves and correct the corrupted data as the title is decrypted.

As I showed in my previous article, the DumpHD application is brilliant, and its author KenD00 has extended it to allow the “plugging in” of another program called the “BD VM Debugger”. What this program does is simple – it executes the BD+ virtual machine that produces the conversion table, in concert with the normal decryption process, patching up the stream as it goes just as your normal BD player would. The end result is a clean decryption with no corrupt video stream.

This tutorial was written using Ubuntu Jaunty but should work with Intrepid and should definitely work with Karmic and beyond as well.

DISCLAIMER: This article describes decrypting BD titles using an Intel or AMD based PC with Ubuntu Linux. At this time of writing you cannot use Ubuntu installed on a PlayStation3 console to deal with BD+ copy protection because the BD VM Debugger and AACS Keys applications are not available for the PPC processor used by the PS3.

So let’s set this up, but first – since my last article, DumpHD has been updated to 0.61 so let’s upgrade this first. Go and download yourself a copy.

  1. Extract the archive out by either double-clicking on it or via the terminal. You should get a “dumphd-0.61″ directory.
  2. If you are upgrading from an older version of DumpHD, copy over the “KEYDB.cfg” file, overwriting the archive copy. No point losing your collection of keys accumulated thus far. Smilie: :)
  3. You’re done for this bit.

The AACSKeys program (which extracts the decryption key for the Blu-ray title and can automatically update your “KEYDB.cfg” file for you when you insert a new Blu-ray title) has also been updated to 0.4.0c since my last article, so go download yourself a copy of that as well.

  1. Extract the archive out by either double-clicking on it or via a terminal. You should get an “aacskeys-0.4.0c” directory.
  2. Copy the “ProcessingDeviceKeysSimple.txt” and “HostKeyCertificate.txt” into the “dumphd-0.61″ directory.
  3. Copy over the “” file located in the “/lib/linux32/” OR “/lib/linux64/” directories (depending on which architecture you’re using) to the “dumphd-0.61″ directory. Do NOT copy or create the “/lib/linux32″ or “/lib/linux64″ directories themselves. Copy the library file only.
  4. You’re done for this bit.
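The copy steps for both tools can be sketched in the terminal like so. The directory names come from this article; the library filename is hypothetical (use whichever .so file shipped in your aacskeys archive), and the mkdir/touch lines only fabricate the extracted layout so the sketch is self-contained:

```shell
# Fabricate the extracted-archive layout for illustration only.
mkdir -p aacskeys-0.4.0c/lib/linux32 dumphd-0.61
touch aacskeys-0.4.0c/ProcessingDeviceKeysSimple.txt \
      aacskeys-0.4.0c/HostKeyCertificate.txt \
      aacskeys-0.4.0c/lib/linux32/libaacskeys.so   # hypothetical library name

# The actual copy steps: the two key files, plus the library file on its own
# (do NOT recreate the lib/linux32 or lib/linux64 directories in dumphd-0.61).
cp aacskeys-0.4.0c/ProcessingDeviceKeysSimple.txt \
   aacskeys-0.4.0c/HostKeyCertificate.txt dumphd-0.61/
cp aacskeys-0.4.0c/lib/linux32/*.so dumphd-0.61/   # use lib/linux64 on a 64-bit install

ls dumphd-0.61
```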

Right, let’s get the BD VM Debugger installed. As of this writing, the current version is 0.1.5. Go and download yourself a copy.

  1. This archive is provided as a 7zip file. Ubuntu does not have out-of-the-box support for this archive format, so install it first with:

    $ sudo apt-get install p7zip-full
  2. Once installed, extract the archive either by double-clicking on it like any normal archive, or via the terminal as follows:

    $ 7z e bdvmdbg-
  3. Copy everything into the “dumphd-0.61” directory except the “changelog.txt”, “readme.txt” and “” files, since you don’t really need them (though there’s no harm in copying them anyway).
  4. That’s it!

You should now have at least 17 files and two directories inside the “dumphd-0.61” directory (if you are setting up these tools for the first time, you will only have 15 files, as two of them – conv_tab.bin and hash_db.bin – are generated by DumpHD in conjunction with the BD VM Debugger).

The prepared DumpHD folder with the tools we need.

Now let’s try decrypting a BD+ protected Blu-ray title. In this example, I will use the Australian release of “Day Watch”, the sequel to the Russian epic “Night Watch”.

The BD+ Protected “Day Watch” Blu-ray title I am ripping.

NOTE: Your ability to decrypt a given Blu-ray title, BD+ protected or not, will ultimately depend on the MKB version of the disc. As of this writing, DumpHD can only decrypt up to MKB version 10. Newer discs using version 11 or later can only be decrypted once suitable decryption keys are uncovered and added to the “ProcessingDeviceKeysSimple.txt” file in the “dumphd-0.61″ directory.

Obtaining the decryption key of a Blu-ray title also requires the player authentication mechanism of your Blu-ray drive to be bypassed, or the use of a drive that deliberately lacks this feature, such as some imported drives from China. In the case of my LG GGC-H20L drive, I used a modified firmware so that the drive always gave up the disc’s decryption key regardless of which player certificate I used – blacklisted or not.

  1. Start the DumpHD program by double-clicking on the “” icon. You will be asked if you want to run the script file. Click on the “Run” button.
Starting the DumpHD application.
  2. When the DumpHD GUI appears, make a note of the messages in the bottom pane to ensure that AACSKeys and the BD VM Debugger were found and loaded OK. You should see the following information:

    DumpHD 0.61 by KenD00
    Opening Key Data File… OK
    Initializing AACS… OK
    Loading aacskeys library… OK
    aacskeys library 0.4.0 by arnezami, KenD00
    Loading BDVM… OK
    BDVM 0.1.5

The DumpHD Interface
  3. Insert the Blu-ray title into your Blu-ray drive.
  4. Next to the “Source” section at the top-right of the DumpHD window is a “Browse” button. Click on it.
  5. Navigate to the path of your Blu-ray drive (generally “/media/cdrom” will work fine) and hit the OK button.
Choosing a source to rip from.
Setting up the ripping source
  6. DumpHD will read the disc and pass it through AACSKeys to identify the title’s decryption key. If it is successful, it will output some data about the disc in the lower pane. In the case of my Day Watch title, it shows the following:

    Initializing source…
    Disc type found: Blu-Ray BDMV
    Collecting input files…
    Source initialized
    Identifying disc… OK
    DiscID : 73886D08811073F45AD8C75012689097E17EBD3C
    Searching disc in key database…
    Disc found in key database

Identifying the disc and getting the decryption keys to rip with
  7. This is good. We can decrypt this. If the title is not one you have ripped before, you have the option to click on the “Title” button at the top-left of the DumpHD window to give the movie a name in your Key Database.
  8. In the “Destination” section on the right, click on the “Browse” button.
  9. Choose a place to dump the decrypted disc to. Note that most titles will dump at least 20GB worth of data and in some cases 50GB. Ensure that you have enough hard-drive space in the location you choose to dump to.
  10. We’re ready to rock and/or roll. Click on the “Dump” button and decryption will begin, automatically executing the BD VM and applying the Conversion Table to correct the deliberate corruption in the video stream. Here’s a small extract of what you will see in the lower pane of the DumpHD window:

    AACS data processed
    Initializing the BDVM… OK
    Executing the BDVM… OK
    Parsing the Conversion Table… OK
    Processing: BDMV/BACKUP/CLIPINF/00000.clpi
    Processing: BDMV/BACKUP/CLIPINF/00001.clpi
    Processing: BDMV/BACKUP/CLIPINF/00002.clpi

Beginning the ripping process
  11. And after a while it will finish with something like:
    Processing: BDMV/STREAM/00211.m2ts
    Searching CPS Unit Key… #1
    0x0000000000 Decryption enabled
    Processing: BDMV/STREAM/00212.m2ts
    Searching CPS Unit Key… #1
    0x0000000000 Decryption enabled
    Processing: BDMV/index.bdmv
    Disc set processed

Finished decrypting the Blu-ray title.
  12. That’s it! You’ve successfully decrypted the disc and fixed up the corrupted video track. Identify and play back the actual movie M2TS file using a player like MPlayer or VLC, and you should find that it contains no corruption whatsoever. In the case of Day Watch, the movie file was BDMV/STREAM/00012.m2ts, identifiable simply because it was the largest file in the directory. Using MPlayer, you can play this file with:

    $ mplayer -fs BDMV/STREAM/00012.m2ts

    Thankfully this title does not have the movie broken up into multiple files (I’ll be writing another article soon showing you how to deal with multi-part movies).
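Since the main movie is usually just the largest .m2ts file in the rip, you can spot it from the terminal instead of eyeballing the directory. A small sketch (it fabricates a couple of dummy stream files so it is self-contained; run the find line against your real dump directory):

```shell
# Fabricate a dump directory with two dummy stream files for illustration.
mkdir -p rip/BDMV/STREAM
head -c 10  /dev/zero > rip/BDMV/STREAM/00001.m2ts
head -c 100 /dev/zero > rip/BDMV/STREAM/00012.m2ts

# List .m2ts files by size; the last line is the likely main movie.
find rip -name '*.m2ts' -printf '%s %p\n' | sort -n | tail -1
# → 100 rip/BDMV/STREAM/00012.m2ts
```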

HowTo: Pair your Bluetooth mobile phone with Ubuntu Jaunty for file transfers etc.

Following up my previous article of how to pair your Bluetooth mobile phone with Ubuntu Intrepid, I present this updated article for pairing your mobile phone using the updated version of the Bluez Bluetooth stack and the newer and better Blueman applet for Jaunty which greatly simplifies the process of pairing Bluetooth devices and transferring files to your mobile phone.

First up, you need to follow the first 15 steps of my guide on how to set up a Nokia N95 mobile phone as a Mobile Broadband Device, because we need to update the version of the Bluez Bluetooth stack and pair your mobile phone. Once you get to step 15, where it asks about connecting the phone as a dial-up networking device, you can either continue setting that up all the way through to Step 22 (after all, you might find DUN to be of genuine use to you if you’re a Mobile Internet kind of guy), or choose “Don’t connect” instead, finish at Step 15 and continue on with this article.

Once your Bluetooth stack is updated and your mobile phone is paired, transferring files is simplicity itself:

  1. Do a left-mouse click on the Bluetooth icon in your system tray. The Bluetooth Devices window will appear showing you your available or previously paired devices. Your mobile phone will be one of them.
The Bluetooth icon in the system tray
  2. Do a right-mouse click on your mobile phone and choose “Browse” from the menu that appears (or select the mobile phone with the left-mouse button and then click on the “Browse” button in the toolbar).
Browsing the Bluetooth device

NOTE: If you get a “Could not display ‘obex://[xxxxxxx]/’.” error when trying to browse, it means that the Bluetooth connection has not re-established itself between your PC and your phone after a previous pairing (ie: “Host is down”). To fix this, click on the “Search” button in the toolbar, which will “awaken” your phone’s Bluetooth awareness, and then choose “Browse device” again. You should also set your PC and phone to be “trusted” or “authorised” on both sides to prevent timeouts caused by either end asking for permission to establish the connection.

  3. If your PC is set up as being “trusted” or “authorised” on your phone, within a second or so a Nautilus window should appear showing you the contents of your mobile phone – in the case of my Nokia N95, two Windows-like folders named “C:” and “E:”, which represent the phone’s internal memory and my 8GB SD card. You can browse them like any ordinary folders, including copying and pasting files. An icon for the phone will also appear on the desktop (I’m using a custom icon here).
Nautilus browsing the phone contents
  4. When you have finished dealing with the files on your phone, you need to cleanly disconnect the phone and end the Bluetooth session. You can do this in one of two ways: either click on the “Eject” triangle icon next to your phone’s name in the Places list of the Nautilus window, or in the Bluetooth Devices window, do a right-mouse click and choose “Disconnect Device” from the menu.
Disconnecting from the Bluetooth phone
  5. That’s it! Happy file transfers! Smilie: :)

HowTo: Flash your BIOS without a boot floppy disk using Ubuntu

All current “IBM-compatible” PCs use a Basic Input/Output System, also known as a BIOS. It’s a program that tells the PC how to start up when you switch it on, raises any critical faults with the system and then passes control to an operating system on a boot medium.

As time goes on, like any program, bugs are found, improvements are made, and the manufacturer of your PC’s motherboard will provide updates to the BIOS, usually supplied as a small downloadable file. Traditionally you would reboot your PC from a DOS-compatible boot floppy disk and run the BIOS update program to install the new firmware. These days the process has been simplified somewhat, with Windows users generally able to flash from within Windows itself and, more recently, from the BIOS itself or by booting the system from a FAT16-formatted USB stick.

This is all well and good, but what if you have an older system that cannot be flashed from Windows? What if you don’t even have Windows? What about a system that still relies on booting from a floppy disk to flash the BIOS? I don’t know about you, but I highly doubt any of the remaining floppy disks in my garage still work, and besides that, there’s a good chance the floppy drive itself in an older PC doesn’t work anymore either.

So what can you do?

Well, we can utilise a floppy disk image that ultimately boots from your hard-drive, but acts and operates exactly like a DOS floppy disk would.


  • A boot floppy disk image. You can grab one from the FreeDOS project. FreeDOS is a compatible, open-source re-implementation of Microsoft/IBM DOS. For our needs, we will use the 1.44MB OEM floppy image, which has just enough on it to boot and nothing more. The file is called FDOEM.144.gz.
  • Some free space under /boot. This won’t be a concern for most users, but some people, including myself, choose to partition off space for /boot rather than include it as part of the root filesystem partition. You will need about 2MB of space.
  • Some floppy disk image manipulation tools. We will be using MTools for the task, available in the Ubuntu repositories.
  • The new BIOS file for your motherboard.
  • The DOS-based BIOS flashing program executable.
  • OPTIONAL: Wine may be required if the BIOS file is provided as a self-extracting Windows executable. In most cases, the flashing program is usually included in the same archive.

These instructions were written with Ubuntu Jaunty in mind but should work on any version of Ubuntu.


  1. First up, download the FDOEM.144.gz file from the FreeDOS website.
  2. Extract the image file from the archive either using Ubuntu’s archive manager, or at a terminal use the command:

    $ zcat FDOEM.144.gz >dosfloppy.img
  3. Now we need to install some tools so we can manipulate the image (note that you may already have these tools installed):

    $ sudo apt-get install syslinux mtools
  4. Extract your BIOS file from the archive you downloaded from your motherboard’s manufacturer. If the file was called “”, unzip it with the following command:

    $ unzip

    NOTE: If your BIOS file is a self-extracting executable (eg: “bios123.exe”), then install WINE with:

    $ sudo apt-get install wine

    …and then execute the Windows binary via Wine with:

    $ wine bios123.exe

    …then let the self-extractor extract the files. Retrieve the BIOS file (and if available, the BIOS flashing program executable) from what was extracted. 
  5. Let’s copy the BIOS file and the flashing program onto the boot floppy image. In this example, the BIOS file is called “bios123.bin” and the flashing program is called “flash.exe”:

    $ mcopy -i dosfloppy.img bios123.bin flash.exe ::
  6. Now let’s list the contents of the floppy image to confirm that the files were copied:

    $ mdir -i dosfloppy.img ::
    Volume in drive : is FREEDOS
         Volume Serial Number is 188F-6C25
    Directory for ::/
    COMMAND  COM     66090 2003-12-10   7:49
    sys      com      9221 2005-07-18  19:58
    AUTOEXEC BAT        67 2004-02-22  10:16
    CONFIG   SYS        52 2004-02-22  10:17
    README            1486 2004-02-22  12:50
    BIOS123  BIN   1048576 2009-08-11  22:34
    FLASH    EXE     26351 2009-08-11  22:34
            7 files           1 151 843 bytes
                                258 048 bytes free

  7. The floppy disk is ready! Now to set it up so we can boot it.

    $ sudo mkdir /boot/biosflash
    $ sudo cp dosfloppy.img /usr/lib/syslinux/memdisk /boot/biosflash/

  8. Now we need to make an entry in the GRUB boot menu for it so we can choose it as a boot option when we start the PC. First open the GRUB menu.lst file in your favourite editor:

    $ sudo gedit /boot/grub/menu.lst
  9. Scroll right down to the very bottom of the file and add the following lines:

    title Boot floppy for BIOS flashing
    kernel /boot/biosflash/memdisk
    initrd /boot/biosflash/dosfloppy.img
    boot

    NOTE: If your /boot directory is on its own partition (like how I have it on my own system), you need to omit the “/boot” bit from all lines above, thus:

    title Boot floppy for BIOS flashing
    kernel /biosflash/memdisk
    initrd /biosflash/dosfloppy.img
    boot

  10. Save your changes and quit the editor.
  11. You are now ready to boot! Shutdown and restart your system. When your GRUB menu appears, you will see an entry called “Boot floppy for BIOS flashing” at the bottom of the menu. Select it and you should very quickly be presented with the familiar A:\> prompt. You can now launch your BIOS flashing program and flash your BIOS!
  12. When you are done with the floppy environment, just press CTRL + ALT + DEL to reset your PC (or after a BIOS flash you should ideally physically switch off and then back on instead).

You’re done! Smilie: :)
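One extra sanity check worth doing before copying the image into /boot: memdisk expects a geometrically valid floppy image, and a 1.44MB image must be exactly 1,474,560 bytes (80 cylinders × 2 heads × 18 sectors × 512 bytes). A quick sketch – it fabricates a correctly sized image so it is self-contained; run the stat line against your real dosfloppy.img:

```shell
# Fabricate a correctly sized blank 1.44MB image for illustration.
head -c 1474560 /dev/zero > dosfloppy.img

# A valid 1.44MB floppy image is exactly 1474560 bytes.
stat -c %s dosfloppy.img
# → 1474560
```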

HowTo: Make use of Ubuntu PPA repositories

What is a PPA repository?

A PPA is a Personal Package Archive, hosted on Launchpad’s servers, that contains binaries and/or source packages related to a project. The project can be anything from a new application to a backport of an existing one. A good example is the easy availability of 3.0.1 to Intrepid users before Jaunty came out, rather than having to deal with the mess of packages from Sun’s own website.

PPAs can be wholly personal to you or open to the public. They are particularly useful for providing Ubuntu-packaged versions of a given application instead of dealing with tarballs or converting RPM packages.

So how does one make use of a pre-defined PPA and are there any things to be wary about? Read on.

PPAs are great for getting the latest version of a given piece of software instead of waiting for the official Ubuntu repository versions to be updated, though you will of course expose yourself to any potential bugs in that software. Classic examples include getting Pidgin’s 2.5.8 update with its new authentication method for Yahoo Messenger, or getting the latest version of the Deluge BitTorrent client/daemon.

In this HowTo, we are going to grab the latest version of Deluge to install on an Ubuntu Jaunty 9.04 system using the unofficial Ubuntu PPA, but this guide should apply to just about any version of Ubuntu or other Debian-based distribution.

NOTE: Ubuntu has now further simplified the PPA process by introducing a new way of adding PPA repositories and the GPG key in one hit from Ubuntu Karmic 9.10 onwards. While you can still use the process outlined below, please see the note at the end of this article for the simplified way.

  1. First up, we need to create a sources.list file for the PPA repository we want to add to our system. In the case of Deluge, the PPA is at so go there first.
  2. Under the “Install packages” section is a box with two lines in it:

    deb jaunty main
    deb-src jaunty main

  3. Highlight and copy these two lines to your clipboard.
  4. Now open a terminal and create a new file in your favourite text editor (in this case, GEdit) by typing in:

    $ sudo gedit /etc/apt/sources.list.d/deluge.list
  5. This creates a new file under /etc/apt/sources.list.d called deluge.list. You are presented with a blank page. Paste the content of the clipboard down so you have the two lines you copied earlier.
  6. Save and close the file.
  7. Now, in the terminal, type in:

    $ sudo apt-get update
  8. When the update completes, you should see a warning error at the end similar to the following:

    W: GPG error: jaunty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY C5E6A5ED249AD24C
    W: You may want to run apt-get update to correct these problems

  9. What this means is that apt-get has processed all your package lists and found that, for the newly added Deluge source list, it has no GPG key with which to authenticate files from that repository. This doesn’t stop you from installing Deluge or anything else from it, but it does prevent Ubuntu from verifying that those files have not been tampered with, so it will pester you with warnings until you provide that GPG public key. Providing it is simple. Make a note of the hexadecimal value after NO_PUBKEY and then type in the following:

    $ sudo apt-key adv --keyserver --recv-keys C5E6A5ED249AD24C
    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --keyserver --recv-keys C5E6A5ED249AD24C
    gpg: requesting key 249AD24C from hkp server
    gpg: key 249AD24C: public key "Launchpad PPA for Deluge Team" imported
    gpg: Total number processed: 1
    gpg:               imported: 1  (RSA: 1)

  10. This fetches the GPG public key from the Keyserver at and adds it to your GPG keyring. Now if you run:

    $ sudo apt-get update

    …again, you will see no errors output this time, which means you can safely install applications from it now without Apt warning you about being unable to authenticate them.
  11. To prove this, let’s try and install Deluge now:

    $ sudo apt-get install deluge

    …and it should install like any ordinary Ubuntu application with no fuss, no worries and no error messages.
  12. Pat yourself on the back.
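Steps 1–6 above can also be done non-interactively with a heredoc instead of gedit. A sketch, shown against a scratch file – on a real system write to /etc/apt/sources.list.d/deluge.list via sudo tee, and substitute the two deb lines you actually copied from the PPA page (the URLs below assume Launchpad’s standard PPA layout):

```shell
# Write the two repository lines in one step (scratch file for illustration;
# the deb URLs assume Launchpad's usual ppa.launchpad.net layout).
cat > deluge.list <<'EOF'
deb http://ppa.launchpad.net/deluge-team/ppa/ubuntu jaunty main
deb-src http://ppa.launchpad.net/deluge-team/ppa/ubuntu jaunty main
EOF

cat deluge.list
```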

I get an “HTTP fetch” error when I try to import a GPG public key!

If at step 9, you get the following error:

$ sudo apt-key adv --keyserver --recv-keys C5E6A5ED249AD24C
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --keyserver --recv-keys C5E6A5ED249AD24C
gpg: requesting key 249AD24C from hkp server
gpgkeys: HTTP fetch error 7: couldn't connect to host
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0

This is because your firewall is blocking access to the keyserver. Keyservers use port 11371 to communicate, not port 80 (the normal HTTP port), so open outbound port 11371 on your firewall and re-run the command; it should then work fine.

The simplified way of adding PPAs using Ubuntu Karmic 9.10 or later

Ubuntu 9.10 introduced a new, simpler way to add PPAs to your system. Using the above Deluge example, you now only have to type in:

$ sudo add-apt-repository ppa:deluge-team/ppa

…and that’s it. This will do the whole sources.list creation and GPG key for you in one hit. That now simply leaves you to update your package lists with:

$ sudo apt-get update

…and then you can install Deluge with:

$ sudo apt-get install deluge

As you can see, it’s a far simpler method. You can, of course, still use the original method if you prefer.

Hey these PPA things are cool – can I create one of my own?

You certainly can. Refer to the Personal Package Archives for Ubuntu Help Page for everything you need to know. However, please do not use a PPA as your own personal off-site backup for personal data. It is intended to help individuals and small groups who develop new software and lack the resources to host it for easy distribution, by providing a place where the masses can gain access to their project. To help curb the potential for abuse, PPAs are limited to 1GB of storage and you are bound by the Ubuntu Community Code of Conduct.