Monday, 18 May 2009

HOWTO: Using apt-cacher-ng to cache packages

If you have a lot of machines to manage, you don't want them all reaching out to the internet to retrieve updates and chewing up your precious bandwidth. What you want is a local machine that keeps copies of all the files your machines download from the repositories, so that the next machine to ask for the same file can simply be given the copy that has already been downloaded. You could of course use squid, but squid is a less-than-perfect solution here as it is not designed to ensure files are kept on hand until they are no longer needed.

apt-cacher-ng is a purpose-built application that keeps track of all the packages that have been downloaded. If a client requests some-package.v123.deb, it checks to see if it is already cached locally and, if so, provides it to the client.

If the package has not previously been requested, it then downloads the package, provides it to the client, adds it to the cache and, here is the important part, deletes all older, outdated versions of that file!

It will keep cached files indefinitely, unlike squid, which is designed to flush files if they remain unrequested for a predefined period.
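
Once it is installed and taking requests (see below), you can see exactly what it is holding on to. By default (on Debian/Ubuntu packaging, at least) apt-cacher-ng keeps its cache on disk under /var/cache/apt-cacher-ng, with roughly one directory per upstream repository;

ls /var/cache/apt-cacher-ng
du -sh /var/cache/apt-cacher-ng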

So, to use apt-cacher-ng, your /etc/apt/sources.list file doesn't need to be modified or configured in any special way. Just leave it configured as you would normally (in my case these are the official Australian (au) repos with the source repos removed);


deb http://au.archive.ubuntu.com/ubuntu/ lucid main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-updates main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-backports main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-backports universe multiverse
deb http://au.archive.ubuntu.com/ubuntu/ lucid universe multiverse
deb http://au.archive.ubuntu.com/ubuntu/ lucid-updates universe multiverse
deb http://security.ubuntu.com/ubuntu lucid-security main restricted
deb http://security.ubuntu.com/ubuntu lucid-security universe multiverse

To install apt-cacher-ng, simply do;

sudo apt-get install apt-cacher-ng

And that is it; there is no further configuration needed on the server!
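
If you want to convince yourself the daemon is alive, it listens on TCP port 3142 by default and serves a small status report page. A quick check on the server, assuming the default port;

wget -qO- http://localhost:3142/acng-report.html | head
sudo netstat -tlnp | grep 3142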

For each client, however, you do need to make a few small changes;

Create (or edit) the file /etc/apt/apt.conf and add the following line (substituting your server's IP address for the 192.168.0.10 shown here, of course!)

Acquire::http { Proxy "http://192.168.0.10:3142"; };
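
Alternatively, if you'd rather not touch apt.conf itself, apt also reads drop-in files from /etc/apt/apt.conf.d, so something like this should work just as well (again, 192.168.0.10 is a stand-in for your server's address);

echo 'Acquire::http { Proxy "http://192.168.0.10:3142"; };' | sudo tee /etc/apt/apt.conf.d/01proxy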

If you use Synaptic, you also need to modify another file;

sudo vi /root/.synaptic/synaptic.conf

Add the following lines to the end, but before the final brace (ie before the final "};"), again substituting your server's IP address;

useProxy "1";
httpProxy "";
httpProxyPort "3142";
ftpProxy "";
ftpProxyPort "3142";
noProxy "";

For the changes to be applied, you need to do an apt-get update;

sudo apt-get update

If this completes without error, you are done! It is that easy.
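
To confirm packages really are flowing through the cache, you can watch apt-cacher-ng's log on the server while a client updates (the path below is the Debian/Ubuntu default);

tail -f /var/log/apt-cacher-ng/apt-cacher.log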

Thursday, 14 May 2009

HOWTO: Deluge on a Headless Server

Install Deluge on the server;

sudo apt-get install deluged deluge-console deluge-web

Before we start the deluge daemon, we will tell it which user(s) can connect. To do this, we add a "user" to the authentication file;

mkdir -p ~/.config/deluge
echo "username:password" >> ~/.config/deluge/auth

The username and password can be anything you like; they do not have to correspond with a username and password combo from UNIX userland.
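
As an aside, more recent Deluge versions also accept an optional access level as a third field on the auth line (10 being full admin). If your version supports it, it looks like this;

echo "username:password:10" >> ~/.config/deluge/auth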

Now we can start the deluge daemon;

deluged

By default connections from remote hosts are disabled. We need to enable them using the deluge console.

Load the Console UI;

deluge-console

Enter the following command into the console;

config -s allow_remote True
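
You can verify the setting took effect before you leave the console;

config allow_remote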

Exit the Console UI with the 'exit' command;

exit

Open the Deluge client on your workstation and bring up the 'Preferences' dialog. Disable 'Classic mode' on the 'Interface' page.

Restart the client and you should now be able to use the GTK Deluge client to connect to your Deluge server using the 'Connection Manager' dialog.
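
When you add the host in the 'Connection Manager', the details are (assuming you haven't changed the daemon's defaults);

Hostname: your server's IP address
Port: 58846 (the deluged default)
Username/Password: the pair you added to ~/.config/deluge/auth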

To start the web process;

deluge-web &

This small script will start both processes and then confirm that they are running using 'ps';

deluged
deluge-web &
ps ax | grep deluge | grep -v grep

You can now browse to your server on port 8112 (substituting your server's address, of course);

http://192.168.0.10:8112

The default password is 'deluge'. You should change this of course.

You can do that using the WebUI in 'Preferences > Interface'.