Monday, 14 December 2009

Blank screen when logging in to VMware server

I use VMware Server a lot and unfortunately this happens way too frequently. I'm not sure whether the bug lies in Firefox or VMware, but there should be a username and password entry box in the middle of that screenshot.

So far I have not found a cure for it.

I do have a workaround, however, although it's not a pretty one.

Basically, what you have to do is hit ctrl+shift+r quickly and repeatedly until the dialog appears. Stop every half a dozen or so presses to let it catch up and hopefully the dialog will appear. If not, rinse, lather, repeat until it does.

Hardly an elegant solution I know, but at least you will be able to log into your console.

P.S. Performing the same procedure using F5 does not produce the same result (well, not for me anyway).

Sunday, 6 December 2009

Inheriting group ownership for shared files

Use the SGID attribute to allow users to create files that can be opened by other users in their group.

When the SGID (Set Group Identification) attribute is set on a directory, files created in that directory inherit its group ownership. If the SGID is not set the file's group ownership will be set to the user's default group.

To set the SGID on a directory or to remove it, use the following commands:
chmod g+s directory
chmod g-s directory

When set, the SGID attribute is represented by the letter "s" which replaces the "x" in the group permissions:
ls -l public
drwxrwsr-x 10 brett users 4096 2009-12-10 17:40 public
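To see the bit toggle end-to-end, here is a small sketch you can run anywhere; it only touches a scratch directory created by mktemp:

```shell
#!/bin/sh
# Demonstrate setting and clearing the SGID bit on a scratch directory.
set -eu
d=$(mktemp -d)

chmod 775 "$d"
chmod g+s "$d"
ls -ld "$d"      # group execute slot now shows "s": drwxrwsr-x

chmod g-s "$d"
ls -ld "$d"      # back to a plain "x": drwxrwxr-x
```

While the bit is set, files created under the directory inherit its group, which is the behaviour described above.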

Wednesday, 11 November 2009

Sort du output in readable format

UPDATE: Download script from here

I got sick of typing "du -h --max-depth=1" all the time, and even then the output is not sorted. Unfortunately you can't use the sort util with the human-readable output produced by du.

I found the awk part of this shell script in a forum somewhere and it works a treat.

sudo du -k --max-depth=1 $1 | sort -nr | awk '
     BEGIN {
        split("KB,MB,GB,TB", Units, ",");
     }
     {
        u = 1;
        while ($1 >= 1024) {
           $1 = $1 / 1024;
           u += 1;
        }
        $1 = sprintf("%.1f %s", $1, Units[u]);
        print $0;
     }'

This produces output like this;

884.0 MB .
385.5 MB ./Downloads
528.0 KB ./eBooks
48.0 KB ./My Shapes
32.0 KB ./1984_files
8.0 KB ./Xerox
8.0 KB ./My Pictures

Also, to avoid having to enter your password (for the sudo command) add this line to your sudoers file;

brettg ALL=NOPASSWD: /usr/bin/du

Tuesday, 27 October 2009

Search files for text and delete

I had to do this when a mail queue ended up with hundreds of bounce messages. I needed to delete these messages and keep the rest.

rm `fgrep -lir searchstring *`
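One caveat: the backtick approach breaks on filenames containing spaces. A NUL-delimited variant (a sketch; GNU grep and xargs assumed) is safer. The demo below runs against a throwaway directory so nothing real gets deleted:

```shell
#!/bin/sh
# Delete every file whose contents match a pattern, NUL-delimited so
# filenames with spaces survive intact.
set -eu
tmp=$(mktemp -d)
printf 'a bounce message\n' > "$tmp/bounce 1.eml"
printf 'a real message\n'   > "$tmp/keep.eml"

grep -lrZ 'bounce' "$tmp" | xargs -0 rm --

ls "$tmp"    # only keep.eml remains
```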

Saturday, 24 October 2009

Preparing for Karmic

With less than a week to go until Karmic goes live you might want to avoid the rush and pre-download the packages you will need for when you decide to upgrade.

Do this by using the "download only" flag in apt-get

First, change your sources.list file to point to the Karmic repositories.

Next, do a "download only" upgrade.
sudo apt-get -d upgrade

At the end of the process, instead of stepping to the "Installing packages" stage you will see a message "Download complete and in download only mode"


Finally, do the same for a distribution upgrade
sudo apt-get -d dist-upgrade

If you have more than one PC on your network to be upgraded, you probably want to use apt-cacher so that you only need to download all those packages once.

Come the time to do the upgrade proper, repeat these steps (simply to update any packages that have changed since you downloaded them into your cache).

Then, when you do the upgrade for real, it will skip past the normally lengthy download stage and straight to the "installing packages" phase while all the other plebs are fighting over bandwidth trying to download packages on the release day!

Wednesday, 21 October 2009

Copy files to /dev/null

I have a suspected hard disk problem. What happens is when I do a large amount of disk reading (as in performing a backup) somewhere along the line the system freezes and needs to be reset. System logs show nothing useful.

So, what I want to do is basically read a whole bunch of files and directories and see if the failure occurs on a specific file (or files)

This is the first command I used;
find . -type f -print -exec sh -c 'cat "$1" >/dev/null' {} {} \; > readtest&

Basically it does a find for all files and then does a 'cat' of each file to /dev/null and finally appends the console output for that action to a file called "readtest".

The idea was that if it fails on a file I should be able to consult the "readtest" file which will tell me the last file which successfully copied.

You can watch the console output 'live' by using the tail command
tail -f readtest

The problem was that the 'find' command doesn't find files in alphabetical order, so it is difficult to identify which file it would copy next. So I modified my procedure.

First I created a small shell script called 'testfile'
#!/bin/sh
echo "Testing $1"
cat "$1" >/dev/null

Made it executable with
chmod +x testfile

then reran a slightly modified version of the above command;
find . -type f -print -exec sh -c './testfile "$1"' {} {} \; > readtest&

This will print out the file it is about to test beforehand, so that if the system locks up during the reading of a file I can consult the readtest log to see which file it was reading.
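An alternative sketch that tackles the ordering problem head-on: sort the file list before reading it, so the log always shows the alphabetically next file (bash and GNU find/sort assumed; the demo uses a scratch directory, point "dir" at the real tree instead):

```shell
#!/bin/bash
# Read files in sorted order, logging each name before it is read.
set -eu
dir=$(mktemp -d)                 # stand-in for the real directory tree
log=$(mktemp)                    # the "readtest" log
echo data > "$dir/zebra.txt"
echo data > "$dir/apple.txt"

find "$dir" -type f -print0 | sort -z |
while IFS= read -r -d '' f; do
    printf 'Testing %s\n' "$f"
    cat -- "$f" > /dev/null
done > "$log"

cat "$log"                       # apple.txt logged before zebra.txt
```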

Tuesday, 20 October 2009

Joining Video Files

Sometimes I want to join two video files together.

Firstly, we need to have mencoder installed.
sudo apt-get install mencoder mplayer

Now, let's assume that we have two avi files called f1.avi and f2.avi.

The first step is to join the files together;
cat f1.avi f2.avi > f1f2.avi 

If you have more than two files simply include them all (in the correct order of course)
cat f1.avi f2.avi f3.avi > f1f2f3.avi

Next, we need to ensure that the audio syncing has not been messed up;
mencoder -forceidx -oac copy -ovc copy f1f2.avi -o final.avi

Wednesday, 7 October 2009

Run a script at user login

When a user opens a bash shell, there is a script that runs (~/.bashrc) which configures the shell with the user's preferences.

Sometimes you want to run a similar sort of script when a user logs in to the gnome desktop as well.

To do this you need to create a .desktop file and place it in ~/.config/autostart. Here is an example;

[Desktop Entry]
Comment=Testing autostart
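A Comment key on its own won't launch anything; a fuller sketch of the entry (the Name and the Exec path here are hypothetical examples, substitute your own script):

```
[Desktop Entry]
Type=Application
Name=Login script
Comment=Testing autostart
# the Exec path is an example; point it at your own script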

You also need to place an executable script there (in this case under /home/brettg/) that will hold the commands you want to execute.

More details here and here

Monday, 5 October 2009

DHCP and DNS using DNSMasq

If you are not configuring an internet facing nameserver to resolve your own FQDN then you really don't want to use ISC BIND.

In cases where you want a simple domain in a home or office with caching and DHCP, then by far the simplest tool to set up is dnsmasq.

sudo apt-get install dnsmasq

Once installed you can configure it by editing dnsmasq.conf

sudo vi /etc/dnsmasq.conf

The config file is very well documented and should be self explanatory. For a simple DNS setup you probably want to modify the following two lines;


For server use your upstream (usually your ISP) dns server.

For local use the domain you want to use on your LAN.

To use DHCP modify these lines;
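For reference, a minimal sketch of the options involved (every address and the domain name here are placeholder examples):

```
# /etc/dnsmasq.conf - all values are examples

# upstream (ISP) DNS server

# never forward lookups for the LAN-only domain upstream

# DHCP: hand out leases in this range,,12h
```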

You can do some other neat stuff too, like assign static addresses by hostname or MAC address, specify particular servers for particular domains and other geeky fun things. Most of it is documented in the config file; I encourage you to read through it.

If you put IP and name entries into /etc/hosts then dnsmasq will use that to resolve names and pass them on to clients, easy peazy!

Tuesday, 29 September 2009

Mounting Samba shares

Install smbfs;

sudo apt-get install smbfs

Put something like this in your fstab;

//ntserver/share /mnt/samba smbfs username=myusername,password=mypassword 0 0

Mount the drive;

sudo mount /mnt/samba

Your drive should now be accessible at /mnt/samba
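One caveat: fstab is world-readable, so the password above is exposed to every local user. A common alternative (a sketch; the credentials file path is up to you) is a root-only credentials file:

```
# /etc/samba/cred  (chown root, chmod 600):
#   username=myusername
#   password=mypassword
# then in /etc/fstab:
//ntserver/share /mnt/samba smbfs credentials=/etc/samba/cred 0 0
```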

Friday, 25 September 2009

HOWTO: Setting up a vpn using ssh and pppd

For this we need two systems: one is designated as the server, the other the client. The server needs to have a static IP address or a dynamic DNS name from a free service. Also, ensure that all firewalls are turned off or port 22 forwarding is enabled for both hosts.

Configuring the SSH accounts;

On the "server" machine;

Firstly, if you have not already installed ssh server do so now.
sudo apt-get install openssh-server

I use port 443 for VPN connections because this is usually the easiest port to get through a firewall that you don't control.

Edit your ssh server config;

sudo vi /etc/ssh/sshd_config

Change the line;

Port 22

to;

Port 443

and restart your SSH server;

sudo /etc/init.d/ssh restart

Now, we create a user called "vpn";

sudo adduser --system --group vpn

The --system parameter sets vpn's shell to /bin/false but because the vpn user needs to log in via ssh, we must change this to /bin/bash in the /etc/passwd file.

sudo vi /etc/passwd

Here is an example;
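The vpn user's entry would look something like this (the UID/GID numbers are made-up examples and will differ on your system):

```
vpn:x:115:115::/home/vpn:/bin/bash
```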


The account password will only be used during this howto. You can choose a complex (secure) one now or a simpler temporary one and change it later.

Creating a password;

sudo passwd vpn

You should be able to login to the account from the client now;

ssh vpn@hostname

The next step is to create a ssh keypair for the root user on the client machine and place that public key in the vpn users authorized_keys file. Use this guide to configure passwordless ssh but remember to use the vpn user on the server instead of the root user as is shown in that guide.

Once you have passwordless SSH properly configured between root@client and vpn@server, you should change the password to a more secure (random) one if you haven't already done so, as it will no longer be needed.

Time to set up the actual VPN.

Configuring the VPN;

The pppd daemon we will use needs to run as root, but we don't want to give our vpn user complete access to the system. To do that we configure sudo to give minimal access rights.

On the Server, open the visudo editor;

sudo visudo
Add these three lines to the end of the file

vpn ALL=NOPASSWD: /usr/sbin/pppd
vpn ALL=NOPASSWD: /sbin/iptables
vpn ALL=NOPASSWD: /sbin/route

This allows our vpn user to execute the pppd command to start the vpn and use the "route" command to set the return routes (if required).
If you are setting up a router<->router connection you will need to set the appropriate return routes to the client on the server.

To do this, create a script in the vpn user directory on the server.

vi /home/vpn/

Place the appropriate route commands for the subnet(s) at the client's end. If you don't want return routes then just don't enter any route commands. Here is mine;

sudo route add -net gw
sudo route add -net gw

This script must be executable

chmod +x /home/vpn/

and owned by the vpn user

chown vpn:vpn /home/vpn/

We can also check that the pppd permissions are set up properly by logging in as the vpn user and issuing this command;

sudo /usr/sbin/pppd noauth

You should see a bunch of hieroglyphics such as this.

~�}#�!}!}!} }4}"}&} } } } }%}&����}'}"}(}"��~

You can kill the process from another terminal or just wait 30 secs or so for it to finish on its own.

Now we can configure the client (logged in as root)
Firstly, we need to use a script to connect to the server. You can locate the script anywhere you like, I put it in /usr/local/bin

You can download a copy of the connect script or simply copy and paste this text into a file.

#!/bin/sh
# SCRIPT: vpn-connect version 2.2
# LOCATION: /usr/local/bin/vpn-client
# DESCRIPTION: This script initiates a ppp-ssh vpn connection.
# see the VPN PPP-SSH HOWTO on
# for more information.
# NOTES: This script uses port 443 so your VPN server should be
# configured to listen for ssh on Port 443
# revision history:
# 1.6 11-Nov-1996
# 1.7 20-Dec-1999
# 2.0 16-May-2001
# 2.2 27-Sep-2009

# You will need to change these variables...

# The host name or IP address of the SSH server that we are
# sending the connection request to:
SERVER_HOSTNAME=vpn.example.com     # example value - change this

# The username on the VPN server that will run the tunnel.
# For security reasons, this should NOT be root. (Any user
# that can use PPP can initiate the connection on the client)

# The VPN network interface on the server should use this address:
SERVER_IFIPADDR=      # example - change to suit your network

# ...and on the client, this address:
CLIENT_IFIPADDR=      # example - change to suit your network

# This tells ssh to use unprivileged high ports, even though it's
# running as root. This way, you don't have to punch custom holes
# through your firewall.
LOCAL_SSH_OPTS="-P"                 # older OpenSSH; leave empty if your ssh rejects -P

## required commands...

if ! test -f $PPPD ; then echo "can't find $PPPD"; exit 3; fi
if ! test -f $SSH ; then echo "can't find $SSH"; exit 4; fi

case "$1" in
start)
    # echo -n "Starting vpn to $SERVER_HOSTNAME: "
    ${PPPD} updetach noauth passive pty "${SSH} -p 443 ${LOCAL_SSH_OPTS} ${SERVER_HOSTNAME} -l${SERVER_USERNAME} -o Batchmode=yes sudo ${PPPD} nodetach notty noauth" ipparam vpn ${CLIENT_IFIPADDR}:${SERVER_IFIPADDR}
    route add -net netmask gw $SERVER_IFIPADDR
    # route add -net netmask gw $SERVER_IFIPADDR
    # route add -net gw $SERVER_IFIPADDR
    ;;
stop)
    # echo -n "Stopping vpn to $SERVER_HOSTNAME: "
    PID=`ps ax | grep "${SSH} -p 443 ${LOCAL_SSH_OPTS} ${SERVER_HOSTNAME} -l${SERVER_USERNAME} -o" | grep -v ' passive ' | grep -v 'grep ' | awk '{print $1}'`
    if [ "${PID}" != "" ]; then
        kill $PID
        echo "disconnected."
    else
        echo "Failed to find PID for the connection"
    fi
    ;;
*)
    echo "Usage: vpn {start|stop|config}"
    exit 1
    ;;
esac
exit 0

You need to change the SERVER_HOSTNAME variable in the above script. You may also need to change SERVER_IFIPADDR and CLIENT_IFIPADDR depending on your existing network landscape.

Now we need to make the script executable

chmod +x /usr/local/bin/vpn-client

To start the vpn, at the client type

/usr/local/bin/vpn-client start

You can check if it is up using the "ifconfig" command

ifconfig ppp0

Note: if you already have a ppp connection, such as to your ISP, then you may need to do "ifconfig ppp1". To see all your current ppp connections enter

ifconfig | grep ppp

If you want the vpn connection to be permanently up you can create a script to check the status and restart it if required.

vi /usr/local/sbin/vpn-check



#!/bin/sh
# restart the vpn client if pppd is no longer running
DAEMON=pppd

if [ "$(/bin/pidof $DAEMON)" = "" ]; then
    /usr/local/bin/vpn-client start
    if ! [ "$(/bin/pidof $DAEMON)" = "" ]; then
        echo "VPN restarted $(date +%m-%d-%Y)"
    fi
fi

Now, add an entry to the system crontab to run the script every minute

vi /etc/crontab

Add this line

* * * * * root /usr/local/sbin/vpn-check

Cron picks up changes to /etc/crontab automatically, so we don't need to restart it.

Now, assuming all has gone well if you issue the command

/usr/local/bin/vpn-client stop

and wait for about a minute the vpn client should automatically reconnect!


Mixed Systems

I have a system that requires the 2.6.3x kernel because it is the only one that supports the motherboard's onboard NIC.

I tried it with karmic alpha but unsurprisingly I ran into stability problems so I am going back to Jaunty.

However, I still need to run the newer kernel. I could of course compile it myself, but I really prefer not to. I could manually download the deb and then install it and all its dependencies myself, but that would probably cause as many instability problems as just nursing Karmic along.

So, a mixed system it is.

On Debian systems, this is simply done by adding APT::Default-Release "version"; to your apt.conf file and sticking "stable", "testing" or whatever in as the "version".

We can't do this on Ubuntu because they use a different naming system to Debian. If we were to set "version" to "jaunty" then we would no longer receive security updates because in Ubuntu Land these come from ubuntu-security instead.

Fortunately, there is a way around this and here it is.

Firstly, we copy our existing "jaunty" sources.list to /etc/apt/sources.list.d
sudo cp /etc/apt/sources.list /etc/apt/sources.list.d

Next, we change it to point to the "karmic" repositories.
sudo sed -i 's/jaunty/karmic/g' /etc/apt/sources.list.d/sources.list

Create an Ubuntu style preferences file
sudo vi /etc/apt/preferences

Here is the contents of my preferences file, it should be fairly self explanatory;
Package: *
Pin: release a=jaunty
Pin-Priority: 900

Package: *
Pin: release a=karmic
Pin-Priority: 500

Package: *
Pin: release a=jaunty-updates
Pin-Priority: 900

Package: *
Pin: release a=karmic-updates
Pin-Priority: 500

Package: *
Pin: release a=jaunty-backports
Pin-Priority: 900

Package: *
Pin: release a=karmic-backports
Pin-Priority: 500

Package: *
Pin: release a=jaunty-security
Pin-Priority: 900

Package: *
Pin: release a=karmic-security
Pin-Priority: 500

Package: *
Pin: release a=jaunty-proposed
Pin-Priority: 900

Package: *
Pin: release a=karmic-proposed
Pin-Priority: 500

Finally, we update aptitude
sudo apt-get update

To install the kernel package from the karmic repository we first need to know what version we want
apt-cache search linux-image

The one I'm interested in is "linux-image-2.6.31-10-386"
I can just install this kernel using its full name
sudo apt-get install linux-image-2.6.31-10-386

To install packages from the Karmic repository I can use the -t distribution parameter
sudo apt-get -t karmic install packagename

Or I can specify a package version
apt-get install nautilus=2.2.4-1

Wednesday, 23 September 2009

Graphing router traffic with MRTG

Systems tested; Ubuntu Hardy

NET-SNMP configured on each target (Ubuntu,FreeBSD)

Install required packages
sudo apt-get install apache2 gcc make g++

Install MRTG
sudo apt-get install mrtg

Edit the mrtg configuration file
sudo vi  /etc/mrtg.cfg

# Global Settings

RunAsDaemon: yes
EnableIPv6: no
WorkDir: /var/www/mrtg
Options[_]: bits,growright
WriteExpires: Yes

Title[^]: Traffic Analysis for

Now we use cfgmaker to add a profile for each snmp target to the config
sudo cfgmaker public@targetname >> /etc/mrtg.cfg

Finally, we create an index file for the webserver using indexmaker
sudo indexmaker /etc/mrtg.cfg > /var/www/mrtg/index.html

You can take a look at the graphs by pointing your browser to http://yourservername/mrtg/


Setting up SNMP

Systems tested; Ubuntu Hardy

install packages
apt-get install snmp snmpd

I'm not sure why there are two config files but there are and we have to edit both. The first file configures the daemon itself
vi /etc/default/snmpd

Ensure these two lines exist
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/'

Change the localhost address on the 2nd line to the address of the interface you will listen on (remove it completely to listen on all interfaces)
The second file contains the snmp details such as access rights & community names
vi /etc/snmp/snmpd.conf

Find this section;
#  source          community
com2sec paranoid default public
#com2sec readonly default public
#com2sec readwrite default private

and change it to;
#  source          community
#com2sec paranoid default public
com2sec readonly default public
#com2sec readwrite default private

NOTE: Using the default community "public" is not recommended for security reasons. You should change it to a custom community name; it is left as the default here for simplicity's sake. To change it, just comment out all the lines and add a new one. For example;
#  source          community
#com2sec paranoid default public
#com2sec readonly default public
#com2sec readwrite default private
com2sec readonly default MyCommunity

Checking your configuration from the local host
snmpwalk -Os -c public -v 1 localhost system

This should return a bunch of lines relating to various parts of your system. You can execute the same command from another host (snmp package is required), changing "localhost" to the name of the system.


Friday, 11 September 2009

Installing Request Tracker in Hardy LTS

Tested in Ubuntu Server 8.04 LTS

Your system should be able to send and receive email
You should have the universe repo enabled in /etc/apt/sources.list

Installing the packages;
sudo apt-get install request-tracker3.6 rt3.6-apache2 \
rt3.6-clients mysql-server apache2

The mysql-server installer will ask you to create a root password during the package configuration process. Don't forget to jot it down!

Configuring Request Tracker;

The RT configuration file is located at /etc/request-tracker3.6/
vi /etc/request-tracker3.6/

Make the appropriate changes to the config file for email address, domain, database and timezone for your particular network.

Add the following two lines to the database section.
Set($DatabaseHost , 'localhost');
Set($DatabaseRTHost , 'localhost');

Here is a sample config;
Set($rtname, '');
Set($Organization, 'example');

Set($CorrespondAddress , '');
Set($CommentAddress , '');

Set($Timezone , 'Australia/Melbourne'); # obviously choose what suits you


Set($DatabaseType, 'mysql'); # e.g. Pg or mysql

# These are the settings we used above when creating the RT database,
# you MUST set these to what you chose in the section above.

Set($DatabaseUser , 'rtadmin');
Set($DatabasePassword , 'password');
Set($DatabaseName , 'rtdb');
Set($DatabaseHost , 'localhost');
Set($DatabaseRTHost , 'localhost');


Set($WebPath , "/rt");
Set($WebBaseURL , "http://support");


Configure the database

Log in to the db (you did write down the root password, right?)
mysql -u root -p

Create a database user
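The statements look something like this, using the rtadmin/rtdb/password values from the sample config above (a sketch; substitute your own choices):

```
GRANT ALL PRIVILEGES ON rtdb.* TO 'rtadmin'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```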

Exit mysql

Configure the database;
sudo /usr/sbin/rt-setup-database-3.6 --action init --dba rtadmin --prompt-for-dba-password

This is what you should see;
Now creating a database for RT.
Creating mysql database rtdb.
Now populating database schema.
Creating database schema.
Done setting up database schema.
Now inserting database ACLs
Done setting up database ACLs.
Now inserting RT core system objects
Checking for existing system user...not found. This appears to be a new installation.
Creating system user...done.
Now inserting RT data
Creating Superuser ACL...done.
Creating groups...
Creating users...10.12.done.
Creating queues...1.2.done.
Creating ACL...2.3.done.
Creating ScripActions...
Creating ScripConditions...
Creating templates...
Creating scrips...
Creating predefined searches...1.2.3.done.
Done setting up database content.

Enable the apache2 RewriteEngine
sudo ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/

Enable the apache perl module
sudo a2enmod perl

Reload apache
sudo /etc/init.d/apache2 force-reload

The Request Tracker login screen can be found at the WebBaseURL plus WebPath configured above, i.e. http://support/rt

Thursday, 10 September 2009

Making apps start at boot

Sometimes you install an application which doesn't configure itself to automatically start and stop with the OS.

To make it start you need to do two things.

Firstly, we need to make a symlink to the app in /etc/init.d
ln -s /usr/local/myapp/bin/myapp /etc/init.d/myapp
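This assumes the program itself understands start and stop arguments. If it doesn't, symlink to a small wrapper script instead; a sketch of the wrapper logic (all myapp paths are hypothetical):

```shell
#!/bin/sh
# Sketch of wrapper logic for /etc/init.d/myapp; in the real init
# script you would call: myapp_ctl "$1"
myapp_ctl() {
    case "${1:-}" in
        start) /usr/local/myapp/bin/myapp & ;;
        stop)  pkill -f /usr/local/myapp/bin/myapp ;;
        *)     echo "Usage: myapp {start|stop}"; return 1 ;;
    esac
}
```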

Secondly, we need to update the system startup and shutdown scripts using update-rc.d
update-rc.d myapp start 20 2 3 4 5 . stop 20 0 1 6 . 

You should see the following output.
Adding system startup for /etc/init.d/myapp ...
/etc/rc0.d/K20myapp -> ../init.d/myapp
/etc/rc1.d/K20myapp -> ../init.d/myapp
/etc/rc6.d/K20myapp -> ../init.d/myapp
/etc/rc2.d/S20myapp -> ../init.d/myapp
/etc/rc3.d/S20myapp -> ../init.d/myapp
/etc/rc4.d/S20myapp -> ../init.d/myapp
/etc/rc5.d/S20myapp -> ../init.d/myapp

The numbers in the update-rc.d command correspond to the start/stop position of the app during the boot process and the run levels to apply the command to.

To remove them again type
update-rc.d -f myapp remove

Wednesday, 2 September 2009

Background and Foreground Processes

I work a lot of the time in a console. Sometimes I will start a long running process and want it to run as a background process. There are a coupla ways to do this.

1) Add an ampersand ("&") to the end of the command line.
$ bgcommand &

2) Use nohup
$ nohup bgcommand

However, if you are anything like me, you will constantly forget to do this so in such cases you will want to move your process to a background process.

We can do this using CTRL+z to suspend the command and then "bg" to move it to the background.
$ bg

Now your command continues executing but you are able to enter new commands on the console.

However, if you close or lose that console for whatever reason then the background process will be killed. To avoid this you need to detach the process using the disown command.
$ disown -a


Thursday, 27 August 2009

Find and replace with sed

I use this when switching distributions, and I need to add the new distro name to sources.list
sudo sed -i 's/jaunty/karmic/g' /etc/apt/sources.list
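The -i flag can also take a suffix, which keeps a backup of the original; a quick self-contained demonstration against a throwaway file:

```shell
#!/bin/sh
# sed -i.bak edits in place but saves the original alongside (GNU sed).
set -eu
f=$(mktemp)
echo 'deb jaunty main restricted' > "$f"

sed -i.bak 's/jaunty/karmic/g' "$f"

cat "$f"        # deb karmic main restricted
cat "$f.bak"    # deb jaunty main restricted
```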

Wednesday, 19 August 2009

DNS Hijacking, filtering and OpenDNS

With witless clowns like Senator Stephen Conroy pushing for draconian mandatory net filtering these days smart people should consider using a service such as OpenDNS rather than the DNS service provided by their ISP.

Simply put the two OpenDNS nameserver addresses in your /etc/resolv.conf file and you are good to go.

However, you should note that OpenDNS uses "services" such as redirecting "domain not found" errors to a search page to fund their operations, rather than letting your browser display the appropriate error as it should. This also affects things such as ping. If I ping a domain name that does not exist I should get an "unknown host" response, whereas with OpenDNS it will resolve to the OpenDNS page and the ping will receive a reply as if the nonexistent domain actually exists.

Even if you don't use OpenDNS, more and more ISPs these days have also taken to hijacking invalid domain requests and sending the standard "domain not found" error to their own (partner) advertisement pages.

There are a few ways to mitigate this behaviour. The easiest is to put the following line in your /etc/hosts file

This will cause the redirection to go to your localhost adaptor. If you are running a service (i.e. a web server) on port 80 then it will resolve to its default page, and it won't solve the successful-ping-to-bogus-domain problem described above. This is a less than perfect solution.

The best solution is to use dnsmasq on your gateway. Dnsmasq is a combined DHCP and DNS server and is easy to set up.

Once you have it set up, simply put the IP address that is returned from a bogus ping into your /etc/dnsmasq.conf file. In my case I have;
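The dnsmasq option in question is bogus-nxdomain, which rewrites answers containing that address into a proper NXDOMAIN; a sketch (the address is a placeholder for whatever your bogus ping returns):

```
# /etc/dnsmasq.conf
```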

and normal service will be resumed!

Wednesday, 12 August 2009

HOWTO: Passwordless SSH using a public key

If you find yourself logging in to machines regularly, or you want to include ssh commands in a script (for example using rsync to back up), then you don't want to have to enter a password every time. In such cases you can use a public key.

The first thing we need to do is create a ssh key pair on the client host. Make sure that you login as the user who will be connecting to the server. In this case I am using the root user.
Warning: If your user already has a key pair then you should skip this step otherwise you may overwrite your existing key and potentially cause problems for other services that may already rely on them.

First, we should check whether there is already a keypair for our user;

ls -al ~/.ssh/

If there are files id_rsa and (or similar) listed then you already have a keypair and you should skip this step.

Creating an ssh key pair (press [enter] for each question asked);

ssh-keygen -t rsa

Note: It is important that you don't enter a passphrase when asked to! If you did, just run the command again; it will overwrite the key you just created.

You can check your new keys by looking in the .ssh folder

root@client:~# ls .ssh/
id_rsa  known_hosts

The one we are interested in here is the public key which ends with .pub. We need to copy this file to /root on the server.
Note: You can do this via scp or copy it onto a thumbdrive or even type it in from a printout if you like! I will leave it up to you to decide the best method in your situation.

On the server, we will need to login as the root user;

Now, we should have the public key file that we copied earlier in our root directory. Let's double check that;

root@server:~# ls -al *.pub
-rw-r--r-- 1 root root 392 2010-08-02 08:22

Great, it is there! We need to add this key to the root user's authorized_keys file;

cat >> .ssh/authorized_keys
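As an aside, recent OpenSSH packages ship an ssh-copy-id helper that performs the copy and append steps in one go; run it from the client while password logins are still allowed:

```
ssh-copy-id root@server
```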

We can test that this worked by going back to our client PC and logging into the server via ssh;

root@client:~# ssh root@server
Linux server 2.6.32-25-generic-pae #44-Ubuntu SMP Fri Sep 17 21:57:48 UTC 2010 i686 GNU/Linux
Ubuntu 10.04.1 LTS

Welcome to Ubuntu!
* Documentation:
Last login: Thu Oct 14 15:38:57 2010 from client

If it didn't ask you to enter a password then you are cooking with gas!

Tuesday, 4 August 2009

Virtualbox3 Headless with Bridged Networking

Note: This howto is now outdated due to changes introduced in Virtualbox 3.1x

See this post for an updated version.

As of karmic koala, Vbox 3 is provided via the standard Ubuntu repos. Unfortunately, this is the OSE version and it does not appear to work headless.

So, we have to download the "free" version from the Sun (soon to be Oracle?) website, which is currently here

At the time of writing there was no Karmic build, so I used the Jaunty package (virtualbox-3.0_3.0.4-50677_Ubuntu_jaunty_i386.deb)

Before we can install the deb, we will also need to install some dependencies.
sudo apt-get install python2.5 libcurl3 dkms libqt4-network libqtgui4 libxslt1.1

Now we can install the virtualbox deb that we downloaded earlier.
sudo dpkg -i virtualbox-3.0_3.0.4-50677_Ubuntu_jaunty_i386.deb

NOTE: When I installed this for the nth time I received the following error:
virtualbox-3.0.postinst: 118: /etc/init.d/vboxdrv: not found
I'm not sure if this was due to my previous installations of different versions or not. I figured it was, so I ignored it and things seemed to be OK. Of course YMMV.

Next, add your user account to the vboxusers group
sudo adduser brettg vboxusers

Virtualbox machines that you create will by default go in your home directory

Ensure vboxusers have appropriate permissions on the vboxdrv device node
sudo vi /etc/udev/rules.d/40-permissions.rules

KERNEL=="vboxdrv", GROUP="vboxusers", MODE="0660"

Creating a virtual machine
Create a machine named "io"
VBoxManage createvm -name io -register

Configure it with a nic bridged to eth0
VBoxManage modifyvm io --nic1 bridged --bridgeadapter1 eth0

Create a virtual DVD link called "dvd" to an ISO image on the server
VBoxManage registerimage dvd /store/archive/ISO/ubuntu-8.04-server-i386.iso

Connect the DVD to the virtual machine
VBoxManage modifyvm io -dvd /store/archive/ISO/ubuntu-8.04-server-i386.iso

Assign "io" 128Mb RAM, enable acpi and set to boot from DVD
VBoxManage modifyvm io -memory 128MB -acpi on -boot1 dvd 

Create an 8Gb virtual HDD named "io-sda.vdi"
VBoxManage createvdi -filename io-sda.vdi -size 8000 -register

Assign that Virtual Drive Image to "io"
VBoxManage modifyvm io -hda io-sda.vdi

Because we are installing Ubuntu Server as a guest we need to enable PAE
VBoxManage modifyvm io -pae on

Using the virtual machine
Start the machine
VBoxHeadless -startvm "io" &

On a GUI workstation, establish a remote desktop connection to the machine
rdesktop -a 16 io:3389

Congratulations, you are now up and running!

After you have installed the OS, you need to remove the DVD and instruct the machine to boot from the hdd.
VBoxManage modifyvm "io" -dvd none

You can also deregister the dvd image if you don't intend to use it again.
VBoxManage unregisterimage dvd /store/archive/ISO/ubuntu-8.04-server-i386.iso

Note: When I installed Ubuntu Server the network autodetection didn't work. After installation was completed there was no eth0 present. I simply added the following to /etc/network/interfaces
auto eth0
iface eth0 inet dhcp

and was then up and running

Other useful commands;
VBoxManage showvminfo io
VBoxManage list hdds
VBoxManage list runningvms
VBoxManage controlvm io poweroff
VBoxManage controlvm "io" savestate

Monday, 3 August 2009

Problems adding permissions in vmware server

Stop the web management service
sudo /etc/init.d/vmware-mgmt stop

Edit the authorisation file
sudo vi /etc/vmware/hostd/authorization.xml

locate this line;

Change it to read;

Restart the management service
sudo /etc/init.d/vmware-mgmt start

Monday, 27 July 2009

HOWTO: Backup with Amanda on 10.04 "Lucid"

I have updated this guide to take into account some changes introduced in the latest version of amanda, which ships with Ubuntu 10.04 "Lucid Lynx". The version at the time of writing is amanda 2.6.1p1.

Before installing amanda, your system should be able to send emails

I have a machine setup as a server/NAS and I have another machine which runs amanda for backing up user home directories. As this is only for a couple of home users I am happy for the backup to only run once a week on a 28 day cycle (roughly 4 backups a month). The NAS server is called "callisto" and the amanda server is named "ganymede"

The user data currently sits at around 60GB. I intend to set a maximum size of 100GB for user data on my server, therefore I will configure full weekly backups to single 100GB virtual tapes, of which there will be 5 in total. These virtual tapes will be directories on a 2TB external HDD which is mounted on /backups.

The settings here are for my own network and are intended to be a personal reference for when I need to set things up again. Feel free to follow these steps but take note that you will need to modify ip address and disk path details to suit your own setup.

Here is the procedure;

Install packages
sudo apt-get install xinetd amanda-server amanda-client dump

Create an xinetd entry for amanda
sudo vi /etc/xinetd.d/amanda

# default: on
# description: The amanda service

service amanda
{
        socket_type = stream
        protocol = tcp
        wait = no
        user = backup
        group = backup
        groups = yes
        server = /usr/lib/amanda/amandad
        server_args = -auth=bsdtcp amdump amindexd amidxtaped
        disable = no
}

Restart xinetd
sudo /etc/init.d/xinetd restart

Create the main holding disk directory
sudo mkdir -m 770 /dumps

Set file permissions;
sudo chown backup:backup /etc/amanda
sudo chown backup:backup /backups
sudo chown backup:backup /dumps
sudo chown backup:backup /etc/amandahosts

Change to backup user
sudo -u backup -s

Create the "weekly" directory where our amanda configs will be kept;
mkdir -m 770 /etc/amanda/weekly

Create amanda.conf
vi /etc/amanda/weekly/amanda.conf

org ""       # your organization name for reports
mailto "" # space separated list
dumpuser "backup" # the user to run dumps under
displayunit "g" # Possible values: "k|m|g|t"

netusage 10000 Kbps # maximum net bandwidth for Amanda, in KB per sec

dumpcycle 28 # the number of days in the normal dump cycle
runspercycle 4 # the number of amdump runs in dumpcycle days
tapecycle 5 tapes # the number of tapes in rotation
usetimestamps yes

bumpsize 20 Mb # minimum savings (threshold) to bump level 1 -> 2
bumppercent 20 # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1 # minimum days at each level
bumpmult 4 # threshold = bumpsize * bumpmult^(level-1)

inparallel 8

etimeout 300 # number of seconds per filesystem for estimates.
dtimeout 1800 # number of idle seconds before a dump is aborted.
ctimeout 30 # number of seconds that amcheck waits per host

runtapes 1 # number of tapes to be used in a single run of amdump
tpchanger "chg-disk" # the tape-changer glue script

tapedev "file:/backups/weekly/slots"

changerfile "/etc/amanda/weekly/changer"

maxdumpsize -1 # Maximum number of bytes the planner will schedule

tapetype HARDDISK

define tapetype HARDDISK {
length 100 gbytes
}

amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer "changer"

holdingdisk hd1 {
comment "main holding disk"
directory "/dumps" # where the holding disk is
use 200 Gb # how much space can we use on it
chunksize 2Gb # size of chunks
}

reserve 25 # percent reserved for degraded backups

autoflush no

infofile "/etc/amanda/weekly/curinfo" # database DIRECTORY
logdir "/etc/amanda/weekly" # log directory
indexdir "/etc/amanda/weekly/index" # index directory

define dumptype global {
program "GNUTAR"
comment "Global definitions"
# exclude list "/etc/amanda/exclude.gtar"
auth "bsd"
}

define dumptype full {
comment "Full dump of this filesystem always"
priority medium
compress none
dumpcycle 0
}

define dumptype full-compress {
comment "Full dump of this filesystem always"
priority high
compress server fast
dumpcycle 0
}

define dumptype normal {
comment "partitions dumped with tar"
priority low
compress none
}

define dumptype normal-compress {
comment "dump with tar, compress with gzip"
priority low
compress server fast
}

define interface local {
comment "a local disk"
use 2000 kbps
}

define interface eth0 {
comment "1000 Mbps ethernet"
use 1000 kbps
}

Create the disklist file

The format for the disklist file is :
host directory dumptype
Note: We defined the dumptypes in /etc/amanda/weekly/amanda.conf

vi /etc/amanda/weekly/disklist

callisto /exports/homes normal

Virtual Tapes

Create an empty tapelist file
touch /etc/amanda/weekly/tapelist

Create the location and set permissions for the virtual tapes
mkdir -p -m 770 /backups/weekly/slots

CD to the new directory
cd /backups/weekly/slots

Create the tape directories
for ((i=1; $i<=5; i++)); do mkdir slot$i; done

Create symlink for the data directory to point to the first tape
ln -s slot1 data

Label the tapes
for ((i=1; $i<=5; i++)); do amlabel weekly weekly-0$i slot $i; done

You should see 5 tapes are labeled which looks like this:
labeling tape in slot 1 (file:/backups/weekly/slots):
Reading label...
Found an empty tape.
Writing label weekly-01..
Checking label...

We need to reset the changer back to slot 1
amtape weekly reset

Now we need to configure the client. Log on to the client "callisto";

ssh brettg@callisto

Install the amanda client package;

sudo apt-get install amanda-client

Edit the amandahosts file
vi /etc/amandahosts

The format of this file is;
host user
where “host” refers to the client that is to be backed up and “user” is the user account that is authorised to do the backup.

This is what my /etc/amandahosts looks like;

ganymede backup amdump
ganymede root amindexd amidxtaped

If you want to exclude certain files or locations from the backup you will need to create an "exclude.gtar" file which lists your exclusions. Place this file in /etc/amanda/ on the client and uncomment the appropriate line in the "global" dump definition in your amanda.conf file.

Run amcheck on Server to verify configuration files, connections, etc.
amcheck weekly

If all went well you should see this:

backup@ganymede:/backups/weekly/slots$ amcheck weekly
Amanda Tape Server Host Check
Holding disk /dumps: 485 GB disk space available, using 200 GB as requested
slot 1:read label `weekly-01', date `X'.
NOTE: skipping tape-writable test
Tape weekly-01 label ok
NOTE: host info dir /etc/amanda/weekly/curinfo/callisto does not exist
NOTE: it will be created on the next run.
NOTE: index dir /etc/amanda/weekly/index/callisto does not exist
NOTE: it will be created on the next run.
Server check took 15.763 seconds

Amanda Backup Client Hosts Check
Client check: 1 host checked in 1.864 seconds. 0 problems found.

(brought to you by Amanda 2.6.1p1)

And that's it. Amanda is setup and ready to go!

Doing a manual backup (log in as backup user)
amdump weekly
You can add that command to the backup user's crontab for automated backups
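For example, the backup user's crontab could run the weekly dump early on Sunday mornings (the day and time shown here are just an illustration):

```shell
# edit the backup user's crontab with: crontab -e
# m  h  dom mon dow  command
30   2  *   *   0    /usr/sbin/amdump weekly
```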

Restoring the backup (log in as backup user)

Change to the target directory (backup user must have write permissions here)

cd /store/tmp/restore

Select the virtual tape to restore from;

/usr/sbin/amtape weekly slot 1

Restore from the virtual tape

/usr/sbin/amrestore file:/backups/weekly/slots callisto /exports/homes

Finally, to unpack the files simply use tar

sudo tar xvf callisto._store_users.20090802.0

So there you have it, backing up user files to virtual tapes hosted on an external USB is all up and running. Go grab yourself a beer!

Wednesday, 15 July 2009

Disable password authentication for SSH server

in /etc/ssh/sshd_config

RSAAuthentication yes
PubkeyAuthentication yes
PasswordAuthentication no
UsePAM no
ChallengeResponseAuthentication no
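Before restarting sshd (and before closing your current session, since a typo here can lock you out), it is worth sanity-checking the file. This little checker is just a sketch of my own; the option list mirrors the lockdown settings above:

```shell
#!/bin/sh
# Verify the lockdown options are present in an sshd_config.
# Defaults to the real file; pass another path to test a copy first.
check_sshd() {
    conf="${1:-/etc/ssh/sshd_config}"
    rc=0
    for opt in "PubkeyAuthentication yes" \
               "PasswordAuthentication no" \
               "ChallengeResponseAuthentication no" \
               "UsePAM no"; do
        if grep -q "^$opt" "$conf"; then
            echo "OK: $opt"
        else
            echo "MISSING: $opt"
            rc=1
        fi
    done
    return $rc
}
```

If the check passes you can restart the daemon with `sudo /etc/init.d/ssh restart`.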

Tuesday, 23 June 2009

Paths and Environment

To display your environment settings;

env
Add a path;

export PATH=$PATH:/your/new/path

Put this in .bashrc to create a “path” command;

alias path='echo -e ${PATH//:/\\n}'
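If you add directories to PATH from several startup files, a small guard helps avoid duplicate entries. The "pathadd" name is my own convention, not a standard command:

```shell
# A .bashrc helper that appends a directory to PATH only if it is
# not already present.
pathadd() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already in PATH, nothing to do
        *) PATH="$PATH:$1" ;;
    esac
    export PATH
}
```

Calling `pathadd /your/new/path` twice adds the directory only once.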

Software RAID on 10.04 Lucid Lynx

Software RAID is a bit of a pain, but hardware RAID controllers that work properly in Linux are too damned expensive for a simple home server.

Here's some notes.
To use RAID, you need two or more drives partitioned as Linux RAID members, and you need to note down which of these devices will become members of your array. This guide will not cover how to partition drives; suffice to say that you can do it using fdisk (console) or gparted (gui). Remember, as always, Google is your friend!

The first step (after partitioning your target drives) is to install the Linux raid utils package.

sudo apt-get install mdadm

This is the command I use to create a 4 drive RAID0 (stripe, no parity) array

sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

Simply change the "level" to make a RAID5 (stripe with parity) array

sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

To add another drive later on use this command;

sudo mdadm /dev/md0 --add /dev/sdh

It seems that every time I have to rebuild a system with a pre-existing array I have trouble with it automagically mounting an array called /dev/md_d0 at reboot and everything gets borked until you manually fix it. This is what I have to do;

Log in as root (you need a root shell in order to run mkconf)

sudo -i

Stop the automagic array

mdadm --manage /dev/md_d0 --stop

Re-create the array properly

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

Recreate the mdadm config for the array

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

I prefer to use UUID rather than discrete devices whenever possible;

Find the UUID of the array.

blkid /dev/md0

This will return something like this;

/dev/md0: UUID="895c982b-5d2c-4909-b5bf-4ba5a1d049e9" TYPE="ext3"

Add a line to automatically mount the array in /etc/fstab

vi /etc/fstab

Here is a typical line

UUID=895c982b-5d2c-4909-b5bf-4ba5a1d049e9 /store ext3 defaults,relatime,errors=remount-ro 0 2
You need to change the UUID to the one that was returned by the above blkid command, as well as the mount point that you want to mount it on. Make sure you create the appropriate mount point too!
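If you'd rather not copy the UUID by hand, a small helper can build the fstab line straight from the blkid output. This is just a sketch; the mount point, filesystem and mount options follow the example above:

```shell
#!/bin/sh
# Turn a `blkid /dev/md0` line into an fstab entry.
mk_fstab_line() {
    # $1 = mount point; reads one blkid line on stdin
    sed -n 's|.*UUID="\([^"]*\)".*|UUID=\1 '"$1"' ext3 defaults,relatime,errors=remount-ro 0 2|p'
}
# usage: blkid /dev/md0 | mk_fstab_line /store | sudo tee -a /etc/fstab
```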

Once this is all done you should be up and running.

To permanently remove raid member disks;

sudo mdadm --zero-superblock /dev/sdX

Good luck and have fun with the penguin!

(last revised 12/08/2010)

Tuesday, 16 June 2009

Playing With Eucalyptus

I've been playing about with Eucalyptus using the guide found here

Unfortunately, that guide makes some assumptions and doesn't make it clear exactly what and where you should be performing the individual steps.

So, here is my modified version that is hopefully a bit less ambiguous.

A Eucalyptus system includes three high level packages:

eucalyptus-cloud - Provides front-end services (Cloud Controller) & the Walrus storage system.
eucalyptus-cc - Provides the Cluster Controller that provides support for the virtual network overlay
eucalyptus-nc - The Node Controller(s) interacts with KVM to manage the individual VMs

In the basic Eucalyptus setup I am building, the system is composed of two machines (a front-end and a node). The front end runs both eucalyptus-cloud and eucalyptus-cc. Because all these server components communicate via network messages, it is possible to separate the cloud and the cluster controller if required for larger or more complex multi-host setups.

Initially it was my idea to run the whole thing in virtual machines on my existing vmware server network, but I soon discovered that the virtualisation features of KVM would not work in an already virtualised machine so currently I have the cloud machine (including the CC) on a vmware guest and the cloudnc1 running on an old IBM NetVista desktop PC.

Step ONE: Install the OS on all target machines.

Obviously we are going to use Jaunty Server as that is the first version of Ubuntu that natively supports Eucalyptus. Simply do a standard install from the Jaunty CD to your target PC's.

Optional: Setup apt-cacher-ng

Step TWO: Configuring the network.

Set up your local dns (or /etc/hosts on each machine) to apply names to the appropriate machines. In my case I use a local dns server, but if you use hosts files you would put entries for cloud, cloudcc and cloudnc1 in each file (adjusting to suit your particular IP subnet, of course).

You should be able to ping each machine by name now.

On the NC, you need to configure the network interface as a bridge. Here is a minimal example for /etc/network/interfaces;
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Step THREE: Installing eucalyptus

sudo apt-get install eucalyptus-cloud eucalyptus-cc (on the cloud/cc machine)

sudo apt-get install eucalyptus-nc (on the node controller machines)

Now, edit /etc/eucalyptus.conf and change the line starting with VNET to;


The remainder of this article is based on the original documentation from the Eucalyptus website and modified to be Jaunty specific and to clarify the places I found hard to understand.

a. Front-end Configuration

To connect the Eucalyptus components together, you will need to register the Cluster with the Cloud, and register each Node with the Cluster. On the front-end, do:

brettg@cloud:~$ sudo /usr/sbin/euca_conf -addcluster testcluster cloudcc /etc/eucalyptus/eucalyptus.conf
New cluster 'cloudcc' on host 'cloudcc' successfully added.

Add the hostname for the node controller;

brettg@cloud:~$ sudo /usr/sbin/euca_conf -addnode cloudnc1 /etc/eucalyptus/eucalyptus.conf
[sudo] password for brettg:
First, please run the following commands on 'cloudnc1':

sudo apt-get install eucalyptus-nc
sudo tee ~eucalyptus/.ssh/authorized_keys > /dev/null <<EOT
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAyjVCCEXvkWYEy6DPaaPbndYBsGOrKZfKqlRi/WA7OMLYnOJ229dz3f5y+KgSqEOAsyQDsuk2WnK+wleQ82HJdAOV9z1MAUZC0bH2lV5NbTwYfWNotPZal+Pey5zhhOsdx0Qzir2pYDuAYJvRopfuTpCzPybAj/bUj943iDWTCMrGGr0NZsY4tOPHekKgDyph5c3S4U4odqnBWGAYPZIRSzf+BBs+Z3xK8+vsroNsC79TkZ/lXQMEOAkgytHuxVJ9FU5V5mTzwJxsg8nBVrkxgkNiIsB9aSHZQk0KbOPJ0leejI7UPNstXi3HAzrwMrpRAKi/Bu6+2hkMkJmS4t+EGQ== eucalyptus@cloud
EOT

hit return to continue

At this point you need to copy everything beginning with "sudo tee" and ending with "EOT" (inclusive) and then login to the NC and paste it into a bash shell.

After you do that, hit "enter" and you should see something like this;

cloud-cert.pem 100% 1289 1.3KB/s 00:00
cloud-pk.pem 100% 1675 1.6KB/s 00:00
cluster-cert.pem 100% 1302 1.3KB/s 00:00
cluster-pk.pem 100% 1679 1.6KB/s 00:00
clusters.p12 100% 7539 7.4KB/s 00:00
euca.p12 100% 5035 4.9KB/s 00:00
nc-client-policy.xml 100% 2834 2.8KB/s 00:00
node-cert.pem 100% 1302 1.3KB/s 00:00
node-pk.pem 100% 1675 1.6KB/s 00:00
users.p12 100% 2646 2.6KB/s 00:00
SUCCESS: added node 'cloudnc1' to '/etc/eucalyptus/eucalyptus.conf'

2. Running Eucalyptus

First, make sure that you have all of the runtime dependencies of Eucalyptus installed, based on your chosen set of configuration parameters. If there is a problem with runtime dependencies (for instance, if Eucalyptus cannot find/interact with them), all errors will be reported in log files located in /var/log/eucalyptus on the front end.

Use the init-scripts to restart each component on the appropriate host.

On the front-end;

sudo /etc/init.d/eucalyptus-cloud restart
sudo /etc/init.d/eucalyptus-cc restart

And on compute node you would run:

sudo /etc/init.d/eucalyptus-nc restart

3. First-time Run-time Setup

To configure eucalyptus, after you have started all components, log in to the web admin interface, which by default listens on https port 8443 on the front-end;

(WARNING: on some machines it may take few minutes after starting the Cloud Controller for the URL to be responsive) You will be prompted for a user/password which is set to admin/admin. Upon logging in you will be guided through three first-time tasks:

1. You will be forced to change the admin password.
2. You will be asked to set the admin's email address.
3. You will be asked to confirm the URL of the Walrus service

To use the system with the EC2 client tools, you must generate user credentials. Click the 'Credentials' tab and download your certificates via the 'Download certificates' button. You will be able to use these x509 certificates with Amazon EC2 tools and other third-party tools.

On your admin workstation create a directory;

mkdir $HOME/.euca

unpack the credentials into it, and execute the included 'eucarc':

. $HOME/.euca/eucarc

Note that you will have to source this file (with its full path) every time you intend to use the EC2 command-line tools, or you may add it to your local default environment.
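One way to make that automatic is to source the credentials from your shell startup file (the path assumes the .euca directory created above):

```shell
# Append the source line to .bashrc so every new shell gets the
# EC2 environment variables. Single quotes keep $HOME unexpanded.
echo '. $HOME/.euca/eucarc' >> ~/.bashrc
```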

Note: As for getting a virtual machine image working, I have yet to figure that much out. The documentation is again rather lacking in that regard.

Monday, 18 May 2009

HOWTO: Using apt-cacher-ng to cache packages

If you have a lot of machines to manage, you don't want them all reaching out to the internet to retrieve updates and using up your precious bandwidth. What you want is a local machine that can keep copies of all the files that your machines download from the repositories so that the next machine that asks for the same file can just be given the copy that has already been downloaded. You could of course use squid, but squid is a less than perfect solution as it is not designed to ensure files are kept on hand until they are no longer needed.

apt-cacher-ng is a purpose built application that keeps track of all the packages that have been downloaded. If a client requests some-package.v123.deb it checks to see if it is already cached locally and if so it provides it to the client.

If the package has not previously been requested, it then downloads the package, provides it to the client, adds it to the cache and, here is the important part, deletes all older, outdated versions of that file!

It will keep cached files indefinitely, unlike squid which is designed to flush files if they remain unrequested for a predefined period.

So, to use apt-cacher-ng, your /etc/apt/sources.list file doesn't need to be modified or configured in any special way. Just leave it configured as you would normally (in my case these are the official Australian repos with the source repos removed);


deb http://au.archive.ubuntu.com/ubuntu/ lucid main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-updates main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-backports main restricted
deb http://au.archive.ubuntu.com/ubuntu/ lucid-backports universe multiverse
deb http://au.archive.ubuntu.com/ubuntu/ lucid universe multiverse
deb http://au.archive.ubuntu.com/ubuntu/ lucid-updates universe multiverse
deb http://security.ubuntu.com/ubuntu lucid-security main restricted
deb http://security.ubuntu.com/ubuntu lucid-security universe multiverse

To install apt-cacher-ng, simply do;

sudo apt-get install apt-cacher-ng

And that is it, there is no further configuration needed!

For each client however, you do need to make a few small changes;

Create (or edit) the file /etc/apt/apt.conf and add the following line (substituting the IP address for the address of your server of course!)

Acquire::http { Proxy "http://your.server.ip:3142"; };

If you use Synaptic, you also need to modify another file;

sudo vi /root/.synaptic/synaptic.conf

Add the following lines at the end, but before the final closing brace (ie. before the "};")

useProxy "1";
httpProxy "your.server.ip";
httpProxyPort "3142";
ftpProxy "your.server.ip";
ftpProxyPort "3142";
noProxy "";

For the changes to be applied you need to do an apt-get update;

sudo apt-get update

If this completes without error you are done! It is that easy.

Thursday, 14 May 2009

HOWTO: Deluge on a Headless Server

Install Deluge on the server;

sudo apt-get install deluged deluge-console deluge-web

Before we start the deluge daemon, we will tell it which user(s) can connect. To do this we add a "user" to the authentication file;

mkdir -p ~/.config/deluge
echo "username:password" >> ~/.config/deluge/auth

The username and password can be anything you like, they do not have to correspond with a username and password combo from UNIX userland.
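Note: newer Deluge releases expect a third colon-separated field in the auth file, the authentication level (10 being full admin). The level field is an assumption about the version you run; 1.1-era daemons used just username:password as shown above.

```shell
# Create the auth entry with an explicit auth level (10 = full admin).
mkdir -p ~/.config/deluge
echo "username:password:10" >> ~/.config/deluge/auth
```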

Now we can start the deluge daemon;

deluged
By default connections from remote hosts are disabled. We need to enable them using the deluge console.

Load the Console UI;

deluge-console
Enter the following command into the console;

config -s allow_remote True

Exit the Console UI with the 'exit' command;

exit
Open your deluge client on your workstation and open the 'Preferences' dialog. Disable 'Classic mode' on the 'Interface' page.

Restart the client and you should now be able to use the GTK Deluge client to connect to your Deluge server using the 'Connection Manager' dialog.

To start the web process;

deluge-web &

This small script will start both processes and then confirm that they are running using 'ps';

deluged
deluge-web &
ps ax | grep deluge | grep -v grep

You can now browse to your server on port 8112;

http://yourserver:8112
The default password is 'deluge'. You should change this of course.

You do that using the WebUI in 'Preferences > Interface'

Tuesday, 14 April 2009

Windows 7

It occurs to me that the main reasons that corps are avoiding the move to Vista, and apparently W7 now too, are the perceived compatibility problems and the retraining that would be required.

Interestingly, these are also the two main reasons usually touted to explain why Linux will never displace Windows on the desktop.

On top of that, there is the whole bloat factor that would require corps to roll out vast numbers of hardware replacements to cope with the vastly increased resource demands of MS latest offerings, were they to choose the upgrade option. This is a problem that Linux does not suffer from.

It seems to me that the only thing keeping most corps on Windows is the "If it aint broke, don't fix it" attitude, but eventually XP will become broken, if only due to extreme old age.

The only question then becomes, how many MS generations will have been skipped by corps? Inevitably, each new Windows version will be "improved" (ie made different) and the longer lusers get used to XP the harder it will be to shift them to what will possibly be a vastly different OS.

Ironically, Linux has the potential to be a less painful transition target than Microsoft's own offerings.

You see, Microsoft has a problem. For years they have maintained an unholy symbiosis with hardware manufacturers. MS demands that their OEMs sell Windows *exclusively* in return for "marketing assistance". In return, MS promises to greatly increase the hardware requirements of each new release in order to "stimulate" hardware sales.

This worked fine for a decade or so, as the first versions of Windows were undeniably crappy. WFW 3.11 was the first usable version of Windows, albeit it was little more than a glorified menu system. As well, despite having the same clunky gui as WFW3.11, Windows NT 3.5 was a huge improvement for corps as far as networking and stability was concerned. When the W98/NT4.0 user interface was introduced it added significant additional memory and other hardware requirements but the improvements on offer were worth the upgrades. Also, people weren't dumbed down into a user interface monoculture at that point so they were more amenable to adapting.

Finally, along came W2K/XP. At this point we have the unification of the W95/98 branch with the NT branch. Once again, corporate friendly improvements were added with active directory and the new driver model amongst other things and again, the additional hardware requirements were worth these features alone, and the user interface had remained largely unchanged since the release of 95 so for corps it was a no brainer.

Microsoft sells operating systems, corporates get much more control over their increasing numbers of seats, and hardware manufacturers receive a steady stream of orders as companies purchase seemingly endless numbers of new PCs to support the newer OSs.

Then MS dropped the ball. After blowing their own horn at great length about the amazing new feature set of their next OS, "Longhorn", which was intended to be rewritten entirely from scratch, they eventually, after a previously unheard-of delay between releases, served up the steaming turd that is Vista. Longhorn had been quietly and unceremoniously dumped a year or two earlier when it was realised that they were never going to make it work, and the industry was starting to make jokes about the ever increasing list of dropped features and delay announcements coming from Redmond. Microsoft was quickly becoming the laughing stock of the industry.

So, it was decided to dust off the old XP code, polish it up and call it a new release. A whole lot of bloat was added in the form of DRM restrictions, which are in no way an enticing "feature" as far as corporates are concerned. Security was "improved" in the form of UAC, which might possibly be of small value to home users; but for corps, who for the most part have their desktops already locked down and don't want their users to have admin rights, this new feature was once again totally unwanted.

To make things worse, in their hurry to differentiate Vista from XP, MS had slapped together a fancy new 3D gui which ultimately was to be Vista's primary "selling point". This too offered nothing to corporate users: not only does the new interface require vast investments in new hardware, it also means that users who have spent 5+ years using XP and have long forgotten how to adapt to new interfaces will need retraining.

The bottom line is that by moving to Vista, corps receive VERY little benefit, and what little they get comes at great cost.

So, what about the great Saviour that is Windows 7? MS say they have stripped the bloat. I will ignore speculating on exactly why the bloat was there in the first place and simply wonder about how they have done so. I haven't played with W7. Some people report that it is in fact more nimble and less demanding on hardware, but in this game astroturfers, shills and fanboi's abound so it is hard to say.

But what I can say is that if it is not substantially similar to XP in the areas of compatibility, user interface and hardware requirements, then I doubt very much that corps will have much interest in it either. Quite frankly, I can't see Microsoft pulling out most of the code they added with Vista; I think it is far more likely they have simply redone some of the hurried code they produced in the rush to create a "new" product in the wake of the Longhorn debacle.

Meanwhile, we have the biggest financial crisis since the 1930's on our hands and businesses are hardly of a mind to go out and purchase new fleets of PC's with the latest Windows extravaganza preloaded. I reckon businesses will continue to ride the depression out as best they can with the equipment they have now. But if Microsoft tries too hard to bully them into dropping XP in favour of W8 I think they might find they will be more successful than they think at shifting business over to a newer platform, it just might not be the platform they are hoping they will move to.

Perhaps 2010 will be the year of Tux?

Saturday, 28 March 2009

Fix broken keyboard mapping - VMWare in Ubuntu Intrepid

If all your arrows and special keys have stopped working in vmware, simply edit your vmware config file

sudo gedit /etc/vmware/config

and add this line

xkeymap.nokeycodeMap = true

You will need to close down vmware and restart to make the change take effect

Friday, 27 March 2009

UPDATED: Using SSMTP to send mail via GMail

UPDATED July 2014, to reflect the fact that google smtp servers are on port 587 now.

Often you will have a bash script that requires the ability to send emails out, such as alerts and status updates. Or perhaps you just want to get the normal alerts that all Unix servers attempt to send from time to time.

Out of the box Linux systems don't send email unless they are configured to do so. You can muck about configuring sendmail to use a "smarthost" if you like or you can do what I do and use a simple little smtp service that is quick and easy to configure to send mails via a GMail account.

If you have your own domain then it is probably better to use sendmail to send mails direct rather than relaying mail via the Big G

Obviously, for this all to work you will need a GMail account of your own. Is GMail still invite only? I have no idea. If it is and you need an invite then enter a comment below with your email address and I will be happy to send you one!

OK, let's get started.

First we need to remove sendmail and install ssmtp

sudo apt-get remove sendmail
sudo apt-get install ssmtp mailutils

SSMTP requires a little configuring so edit /etc/ssmtp/ssmtp.conf;

sudo vi /etc/ssmtp/ssmtp.conf

Enter the details for the following fields;

mailhub = smtp.gmail.com:587
rewriteDomain = yourdomain.com
AuthUser = myusername@gmail.com
AuthPass = mypassword

You should replace "yourdomain.com" with your own domain as well as provide the details for your GMail account. Using 'myusername' and 'mypassword' literally is a recipe for FAIL

Testing to see if things work.

Send an email from the command line like this;

mail -s "test config" youraddress@yourdomain.com

To see if it was sent you can check /var/log/mail.log;

tail -f /var/log/mail.log

You should see your email logged as "SENT" in the output there.

Here is a sample ssmtp.conf for your convenience (substitute your own account details);
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root=myusername@gmail.com

# The place where the mail goes. The actual machine name is required; no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
mailhub=smtp.gmail.com:587

# Where will the mail seem to come from?
rewriteDomain=yourdomain.com

# The full hostname
hostname=yourhostname

# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES

AuthUser=myusername@gmail.com
AuthPass=mypassword
UseSTARTTLS=YES
And that's it! Simples!

[ Tested and confirmed on Lucid and Natty servers ]

Thursday, 12 March 2009

Disappearing network interfaces in Ubuntu Server

If you change network cards on ubuntu server then you will find that the new cards no longer come up. This also occurs when you copy a vmware virtual machine.

To fix this, edit the persistent-net-rules file

sudo vi /etc/udev/rules.d/70-persistent-net.rules

You should see a line (plus a commented heading) for the affected interface(s) like this;
# PCI device 0x14e4:0x1659 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1c:c0:db:32:e7", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"


Anyway, all you need to do is delete the line for the affected interface and reboot your system.

Once the system has rebooted, the persistent-net-rules file will be repopulated with the details for the new interface and your Ethernet adapter will be working once again.
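If you prefer not to hand-edit the file, the stale rule can be removed with sed. The function below is just a sketch; the interface name and rules path are the defaults from this post:

```shell
#!/bin/sh
# Remove the stale udev rule for one interface, then reboot as described.
drop_iface_rule() {
    # $1 = interface name, $2 = rules file
    sed -i "/NAME=\"$1\"/d" "$2"
}
# e.g. as root:
#   drop_iface_rule eth0 /etc/udev/rules.d/70-persistent-net.rules
```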

Thursday, 5 March 2009

DEPRECATED: Setting the hostname in gnome terminal

If you SSH to another host you might want to display the hostname of the remote host in the gnome terminal title bar

Put this code in your ~/.bashrc file;

case $TERM in
PROMPT_COMMAND='echo -ne "\033]0;${USER}($(id -ng))@${HOSTNAME}: