Tuesday 23 June 2009

Paths and Environment

To display your environment settings:

printenv

Add a path:

export PATH=$PATH:/your/new/path
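Note that this only lasts for the current shell session; to make it permanent, append the same line to your .bashrc, e.g.:

echo 'export PATH=$PATH:/your/new/path' >> ~/.bashrc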

Put this in .bashrc to create a "path" command:

alias path='echo -e ${PATH//:/\\n}'
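Re-source .bashrc and the new command will print each PATH entry on its own line, something like:

source ~/.bashrc
path
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin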

Software RAID on 10.04 Lucid Lynx

Software RAID is a bit of a pain, but hardware RAID controllers that work properly in Linux are too damned expensive for a simple home server.

Here are some notes.
To use RAID, you need two or more drives partitioned as Linux RAID members, and you need to note down which of these devices will become members of your array. This guide will not cover how to partition drives; suffice to say that you can do it using fdisk (console) or gparted (GUI). Remember, as always, Google is your friend!

The first step (after partitioning your target drives) is to install the Linux RAID utilities package:

sudo apt-get install mdadm

This is the command I use to create a 4-drive RAID0 (stripe, no parity) array:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

Simply change the "level" to make a RAID5 (stripe with parity) array:

sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg
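Either way, you can keep an eye on the array being built and check its status with:

cat /proc/mdstat
sudo mdadm --detail /dev/md0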

To add another drive later on, use this command:

sudo mdadm /dev/md0 --add /dev/sdh
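Note that on a RAID5 array this only adds the new disk as a hot spare; to actually expand the array onto it you also need to grow it, something like this (adjust the device count to suit):

sudo mdadm --grow /dev/md0 --raid-devices=5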

It seems that every time I have to rebuild a system with a pre-existing array, it automagically mounts an array called /dev/md_d0 at reboot and everything gets borked until you manually fix it. This is what I have to do:

Log in as root (you need to do this in order to run mkconf):

sudo -i

Stop the automagic array:

mdadm --manage /dev/md_d0 --stop

Re-create the array properly (mdadm will warn that the devices appear to contain an existing array; confirm to continue):

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

Recreate the mdadm config for the array:

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
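It also seems a good idea to refresh the initramfs afterwards so the corrected config is used at boot:

update-initramfs -u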

I prefer to use UUIDs rather than discrete device names whenever possible.

Find the UUID of the array.

blkid /dev/md0

This will return something like this:

/dev/md0: UUID="895c982b-5d2c-4909-b5bf-4ba5a1d049e9" TYPE="ext3"

Add a line to /etc/fstab to automatically mount the array:

vi /etc/fstab

Here is a typical line:

UUID=895c982b-5d2c-4909-b5bf-4ba5a1d049e9 /store ext3 defaults,relatime,errors=remount-ro 0 2

You need to change the UUID to the one that was returned by the blkid command above, as well as the mount point that you want to mount it on. Make sure you create the appropriate mount point too!
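For example, using the mount point from the line above (mount -a mounts everything in fstab, so it also verifies the new entry):

mkdir /store
mount -a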

Once this is all done you should be up and running.

To permanently remove RAID member disks:

sudo mdadm --zero-superblock /dev/sdX
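The superblock of a member can only be zeroed once the array is stopped (or the disk has been removed from the array), so stop it first if need be:

sudo mdadm --stop /dev/md0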

Good luck and have fun with the penguin!

(last revised 12/08/2010)

Tuesday 16 June 2009

Playing With Eucalyptus

I've been playing about with Eucalyptus using the guide found here.

Unfortunately, that guide makes some assumptions and doesn't make it clear exactly what the individual steps do and where you should be performing them.

So, here is my modified version that is hopefully a bit less ambiguous.

A Eucalyptus system includes three high level packages:

eucalyptus-cloud - provides the front-end services (the Cloud Controller) and the Walrus storage system
eucalyptus-cc - provides the Cluster Controller, which supports the virtual network overlay
eucalyptus-nc - provides the Node Controller(s), which interact with KVM to manage the individual VMs

In the basic Eucalyptus setup I am building, the system is composed of two machines (a front-end and a node). The front end runs both eucalyptus-cloud and eucalyptus-cc. Because all these server components communicate via network messages, it is possible to separate the cloud and the cluster controller if required for larger or more complex multi-host setups.

Initially my idea was to run the whole thing in virtual machines on my existing VMware Server network, but I soon discovered that the virtualisation features of KVM would not work in an already-virtualised machine, so currently I have the cloud machine (including the CC) running as a VMware guest and cloudnc1 running on an old IBM NetVista desktop PC.

Step ONE: Install the OS on all target machines.

Obviously we are going to use Jaunty Server, as that is the first version of Ubuntu that natively supports Eucalyptus. Simply do a standard install from the Jaunty CD to your target PCs.

Optional: Set up apt-cacher-ng

Step TWO: Configuring the network.

Set up your local DNS (or /etc/hosts on each machine) to apply names to the appropriate machines. In my case I use a local DNS server, but if you use hosts files you would put this in each file (adjusting to suit your particular IP subnet, of course):

10.100.1.100 cloud cloudcc
10.100.0.101 cloudnc1

You should be able to ping each machine by name now.

On the NC, you need to configure the network interface as a bridge. Here is a minimal example for /etc/network/interfaces:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
address 10.100.0.101
netmask 255.255.255.0
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
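Note that bridging like this needs the bridge-utils package, so install it on the NC if it isn't already there, and then restart networking to bring the bridge up:

sudo apt-get install bridge-utils
sudo /etc/init.d/networking restart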

Step THREE: Installing Eucalyptus

On the cloud/cc machine:

sudo apt-get install eucalyptus-cloud eucalyptus-cc

On the node controller machines:

sudo apt-get install eucalyptus-nc

Now, edit /etc/eucalyptus/eucalyptus.conf on the node controller and change the line starting with VNET_BRIDGE to:

VNET_BRIDGE="br0"
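You can check that the bridge is actually up on the NC with brctl (part of bridge-utils):

brctl show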

The remainder of this article is based on the original documentation from the Eucalyptus website and modified to be Jaunty specific and to clarify the places I found hard to understand.

1. Front-end Configuration

To connect the Eucalyptus components together, you will need to register the Cluster with the Cloud, and register each Node with the Cluster. On the front-end, do:

brettg@cloud:~$ sudo /usr/sbin/euca_conf -addcluster testcluster cloudcc /etc/eucalyptus/eucalyptus.conf
New cluster 'cloudcc' on host 'cloudcc' successfully added.


Add the hostname for the node controller:

brettg@cloud:~$ sudo /usr/sbin/euca_conf -addnode cloudnc1 /etc/eucalyptus/eucalyptus.conf
[sudo] password for brettg:
/var/lib/eucalyptus
First, please run the following commands on 'cloudnc1':

sudo apt-get install eucalyptus-nc
sudo tee ~eucalyptus/.ssh/authorized_keys > /dev/null <<EOT
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAyjVCCEXvkWYEy6DPaaPbndYBsGOrKZfKqlRi/WA7OMLYnOJ229dz3f5y+KgSqEOAsyQDsuk2WnK+wleQ82HJdAOV9z1MAUZC0bH2lV5NbTwYfWNotPZal+Pey5zhhOsdx0Qzir2pYDuAYJvRopfuTpCzPybAj/bUj943iDWTCMrGGr0NZsY4tOPHekKgDyph5c3S4U4odqnBWGAYPZIRSzf+BBs+Z3xK8+vsroNsC79TkZ/lXQMEOAkgytHuxVJ9FU5V5mTzwJxsg8nBVrkxgkNiIsB9aSHZQk0KbOPJ0leejI7UPNstXi3HAzrwMrpRAKi/Bu6+2hkMkJmS4t+EGQ== eucalyptus@cloud
EOT

hit return to continue

At this point you need to copy everything beginning with "sudo tee" and ending with "EOT" (inclusive), then log in to the NC and paste it into a bash shell.

After you do that, hit "enter" and you should see something like this:

cloud-cert.pem 100% 1289 1.3KB/s 00:00
cloud-pk.pem 100% 1675 1.6KB/s 00:00
cluster-cert.pem 100% 1302 1.3KB/s 00:00
cluster-pk.pem 100% 1679 1.6KB/s 00:00
clusters.p12 100% 7539 7.4KB/s 00:00
euca.p12 100% 5035 4.9KB/s 00:00
nc-client-policy.xml 100% 2834 2.8KB/s 00:00
node-cert.pem 100% 1302 1.3KB/s 00:00
node-pk.pem 100% 1675 1.6KB/s 00:00
users.p12 100% 2646 2.6KB/s 00:00
SUCCESS: added node 'cloudnc1' to '/etc/eucalyptus/eucalyptus.conf'


2. Running Eucalyptus

First, make sure that you have all of the runtime dependencies of Eucalyptus installed, based on your chosen set of configuration parameters. If there is a problem with runtime dependencies (for instance, if Eucalyptus cannot find/interact with them), all errors will be reported in log files located in /var/log/eucalyptus on the front end.
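If a component fails to come up, you can watch the main cloud log on the front end, for example (exact log file names may vary):

tail -f /var/log/eucalyptus/cloud-output.log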

Use the init-scripts to restart each component on the appropriate host.

On the front-end:

sudo /etc/init.d/eucalyptus-cloud restart
sudo /etc/init.d/eucalyptus-cc restart

And on each compute node you would run:

sudo /etc/init.d/eucalyptus-nc restart

3. First-time Run-time Setup

To configure Eucalyptus, after you have started all the components, log in to:
https://10.100.1.100:8443/

(WARNING: on some machines it may take a few minutes after starting the Cloud Controller for the URL to become responsive.) You will be prompted for a user/password, which is initially set to admin/admin. Upon logging in you will be guided through three first-time tasks:

1. You will be forced to change the admin password.
2. You will be asked to set the admin's email address.
3. You will be asked to confirm the URL of the Walrus service.

To use the system with the EC2 client tools, you must generate user credentials. Click the 'Credentials' tab and download your certificates via the 'Download certificates' button. You will be able to use these X.509 certificates with Amazon EC2 tools and third-party tools like rightscale.com.

On your admin workstation, create a directory:

mkdir $HOME/.euca

Unpack the credentials into it, and execute the included 'eucarc':

. $HOME/.euca/eucarc
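The unpacking step might look like this, assuming the downloaded archive is called euca2-admin-x509.zip (the actual name may differ):

unzip euca2-admin-x509.zip -d $HOME/.euca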

Note: you will have to source this file (using its full path) every time you intend to use the EC2 command-line tools, or you may add it to your local default environment.
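Once the environment is loaded, a quick sanity check is to query the cloud for its availability zones, e.g. using euca2ools (assuming the euca2ools package is installed):

euca-describe-availability-zones verbose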

Note: as for getting a virtual machine image working, I have yet to figure that much out. The documentation is again rather lacking in that regard.