Monday 30 May 2011

HOWTO: Relocate LVM to a New Server

The main storage on my home server is on 2 x 2TB hard disk drives, which are configured as a single LVM volume group.

(For a guide on configuring LVM from scratch see this article.)

My server currently runs Ubuntu 10.04 but I've decided to bite the bullet and swap it over to Debian Squeeze.

I've never moved an LVM volume to another server / OS installation so it's time to learn how to do it, I guess.

Note:
It should be obvious but I will nevertheless say it here anyway. Mucking about with file-systems is a dangerous thing to do and any misstep can lead to disastrous, catastrophic and permanent DATA LOSS! Ensure that you have adequate backups before attempting this procedure. You have been warned!

First, you need to log in as root;

sudo -i

Get the details for your current LVM Volume Group(s);

vgdisplay
--- Volume group ---
VG Name store
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 953862 / 3.64 TiB
Free PE / Size 0 / 0
VG UUID 9zwhOn-3Qs6-aPTo-kqQ4-RL4p-ICTA-l56Dsz


As you can see, I have a single volume group called "store".
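
Incidentally, if you want to confirm which physical drives make up the group (handy to know before you start pulling disks out of the case), pvs should list each physical volume along with the volume group it belongs to;

pvs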

Let's see what Logical Volumes are in the Volume Group;
lvdisplay
--- Logical volume ---
LV Name /dev/store/archive
VG Name store
LV UUID 80eFYi-n0Z7-9br1-bbfg-1GQ6-Orxf-0wENTU
LV Write Access read/write
LV Status available
# open 1
LV Size 3.64 TiB
Current LE 953862
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

We can see that there is a single volume 'archive' in the group.

Check your fstab for the line pertaining to your LVM volume;

cat /etc/fstab

The relevant line in my case is this;

UUID=057272e5-8b66-461a-ad18-c1c198c8dcdd /store/archive ext3 errors=remount-ro 0 1

Make sure you keep this info at hand for later on.
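
If you're not sure which fstab line belongs to the LVM volume, blkid run against the logical volume should report the matching filesystem UUID (adjust the device path to suit your own LV);

blkid /dev/store/archive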

It so happens that I am sharing this volume using NFS so I need to stop my NFS server;

service nfs-kernel-server stop

so that I can unmount it.

umount /store/archive/
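
If umount complains that the target is busy, something still has files open on it. fuser should show you what (this is only needed if the unmount fails);

fuser -vm /store/archive

Stop the offending processes and try the unmount again.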

Now I need to mark the VG as inactive;
vgchange -an store
0 logical volume(s) in volume group "store" now active

Next I prepare the volume group to be moved by "exporting" it;
vgexport store
Volume group "store" successfully exported

Let's take another look at the volume group details;
vgdisplay
Volume group store is exported
--- Volume group ---
VG Name store
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status exported/resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 953862 / 3.64 TiB
Free PE / Size 0 / 0
VG UUID 9zwhOn-3Qs6-aPTo-kqQ4-RL4p-ICTA-l56Dsz

As you can see, the VG Status has changed to "exported".

Now you can shut down your system and relocate the drives or reinstall the OS. In my case the OS is installed on a removable Compact Flash card onto which I have already pre-installed Debian Squeeze, i.e. here is one I prepared earlier!
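
For the record, the shutdown itself is nothing special; a plain halt from the command line will do;

shutdown -h now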

OK, once our server has rebooted we need to install LVM and associated utils;

sudo apt-get install lvm2 dmsetup reiserfsprogs xfsprogs
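
The new system should now be able to see the LVM metadata on the drives. If you want to check before importing, vgscan scans all block devices for volume groups and should report the exported group;

vgscan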

Import the volume group into our new system with the 'vgimport' command;

vgimport store
Volume group "store" successfully imported

Then we mark it active again using the vgchange command;
vgchange -ay store
1 logical volume(s) in volume group "store" now active

Let's have a look at our logical volumes again;
lvdisplay
--- Logical volume ---
LV Name /dev/store/archive
VG Name store
LV UUID 80eFYi-n0Z7-9br1-bbfg-1GQ6-Orxf-0wENTU
LV Write Access read/write
LV Status available
# open 0
LV Size 3.64 TiB
Current LE 953862
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:0

That looks good. It should be the same as it was on the old system and the LV Status should be "available".

Take the line from the fstab file on the old server and add it to the new server;

vi /etc/fstab

Paste the line at the end;

UUID=057272e5-8b66-461a-ad18-c1c198c8dcdd /store/archive ext3 errors=remount-ro 0 1

Recreate the mountpoint if it doesn't already exist;

mkdir -p /store/archive

And finally we can mount the drive;

sudo mount /store/archive/

We can check that it is mounted OK with df;
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7387992 930944 6081752 14% /
tmpfs 1025604 0 1025604 0% /lib/init/rw
udev 1020856 184 1020672 1% /dev
tmpfs 1025604 0 1025604 0% /dev/shm
/dev/mapper/store-archive
3845710856 2358214040 1292145880 65% /store/archive
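
One last thing in my case: since the volume was shared over NFS on the old server, I also need to install and start the NFS server on the new system (and recreate my entry in /etc/exports, which will obviously depend on your own setup);

apt-get install nfs-kernel-server
service nfs-kernel-server start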



And that's it! Glad I didn't need to resort to my backups . . .
