Now I've reached the stage where that NAS is full so I want to add a new drive. The existing Volume Group consisted of 2 x 2TB hard drives and I'm adding a third.
The system is Ubuntu 13.04 "raring", but the following should work equally well on any Debian-based (or indeed Red Hat-based) system.
Here is what I did.
The first and most important step is to back up all the data on your drives! When you play around with disks and partitioning, a single mistake can destroy terabytes of data. I used a 4TB external USB drive. You have been warned. The second thing I did, naturally, was install the new drive, which in my case also required fitting a new SATA interface card in the server; that went surprisingly smoothly.
OK, with all the preliminaries in place, we can proceed to make use of the new hard drive.
I used
fdisk -l
to determine that my new drive was detected by the system and that it is device /dev/sdf. I then used parted to partition it.
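For the record, the parted session went roughly like this (a sketch from memory, assuming you want one big partition spanning the whole drive):
# parted /dev/sdf
(parted) mklabel gpt
(parted) mkpart primary 0% 100%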
Also using parted, I set the LVM flag:
(parted) set 1 lvm on
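A quick print at this point should confirm that partition 1 now carries the lvm flag, and quit exits parted:
(parted) print
(parted) quit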
Back to the bash console, I added the drive as a new physical volume:
# pvcreate /dev/sdf1
Confirm that this went OK:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               store
  PV Size               1.82 TiB / not usable 4.09 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               2dvFBg-uAFn-kudl-JW14-UkgE-EvPq-G5Ye5Q
  --- Physical volume ---
  PV Name               /dev/sde1
  VG Name               store
  PV Size               1.82 TiB / not usable 4.09 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               0
  Allocated PE          476931
  PV UUID               BH8Osp-7UR7-6YVh-sbVe-zvCh-6WC9-hQkozl
  "/dev/sdf1" is a new physical volume of "1.82 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdf1
  VG Name
  PV Size               1.82 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               29ZZUV-qg34-Vcm9-Gw4M-iGau-eTiD-E0geXp
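As an aside, pvs shows the same information in a compact one-line-per-PV table, which is handy for a quick check:
# pvs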
My VG (Volume Group) is named "store"; you can determine this with the vgdisplay command:
# vgdisplay
  --- Volume group ---
  VG Name               store
  [...]
  VG Size               3.64 TiB
  [...]
Note: I have cut the information we are not interested in out of this output, as indicated by [...]
Now, extend the VG "store" onto my new drive:
# vgextend store /dev/sdf1
Check the VG again to confirm that it has grown to 5.46 TiB:
# vgdisplay
  --- Volume group ---
  VG Name               store
  [...]
  VG Size               5.46 TiB
  [...]
  Alloc PE / Size       953862 / 3.64 TiB
  Free PE / Size        476931 / 1.82 TiB
  [...]
I want to use all the space for my existing LV (logical volume), so I must extend it with the lvextend command.
Before we do that, let's have a look at the existing LV details:
# lvdisplay
  --- Logical volume ---
  LV Path               /dev/store/library
  LV Name               library
  VG Name               store
  LV UUID               Jo8uag-AkXk-8p7s-4x01-IFS9-FuWz-3MMSQH
  LV Write Access       read/write
  LV Creation host, time jupiter, 2013-05-17 17:25:01 +1000
  LV Status             available
  # open                1
  LV Size               3.64 TiB
  Current LE            953862
  Segments              2
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          252:0
From that output I can see that my LV is named "library". Combining the information from the last two commands, we can assemble a command to extend the LV. Recall these lines from the earlier vgdisplay output:
  Alloc PE / Size       953862 / 3.64 TiB
  Free PE / Size        476931 / 1.82 TiB
If we take those two numbers and add them together (953862 + 476931), we get the total number of "extents" in the volume group (1430793), and we can use that to extend the LV to take up all the free space:
# lvextend -l1430793 /dev/store/library
Extending logical volume library to 5.46 TiB
Logical volume library successfully resized
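As a sanity check on that arithmetic, vgdisplay reports the same total directly on its Total PE line, so grep should confirm the figure:
# vgdisplay store | grep "Total PE"
  Total PE              1430793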
In the comments, Andrew suggests an easier way to allocate all the free space:
# lvextend -l +100%FREE /dev/store/library
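On top of that, lvextend has an -r (--resizefs) option that resizes the file system in the same step (it calls fsadm under the hood). I haven't tried it here, but it should look like this and would let you skip the unmount/e2fsck/resize2fs dance below:
# lvextend -r -l +100%FREE /dev/store/library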
Check that everything is OK:
# lvdisplay
  --- Logical volume ---
  LV Path               /dev/store/library
  LV Name               library
  VG Name               store
  LV UUID               Jo8uag-AkXk-8p7s-4x01-IFS9-FuWz-3MMSQH
  LV Write Access       read/write
  LV Creation host, time jupiter, 2013-05-17 17:25:01 +1000
  LV Status             available
  # open                1
  LV Size               5.46 TiB
  Current LE            1430793
  Segments              3
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          252:0
Everything looks OK, so we proceed to the final part, which is to resize the actual file system. This is also the most dangerous part. You did make a backup, right?
Before we can resize the file system, we need to unmount it:
# umount /dev/mapper/store-library
Just to be safe, make sure there are no file system errors (resize2fs will in any case insist on a recent check):
# e2fsck -f /dev/mapper/store-library
Finally, resize the file system. With no explicit size argument, resize2fs grows the file system to fill the whole device, and -p prints progress:
# resize2fs -p /dev/mapper/store-library
Now, the last two commands may take some time to complete, but once they are done (and assuming there were no problems, of course) we can remount our LV and check out all that lovely new free space.
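In my case that means mounting it back on /store/library (adjust the mount point to suit; if you have an /etc/fstab entry, a plain mount /store/library works too):
# mount /dev/mapper/store-library /store/library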
# df -h
Filesystem                 Size  Used Avail Use% Mounted on
[..]
/dev/mapper/store-library  5.4T  3.6T  1.8T  67% /store/library
yay!