Logical Volume Manager (LVM)

I was able to set up an LVM volume on RH80 without much trouble: I just tried several commands, looked at a couple of man pages, and found my way through. Here are the steps I took:

  1. Install, setup, and configure the hard drive(s):

    I had 5 RAID storage boxes, so I set up each box as an individual RAID 5 set through the hardware manager (BIOS) for those boxes. They now appear as five 50GB hard drives.

  2. Optionally run lvmdiskscan to see all your drives
  3. Create a partition of type "Linux LVM" on each drive:

    I ran fdisk on each of these "drives" and made one primary partition on each. I changed the partition type to "8e", which is "Linux LVM".

  4. Run pvcreate on each partition:

    pvcreate /dev/cciss/c1d0p1
    pvcreate /dev/cciss/c1d1p1
    pvcreate /dev/cciss/c2d0p1
    pvcreate /dev/cciss/c2d1p1
    pvcreate /dev/cciss/c3d0p1
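The five pvcreate calls above can be collapsed into a loop. This is a sketch, printed as a dry run so nothing is touched; remove the echo to run it for real (the device names are the ones from this setup, so adjust them to your hardware):

```shell
# The cciss partitions created in step 3 (adjust to your hardware)
parts="/dev/cciss/c1d0p1 /dev/cciss/c1d1p1 /dev/cciss/c2d0p1 \
/dev/cciss/c2d1p1 /dev/cciss/c3d0p1"

for part in $parts; do
    # Dry run: drop the "echo" to actually initialize each physical volume
    echo pvcreate "$part"
done
```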

  5. Create a volume group of these drives using vgcreate:

    vgcreate fileshare /dev/cciss/c1d0p1 /dev/cciss/c1d1p1 /dev/cciss/c2d0p1 /dev/cciss/c2d1p1 /dev/cciss/c3d0p1

    I got a message back saying something to the effect of:
    default physical extent size = 4MB
    maximum lvm is 255.99GB
    doing automatic backup of volume group "fileshare"
    successfully created and activated

    Note: if you ever need to activate the volume group, use vgchange -a y fileshare. We just activated it with vgcreate, so that isn't necessary right now.

  6. Create a logical volume using lvcreate:

    lvcreate --size 255.99G -n share fileshare

    I found that this was too big because lvcreate tried rounding up to the next multiple of the extent size; however, it did tell me that I have a maximum of 65046 extents. At first I didn't understand that this was an error message. After a while I realized that I was exceeding the maximum number of extents allowed in my configuration. So instead of specifying by size, let's specify by number of extents:

    lvcreate -l 65046 -n share fileshare

    Now I have a logical volume of about 250GB.
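The numbers line up if you do the extent arithmetic: with the default 4MB extent size reported by vgcreate, 65046 extents come to a little over 254GB (decimal), which is the "about 250GB" the tools show. A quick check:

```shell
# Total size implied by 65046 extents at the default 4 MB per extent
extents=65046
extent_mb=4
total_mb=$((extents * extent_mb))
echo "$total_mb MB"    # prints "260184 MB", roughly 254 GB decimal
```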

  7. Format your new logical volume:

    mkfs.ext3 /dev/fileshare/share
    Note: you could fdisk this and then format the partition you create, but is that really necessary… In our case I'll just format the whole thing without fdisking it.

    (formatting this much takes a few minutes…)

  8. mkdir and /etc/fstab:

    Make a directory to mount your logical volume and add an appropriate entry into /etc/fstab.

    mkdir /storage
    vi /etc/fstab

    /dev/fileshare/share    /storage                ext3    defaults        1 2
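If you script this step, it's worth guarding against adding the fstab entry twice. A sketch, run against a scratch file so it is safe to try; point fstab at /etc/fstab to apply it for real:

```shell
# Scratch copy for a safe dry run; set fstab=/etc/fstab to apply for real
fstab=$(mktemp)
entry='/dev/fileshare/share    /storage                ext3    defaults        1 2'

# Append the entry only if no line for this device exists yet (idempotent)
grep -q '^/dev/fileshare/share[[:space:]]' "$fstab" || printf '%s\n' "$entry" >> "$fstab"

grep -c '^/dev/fileshare/share' "$fstab"    # prints 1
```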
  9. Make a new initrd that loads lvm-mod.o:

    First I tried the old “mkinitrd” (RH80) but soon realized it had no knowledge of lvm-mod.o.

    I then tried using lvmcreate_initrd to make the new initrd, but that appeared totally broken to me — it complained about lack of space in the ramdisk.

    I knew this was a simple task so I decided to modify my existing initrd:

    What has to be done is the following 2 tasks:

    • add 1 file (lvm-mod.o) to initrd.img/lib/
    • add 2 lines to initrd.img/linuxrc so it will load lvm-mod.o

    Here are the steps for modifying the initrd (more info is available here: rhdiskmod.html):

    1. mkdir /tmp/initrd
    2. cd /tmp/initrd
    3. cp -a /boot/initrd-2.4.18-14smp.img .
    4. mv initrd-2.4.18-14smp.img initrd-2.4.18-14smp.img.gz
    5. gunzip initrd-2.4.18-14smp.img.gz
    6. mkdir 1
    7. mount initrd-2.4.18-14smp.img 1/ -o loop
    8. cp -a /lib/modules/2.4.18-14smp/kernel/drivers/md/lvm-mod.o 1/lib/
    9. vi 1/linuxrc
    10. You’ll see something like the following; the two lvm-mod lines are my additions, inserted just prior to jbd.o and ext3.o:

      echo "Loading scsi_mod module"
      insmod /lib/scsi_mod.o
      echo "Loading sd_mod module"
      insmod /lib/sd_mod.o
      echo "Loading cciss module"
      insmod /lib/cciss.o
      echo "Loading lvm-mod module"
      insmod /lib/lvm-mod.o
      echo "Loading jbd module"
      insmod /lib/jbd.o
      echo "Loading ext3 module"
      insmod /lib/ext3.o

      Then I repacked the initrd and added it to grub:

    11. umount 1/
    12. gzip -n -9 initrd-2.4.18-14smp.img
    13. rmdir 1/
    14. mv initrd-2.4.18-14smp.img.gz /boot/initrd-2.4.18-14smp.img-lvm
    15. cd /tmp
    16. rmdir /tmp/initrd
    17. vi /boot/grub/grub.conf
    18. Here are my changes to grub; the first title stanza (the one ending in "- lvm") is the new section:

      title Red Hat Linux (2.4.18-14smp) - lvm
              root (hd0,0)
              kernel /vmlinuz-2.4.18-14smp ro root=LABEL=/
              initrd /initrd-2.4.18-14smp.img-lvm
      title Red Hat Linux (2.4.18-14smp)
              root (hd0,0)
              kernel /vmlinuz-2.4.18-14smp ro root=LABEL=/
              initrd /initrd-2.4.18-14smp.img
      title Red Hat Linux-up (2.4.18-14)
              root (hd0,0)
              kernel /vmlinuz-2.4.18-14 ro root=LABEL=/
              initrd /initrd-2.4.18-14.img
  10. Done. You can mount it now with “mount -a”, and it will also come up automatically after a reboot.

Shrinking a logical volume

Now that I have a 250GB logical volume with a 250GB ext3 filesystem on it, I want to look at resizing it. First I want to shrink the usage so I can reorder the drives; then, after I add additional drives, I want to expand beyond the 250GB.

I found that e2fsadm will resize both the ext3 filesystem and the logical volume in one step.
Here are the commands I ran:
umount /dev/fileshare/share
e2fsck -f /dev/fileshare/share
e2fsadm -L 45G /dev/fileshare/share
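For reference, here is the conversion between the -L size given to e2fsadm and the extent count it is working with, assuming the default 4MB extent size from earlier:

```shell
# How many 4 MB extents a 45 GB (binary) target corresponds to
target_gb=45
extent_mb=4
extents=$(( target_gb * 1024 / extent_mb ))
echo "$extents extents"    # prints "11520 extents"
```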

Then, to remove the now-unused physical volumes from the volume group, run vgreduce:

vgreduce -a /dev/fileshare

Extending a logical volume

After adding more drives and creating type "8e" partitions on them with fdisk, you add them to the volume group like this:

Make a pv (Physical Volume) on that partition:
pvcreate /dev/ida/c0d1p1

Add the new pv to the existing vg (Volume Group):
vgextend fileshare /dev/ida/c0d1p1
vgdisplay will now reflect your newly added free space.

Extend the existing lv (logical volume):
Now just unmount the LV called “share”, e2fsck it, use vgdisplay to find your free extents (Total PE), then use e2fsadm to expand it:

umount /dev/fileshare/share
e2fsck -f /dev/fileshare/share
vgdisplay /dev/fileshare | grep "Total PE"
e2fsadm -l 3052 /dev/fileshare/share
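The "find Total PE" step can be scripted with awk. The sample text below stands in for the real vgdisplay output so the snippet runs anywhere; in practice you would pipe vgdisplay /dev/fileshare into the same awk:

```shell
# Captured vgdisplay-style output (stand-in for `vgdisplay /dev/fileshare`)
sample='VG Name               fileshare
PE Size               4 MB
Total PE              3052
Alloc PE / Size       0 / 0'

# Pull the extent count from the "Total PE" line
total_pe=$(printf '%s\n' "$sample" | awk '/Total PE/ {print $3}')
echo "$total_pe"    # prints "3052"
```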

Finally, remount your mount point. You can also use vgdisplay and lvdisplay to see your new space, or even df -h.
