Blog: How-Tos
A Logical Volume Manager / LVM primer for Linux
About LVM
LVM is an abstraction layer that provides block devices (the same kind of block device as a disk partition). It does this with three layers:
- physical volumes (PV) – disk partitions;
- volume groups (VG) – aggregates of physical volumes; a VG can span multiple disks or multiple partitions;
- logical volumes (LV) – actual usable block devices that will contain file systems;
In this scheme an LV can live anywhere in the VG, and indeed be split across multiple partitions or disks. LVs can be created, grown, shrunk, snapshotted, formatted and mounted, all online.
Using LVM it’s possible to add storage into a VG (by adding a new disk), extend LVs or create new ones to use the added space. It’s also possible to remove PVs from a volume group (thus shrinking it) to replace a drive.
LVM can be combined with software (md) RAID (normally used underneath, as PVs) and LUKS encryption (normally applied on top, to LVs).
Main commands
- View PVs, VGs and LVs:
pvs
vgs
lvs
More verbose versions are lvdisplay, vgdisplay and pvdisplay.
- Create physical volume:
pvcreate <device>
- Create volume group or add physical volumes (if it already exists):
vgcreate <vg_name> <device>
vgextend <vg_name> <device>
- Create logical volume
lvcreate -L <size> -n <lv_name> <vg_name>
- Modify LVs (extend, shrink, rename, deactivate, etc):
lvextend / lvreduce / lvresize # resizing
lvrename # renaming
lvchange # activation and attributes; see man page
- Modify VGs (extend, reduce, modify, etc)
vgextend <vg_name> <new_device>
vgreduce <vg_name> <device_to_be_removed>
To reduce a VG, the PV to be removed must be empty, so it may be necessary to move data off of it first. Please see pvmove below.
vgchange <vg_name> # see man page
- Move data extents off a physical volume onto others. The copy goes through a temporary mirror, so the data stays consistent throughout and an interrupted move can simply be restarted; it’s safe to run online:
pvmove <old_pv> <new_pv>
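One detail behind the size flags above: LVM allocates space in whole physical extents (4 MiB by default), so a `-L` size is rounded up to an extent multiple. A quick sanity check of what -L 100M translates to, assuming the default extent size:

```shell
#!/bin/sh
# Assumes the default 4 MiB physical extent (PE) size; check yours
# with `vgs -o +vg_extent_size`.
SIZE_MIB=100   # requested via -L 100M
PE_MIB=4
echo "$(( SIZE_MIB / PE_MIB )) extents"   # -> 25 extents
```

This matches the "(25 extents)" figure lvextend reports for a 100 MiB LV in the examples below.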
Device names and locations
LVM works alongside the device-mapper subsystem. VGs are mapped into /dev/<vg> directories and LVs are created as block device files under /dev/<vg>/<lv>. These are symlinks into the real device-mapper devices /dev/dm-* (which you don’t need to care about). There’s another mapping of LVM block devices under /dev/mapper/<vg>-<lv> which again symlinks to the actual dm devices.
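One quirk of the /dev/mapper/<vg>-<lv> form: hyphens that are part of a VG or LV name are escaped by doubling them, so the single hyphen joining VG to LV stays unambiguous. A sketch of the rule, with hypothetical names:

```shell
#!/bin/sh
# Hypothetical names "my-vg" and "data-lv"; device-mapper doubles the
# hyphens that belong to the names themselves.
vg=my-vg
lv=data-lv
mvg=$(echo "$vg" | sed 's/-/--/g')
mlv=$(echo "$lv" | sed 's/-/--/g')
echo "/dev/mapper/${mvg}-${mlv}"   # -> /dev/mapper/my--vg-data--lv
```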
Examples
For these examples I’ll be using 2 disks which in this case will be made out of flat files, but the concept is the same. Just consider /dev/mapper/loop0p1 or /dev/mapper/loop1p2 equivalent to /dev/sda1 and /dev/sdb2.
Create or set partitions to be used by LVM (the conventional partition type on DOS disk labels is 8e, Linux LVM; the example disks below use fd, Linux raid autodetect, which also works, since LVM finds PVs by their on-disk signature, not by partition type)
Disk /dev/loop0: 42 MiB, 44040192 bytes, 86016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3d2681ed

Device       Boot Start   End Sectors Size Id Type
/dev/loop0p1       2048 43007   40960  20M fd Linux raid autodetect
/dev/loop0p2      43008 86015   43008  21M fd Linux raid autodetect

Disk /dev/loop1: 2.5 GiB, 2621440000 bytes, 5120000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe13931af

Device       Boot  Start     End Sectors Size Id Type
/dev/loop1p1        2048  411647  409600 200M fd Linux raid autodetect
/dev/loop1p2      411648  821247  409600 200M fd Linux raid autodetect
/dev/loop1p3      821248 5119999 4298752 2.1G fd Linux raid autodetect
Create logical volume and file system
Create/add physical volumes
# pvcreate /dev/mapper/loop0p1
# pvcreate /dev/mapper/loop1p1
Gaze at your creation (notice no VG)
# pvs
  PV                  VG Fmt  Attr PSize   PFree
  /dev/mapper/loop0p1    lvm2 ---   20.00m  20.00m
  /dev/mapper/loop1p1    lvm2 ---  200.00m 200.00m
Create volume group (first command could have included both PVs)
# vgcreate vg1 /dev/mapper/loop0p1
# vgextend vg1 /dev/mapper/loop1p1
  Volume group "vg1" successfully extended
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop0p1 vg1 lvm2 a--   16.00m  16.00m
  /dev/mapper/loop1p1 vg1 lvm2 a--  196.00m 196.00m
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   0   0 wz--n- 212.00m 212.00m
vg1 is now physically spread across the 2 partitions.
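Note the PSize values above: the 20 MiB and 200 MiB partitions show up as 16 MiB and 196 MiB. LVM reserves space at the start of each PV for metadata (1 MiB by default) and carves the rest into whole 4 MiB extents, so roughly:

```shell
#!/bin/sh
# Assumes LVM defaults: 1 MiB metadata area at the start of the PV and
# 4 MiB physical extents; the actual metadata size can vary.
usable() { echo $(( ( ($1 - 1) / 4 ) * 4 )); }
echo "$(usable 20) MiB"    # 20 MiB partition  -> 16 MiB
echo "$(usable 200) MiB"   # 200 MiB partition -> 196 MiB
```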
Create an LV that wouldn’t fit on /dev/mapper/loop0p1
# lvcreate -n lv1 -L 100M vg1
Check how the space has been taken up on the PVs and how the VG is now reporting less free space:
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop0p1 vg1 lvm2 a--   16.00m       0
  /dev/mapper/loop1p1 vg1 lvm2 a--  196.00m 112.00m
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   1   0 wz--n- 212.00m 112.00m
# lvs
  LV  VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1 vg1 -wi-a----- 100.00m
lv1 is now a block device where a file system can be created
# mkfs.ext2 /dev/vg1/lv1
(...)
# mount /dev/vg1/lv1 /mnt/test
# mount | grep /mnt/test
/dev/mapper/vg1-lv1 on /mnt/test type ext2 (rw,relatime,block_validity,barrier,user_xattr,acl)
Created a 10M file on the mount point, checked df:
/dev/mapper/vg1-lv1 97M 12M 81M 13% /mnt/test
Looking good!
Retire disk loop0
If we wanted to retire /dev/loop0, we’d need to free up the device and remove it from the VG. Fortunately the VG has enough space to lose that PV and still hold all LVs; otherwise more storage would need to be added first.
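The "enough space" check, using the numbers from the output above: the VG is 212 MiB with 112 MiB free, so 100 MiB is in use; dropping the 16 MiB PV leaves 196 MiB, which still covers it:

```shell
#!/bin/sh
VSIZE=212; VFREE=112; PV_GOING=16   # MiB, from the vgs/pvs output above
USED=$(( VSIZE - VFREE ))
REMAINING=$(( VSIZE - PV_GOING ))
echo "used=${USED}MiB remaining=${REMAINING}MiB"
[ "$REMAINING" -ge "$USED" ] && echo "safe to remove"
```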
Move all data out of PV /dev/mapper/loop0p1
# pvmove /dev/mapper/loop0p1 /dev/mapper/loop1p1
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop0p1 vg1 lvm2 a--   16.00m 16.00m
  /dev/mapper/loop1p1 vg1 lvm2 a--  196.00m 96.00m
Reduce volume group to lose /dev/mapper/loop0p1
# vgreduce vg1 /dev/mapper/loop0p1
  Removed "/dev/mapper/loop0p1" from volume group "vg1"
Remove physical volume from LVM pool
# pvremove /dev/mapper/loop0p1
  Labels on physical volume "/dev/mapper/loop0p1" successfully wiped.
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop1p1 vg1 lvm2 a--  196.00m 96.00m
Done!
Extend existing LV
Within VG:
# lvextend /dev/vg1/lv1 -L+20M
  Size of logical volume vg1/lv1 changed from 100.00 MiB (25 extents) to 120.00 MiB (30 extents).
  Logical volume vg1/lv1 successfully resized.
The extend command can also take relative and percentage values via ‘-l’, with keywords like +20%FREE to take 20% of the VG’s free space. Full spec from the man page:
-l|--extents [+]LogicalExtentsNumber[%{VG|LV|PVS|FREE|ORIGIN}]
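As a rough illustration of how a percentage resolves, suppose the VG has 112 MiB free with the default 4 MiB extents, i.e. 28 free extents; +20%FREE would then add about a fifth of those. LVM does this arithmetic in extents and rounds, so treat the numbers as an approximation:

```shell
#!/bin/sh
# Hypothetical VG state: 112 MiB free / 4 MiB per extent = 28 free extents.
FREE_EXTENTS=28
ADD=$(( FREE_EXTENTS * 20 / 100 ))
echo "${ADD} extents = $(( ADD * 4 )) MiB"   # -> 5 extents = 20 MiB
```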
Extend the file system (this step depends on the actual file system; here is how it works on ext2)
(before)
/dev/mapper/vg1-lv1 97M 12M 81M 13% /mnt/test
(resize)
# resize2fs /dev/vg1/lv1
resize2fs 1.43.4 (31-Jan-2017)
Filesystem at /dev/vg1/lv1 is mounted on /mnt/test; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg1/lv1 is now 122880 (1k) blocks long.
(after)
/dev/mapper/vg1-lv1 117M 12M 100M 11% /mnt/test
Easy!
Beyond VG size:
Needs more storage added, of course.
# pvcreate /dev/mapper/loop1p2
  Physical volume "/dev/mapper/loop1p2" successfully created.
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop1p1 vg1 lvm2 a--  196.00m  76.00m
  /dev/mapper/loop1p2     lvm2 ---  200.00m 200.00m
(notice the unassigned PV)
# vgextend vg1 /dev/mapper/loop1p2
  Volume group "vg1" successfully extended
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   1   0 wz--n- 392.00m 272.00m
# lvextend -l 100%FREE vg1/lv1
  Size of logical volume vg1/lv1 changed from 120.00 MiB (30 extents) to 272.00 MiB (68 extents).
  Logical volume vg1/lv1 successfully resized.
# lvs
  LV  VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1 vg1 -wi-ao---- 272.00m
# resize2fs /dev/vg1/lv1
resize2fs 1.43.4 (31-Jan-2017)
Filesystem at /dev/vg1/lv1 is mounted on /mnt/test; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/vg1/lv1 is now 278528 (1k) blocks long.
# df -h | grep /mnt/test
/dev/mapper/vg1-lv1  264M   13M  240M   5% /mnt/test
Tear it down
Did I mention loop0 and loop1 are flat files on a USB stick? Yeah, need to disconnect to take it home. This is slightly off the remit of the LVM primer but I wanted to show what’s involved with removing an LVM VG from a system without rebooting.
Let’s observe the current state of the system. Notice the ‘a’ (active) and ‘o’ (open) attribute bits on the LV and the lack of the ‘x’ (exported) attribute on the VG.
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 -wi-ao---- 272.00m
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   1   0 wz--n- 392.00m 120.00m
# umount /mnt/test
Make LVs unavailable (available=no)
# lvchange -an vg1/lv1
Alternatively you can do this automatically for all LVs on a VG (already done):
# vgchange -an vg1
  0 logical volume(s) in volume group "vg1" now active
# vgexport vg1
  Volume group "vg1" successfully exported
Notice how the ‘a’ (active) attribute disappeared from the LV and the ‘x’ (exported) one appeared on the VG and PVs.
# lvs
  LV  VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1 vg1 -wi------- 272.00m
# vgs
VG #PV #LV #SN Attr VSize VFree
vg1 2 1 0 wzx-n- 392.00m 120.00m
# pvs
  PV                  VG  Fmt  Attr PSize   PFree
  /dev/mapper/loop1p1 vg1 lvm2 ax-  196.00m       0
  /dev/mapper/loop1p2 vg1 lvm2 ax-  196.00m 120.00m
Release the partition maps [out of scope for the LVM primer]:
# kpartx -d -v /dev/loop0
del devmap : loop0p2
del devmap : loop0p1
# kpartx -d -v /dev/loop1
del devmap : loop1p3
del devmap : loop1p2
del devmap : loop1p1
(both these commands would have failed had the VGs on these PVs not been exported, since the devices would still be in use)
Unmap loopback files:
# losetup -d /dev/loop0
# losetup -d /dev/loop1
Here are the flat files where the disks were created and manipulated (all the data is still there; the disks can simply be mapped again and the VG imported with vgimport).
# ls -ld disk1 disk2
-rw-r--r-- 1 root root   44040192 Jun 21 16:10 disk1
-rw-r--r-- 1 root root 2621440000 Jun 21 16:24 disk2
# file disk1 disk2
disk1: DOS/MBR boot sector; partition 1 : ID=0xfd, start-CHS (0x0,32,33), end-CHS (0x2,172,42), startsector 2048, 40960 sectors; partition 2 : ID=0xfd, start-CHS (0x2,172,43), end-CHS (0x5,90,21), startsector 43008, 43008 sectors
disk2: DOS/MBR boot sector; partition 1 : ID=0xfd, start-CHS (0x0,32,33), end-CHS (0x19,159,6), startsector 2048, 409600 sectors; partition 2 : ID=0xfd, start-CHS (0x19,159,7), end-CHS (0x33,30,43), startsector 411648, 409600 sectors; partition 3 : ID=0xfd, start-CHS (0x33,30,44), end-CHS (0x13e,179,53), startsector 821248, 4298752 sectors
Thank you, good night.
Caveats
Due to the way LVM works in the kernel, it is very dangerous to make online block copies of disks holding LVM PVs, or to connect a clone of such a disk to the computer: data corruption is all but guaranteed. The kernel expects the internal identifiers (UUIDs) on PVs to be unique; if multiple devices with the same identifier appear, only one of them will receive writes, and it’s not guaranteed which, so either the original disk or the clone will end up corrupted. If you do need to attach a cloned disk, vgimportclone(8) exists to rewrite its identifiers first. This is mostly irrelevant on servers but it’s worth knowing.
It is an abstraction layer, and as such incurs an overhead. Totally worth the cost in my view.
The kernel and the LVM tooling are designed to automatically detect LVM PVs and set up the supporting structures for their VGs and LVs, which are not straightforward to tear down online when you want to remove a disk (i.e. remove the PV). To “unstick” an LVM VG, you need to take all its LVs offline (lvchange -an <vg>/<lv>), take the VG offline (vgchange -an <vg>) and export the VG (vgexport <vg>). Only then will the kernel release the devices and let you safely detach the relevant PVs.