LVM Snapshots

Prepare Lab

Create a file and use it as a block device
The size of the file is 500 blocks of 4 MiB (the dd block size): 4M * 500 ≈ 2 GiB

dd if=/dev/zero of=fileDevice bs=4M count=500
# 500+0 records in
# 500+0 records out
# 2097152000 bytes (2.1 GB, 2.0 GiB) copied, 3.4276 s, 612 MB/s

With losetup you can attach a loop device to the regular file fileDevice, so that it can be used as a block device.

Before attaching the file we need to check that there is a free /dev/loopX loopback device that we can use to represent our new block device.

losetup -f
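# prints the first unused loop device, typically /dev/loop0 (may differ on your system)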

Attach

losetup /dev/loop0 fileDevice

Check status

losetup
# NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE       DIO LOG-SEC
# /dev/loop0         0      0         0  0 <path>/fileDevice   0     512
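For reference, once the whole lab is over you can tear everything down by deactivating the volume group created below and detaching the loop device (a sketch, to run only at the very end):

vgchange -an VG0
losetup -d /dev/loop0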

Physical Volume

Create Physical Volume

Create a 2GB LVM2 Physical Volume.

pvcreate /dev/loop0

Check Physical Volume

pvs - Display information about physical volumes

pvs
#  PV                     VG        Fmt  Attr PSize   PFree
#  /dev/loop0                       lvm2 ---    1.95g 1.95g

pvdisplay - Display various attributes of physical volume(s)

pvdisplay
#  "/dev/loop0" is a new physical volume of "1.95 GiB"
#  --- NEW Physical volume ---
#  PV Name               /dev/loop0
#  VG Name
#  PV Size               1.95 GiB
#  Allocatable           NO
#  PE Size               0
#  Total PE              0
#  Free PE               0
#  Allocated PE          0
#  PV UUID               Ddjibt-s5lv-6ouy-44Xx-vgFN-4zzO-15NKV3

Volume Group

Create Volume Group

Create a Volume Group with Physical Extents (PE) of 4 MiB (the LVM default)

vgcreate VG0 /dev/loop0
#  Volume group "VG0" successfully created

Check Volume Group

vgs - Display information about volume groups

vgs
#  VG        #PV #LV #SN Attr   VSize   VFree
#  VG0         1   0   0 wz--n-  <1.95g <1.95g

vgdisplay - Display volume group information

vgdisplay
#  --- Volume group ---
#  VG Name               VG0
#  System ID
#  Format                lvm2
#  Metadata Areas        1
#  Metadata Sequence No  1
#  VG Access             read/write
#  VG Status             resizable
#  MAX LV                0
#  Cur LV                0
#  Open LV               0
#  Max PV                0
#  Cur PV                1
#  Act PV                1
#  VG Size               <1.95 GiB
#  PE Size               4.00 MiB
#  Total PE              499
#  Alloc PE / Size       0 / 0
#  Free  PE / Size       499 / <1.95 GiB
#  VG UUID               Jtijus-SDrW-p8FF-DMb2-scdk-JfGp-wnhLYl

Logical Volume

Create Logical Volume

There are two equivalent ways to size the new volume: by number of Physical Extents (-l) or by explicit size (-L). With 4 MiB extents, -l 50 is the same as -L 200M (units for -L default to megabytes). Use one form or the other; the LV name must be unique within the VG.

lvcreate -n <name> -l <NUMBER OF PE> <VG>

lvcreate -n LV0 -l 50 VG0
#  Logical volume "LV0" created.

lvcreate -n <name> -L <size> <VG>

lvcreate -n LV0 -L 200 VG0
#  Logical volume "LV0" created.

Check Logical Volume

The LVM functionality is mostly handled by the device mapper, which provides virtual block devices and redirects any access to these virtual devices to another, lower-level device.

Check the newly created device.

ls -l /dev/mapper/VG0-LV0
# lrwxrwxrwx 1 root root 7 Sep  7 18:28 /dev/mapper/VG0-LV0 -> ../dm-3

ls -l /dev/VG0/LV0
# lrwxrwxrwx 1 root root 7 Sep  7 18:52 /dev/VG0/LV0 -> ../dm-3
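You can also query the device mapper directly: dmsetup ls lists every mapped device with its major:minor numbers (the exact formatting varies between versions):

dmsetup ls
# VG0-LV0 (254:3)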

lvs - Display information about logical volumes

lvs
#  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
#  LV0    VG0       -wi-a----- 200.00m

lvdisplay - Display information about a logical volume

lvdisplay
#  --- Logical volume ---
#  LV Path                /dev/VG0/LV0
#  LV Name                LV0
#  VG Name                VG0
#  LV UUID                LyKSfY-tsyY-KUDY-d6lo-g3eG-hhbj-7BclBv
#  LV Write Access        read/write
#  LV Creation host, time bekaar, 2020-09-07 18:28:09 +0200
#  LV Status              available
#  # open                 0
#  LV Size                200.00 MiB
#  Current LE             50
#  Segments               1
#  Allocation             inherit
#  Read ahead sectors     auto
#  - currently set to     256
#  Block device           254:3

Remove Logical Volume

You can easily remove a logical volume (unmount it first if it is mounted).

umount /dev/mapper/VG0-LV0
lvremove /dev/mapper/VG0-LV0
# Do you really want to remove active logical volume VG0/LV0? [y/n]: y
#   Logical volume "LV0" successfully removed

Create a FileSystem

If you removed LV0 in the previous step, re-create it before continuing. Then format and mount it:

mkfs.ext4 /dev/VG0/LV0
mkdir -p /mnt/lvm
mount /dev/VG0/LV0 /mnt/lvm

To have something to test with, copy some files inside:

cd /mnt/lvm
mkdir backup
cp -rf /etc/* backup/

Snapshot of Logical Volume

A snapshot is a logical volume that gives you access to the state of a device at a particular moment in time (the moment the snapshot is created). From that moment on, the snapshot records the changes that occur on the original device. Since only the changed blocks are stored, via a copy-on-write (COW) mechanism, the size of the snapshot depends only on the amount of change on the original device.

To take a snapshot the syntax is identical to a normal lvcreate, except for the -s (--snapshot) flag.

lvm lvcreate --size <size> --snapshot --name <snap name> <origin dev>

The size given to the snapshot is the maximum amount of change on the original device that it can record.

First let’s see our device mapper table using dmsetup. Each line shows the start sector, the length in 512-byte sectors (409600 sectors = 200 MiB), the target type, and the backing device (7:0 is /dev/loop0) with its starting offset.

dmsetup table
# VG0-LV0: 0 409600 linear 7:0 2048

Now create a snapshot

lvcreate -s -n SNAP1 -L 200M /dev/VG0/LV0
#  Logical volume "SNAP1" created.

This will create a new device

ls -l /dev/VG0/SNAP1
# lrwxrwxrwx 1 root root 7 Sep  8 11:18 /dev/VG0/SNAP1 -> ../dm-3

Let’s check our device mapper table now

dmsetup table
# VG0-LV0-real: 0 409600 linear 7:0 2048
# VG0-SNAP1-cow: 0 409600 linear 7:0 411648
# VG0-LV0: 0 409600 snapshot-origin 254:4
# VG0-SNAP1: 0 409600 snapshot 254:4 254:5 P 8

and

ls -l /dev/mapper/
# lrwxrwxrwx 1 root root       7 Sep  8 11:18 VG0-LV0 -> ../dm-0
# lrwxrwxrwx 1 root root       7 Sep  8 11:18 VG0-LV0-real -> ../dm-1
# lrwxrwxrwx 1 root root       7 Sep  8 11:18 VG0-SNAP1 -> ../dm-3
# lrwxrwxrwx 1 root root       7 Sep  8 11:18 VG0-SNAP1-cow -> ../dm-2

As you can see, there are now 4 devices instead of the 2 you might expect.
The creation of a snapshot adds another layer of mapping:
two wrapper devices are created, VG0-LV0-real and VG0-SNAP1-cow.

Let’s check why this happens.

The existing device for the original LV (VG0-LV0) is no longer of type linear, but instead of type snapshot-origin.
It refers to the device VG0-LV0-real.

The new device VG0-SNAP1 is of type snapshot and refers to the device VG0-LV0-real too, and also to VG0-SNAP1-cow.

When a write request hits the original logical volume (VG0-LV0), two operations occur:

  • The data about to be overwritten is first copied to the snapshot [COW] (from VG0-LV0-real to VG0-SNAP1-cow)
  • The new data is then written to the original logical volume (VG0-LV0)

When a read request hits the snapshot logical volume (VG0-SNAP1), only one of these two operations occurs:

  • If the data has not changed since the snapshot was taken, the read is forwarded to the original logical volume (VG0-SNAP1 to VG0-LV0-real)
  • If the data has changed, it is read from the snapshot (VG0-SNAP1-cow)
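You can watch the COW mechanism at work by writing to the origin and checking the snapshot allocation. A sketch: the file name is arbitrary, data_percent is the lvs field behind the Data% column, and running it will make the allocation figures later on this page differ:

dd if=/dev/urandom of=/mnt/lvm/testfile bs=1M count=10
sync
lvs VG0/SNAP1 -o lv_name,data_percent
# expect roughly 5%: ~10 MiB copied into a 200 MiB COW area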

Test Snapshot

Let’s remove a file from the mounted LVM filesystem

rm /mnt/lvm/backup/pam.conf

Mount the snapshot

mkdir /mnt/lvm_snapshot
mount /dev/VG0/SNAP1 /mnt/lvm_snapshot
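The file just removed from the origin is still visible through the snapshot:

ls /mnt/lvm_snapshot/backup/pam.conf
# /mnt/lvm_snapshot/backup/pam.conf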

The snapshot usage shows up in the COW-table fields of lvdisplay.

lvdisplay /dev/VG0/LV0
#  --- Logical volume ---
#  LV Path                /dev/VG0/LV0
#  LV Name                LV0
#  VG Name                VG0

#  LV Creation host, time istiophorus, 2020-09-08 11:04:26 +0200
#  LV snapshot status     source of
#                         SNAP1 [active]
#  LV Status              available
#  LV Size                200,00 MiB
#  Current LE             50


lvdisplay /dev/VG0/SNAP1
#  --- Logical volume ---
#  LV Path                /dev/VG0/SNAP1
#  LV Name                SNAP1
#  VG Name                VG0

#  LV Creation host, time istiophorus, 2020-09-08 11:18:39 +0200
#  LV snapshot status     active destination for LV0
#  LV Status              available
#  LV Size                200,00 MiB
#  Current LE             50
#  COW-table size         200,00 MiB
#  COW-table LE           50
#  Allocated to snapshot  0,04%
#  Snapshot chunk size    4,00 KiB

If you keep deleting/changing files in the original LVM filesystem you can watch how SNAP1's allocation grows:

#  Allocated to snapshot  0,04%

You can also check

lvs -o +devices

#  LV    VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
#  LV0   VG0 owi-aos--- 200,00m                                                     /dev/loop0(0)
#  SNAP1 VG0 swi-aos--- 200,00m      LV0    0,04                                   /dev/loop0(50)

Revert Snapshot

With both logical volumes unmounted, we use the lvconvert command to merge the snapshot back into the origin.
You can use the -b option to background the process.
Remember that the snapshot is removed at the end of the merge.

lvconvert --merge /dev/VG0/SNAP1
#  Merging of volume VG0/SNAP1 started.
#  VG0/LV0: Merged: 25,71%
#  ...
#  VG0/LV0: Merged: 95,81%
#  VG0/LV0: Merged: 100,00%
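If you prefer not to wait, the merge can be backgrounded with -b/--background and its progress polled with lvs (a sketch):

lvconvert -b --merge /dev/VG0/SNAP1
lvs -a VG0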

Backup

A consistent backup of the origin can be taken by streaming the snapshot device while the origin stays in use:

dd if=/dev/VG0/SNAP1 | gzip > backup.gz

Notice that, since unused blocks compress to almost nothing, the size of the backup roughly tracks the original data, not the maximum size of the snapshot.
Once the backup is done and saved we can remove the snapshot.

lvm lvremove /dev/VG0/SNAP1
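Should you later need to restore the image, reverse the pipe (a sketch, assuming the target LV exists, is unmounted, and is at least as large as the image):

gzip -dc backup.gz | dd of=/dev/VG0/LV0 bs=4M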

What if the LVM Snapshot Is Full

If the changes to the original LVM filesystem keep growing, the snapshot volume becomes 100% full and the snapshot is marked as invalid.

lvdisplay /dev/VG0/SNAP1

# LV snapshot status     INACTIVE destination for LV0

The device-mapper status will show that the snapshot is marked as invalid

dmsetup status
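# VG0-SNAP1: 0 409600 snapshot Invalid   (illustrative: a full snapshot reports "Invalid")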

The only thing you can do at this point is remove the snapshot:

lvremove -f /dev/mapper/VG0-SNAP1

To avoid this situation, remember to check the snapshot usage and, if necessary, extend the snapshot volume:

lvextend -L +3G /dev/mapper/VG0-SNAP1

Take care to put the + sign before the size: it tells lvextend to grow the current size by 3 GiB instead of setting the total size to 3 GiB.
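For contrast, the two sizing forms side by side (illustrative values; in this small lab VG pick an increment that actually fits):

lvextend -L +100M /dev/mapper/VG0-SNAP1   # grow the snapshot BY 100 MiB
lvextend -L 300M /dev/mapper/VG0-SNAP1    # set its total size TO 300 MiB

To keep an eye on the usage before it becomes critical, reuse the lvdisplay field shown earlier:

lvdisplay /dev/VG0/SNAP1 | grep "Allocated to snapshot"
#  Allocated to snapshot  0,04%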