Friday, March 2, 2018

Linux: LVM Configuration in Linux

In this post, we are going to perform some basic LVM operations. The OS is a CentOS 6.5 virtual machine, and a new 20 GB HDD has been attached to it.

[root@lvm ~]# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom
sda                              8:0    0  100G  0 disk
├─sda1                           8:1    0  500M  0 part /boot
└─sda2                           8:2    0 99.5G  0 part
  ├─vg_cellmgr1-lv_root (dm-0) 253:0    0   50G  0 lvm  /
  ├─vg_cellmgr1-lv_swap (dm-1) 253:1    0  7.9G  0 lvm  [SWAP]
  └─vg_cellmgr1-lv_home (dm-2) 253:2    0 41.7G  0 lvm  /home

[root@lvm ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2

As seen above, however, the new HDD is not yet visible to the OS and its device file does not exist. We need to re-scan the SCSI buses to which the disks are attached.

[root@lvm ~]# for BUS in /sys/class/scsi_host/host*/scan
> do
> echo "- - -" > ${BUS}
> done

We have re-scanned the SCSI buses using the loop above. This is the procedure for detecting newly added hard disks. If an existing HDD has instead been grown, re-scan that individual device with one of the following commands (where 'device' stands for the SCSI address and 'sdb' for the block device name):

echo "1" > /sys/class/scsi_device/device/rescan
echo "1" > /sys/class/block/sdb/device/rescan
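
The per-device rescan can be wrapped with a small guard so it fails loudly when the sysfs node is missing or not writable. A minimal sketch (the function name 'rescan_block_device' is ours, not a standard tool; root is required to actually trigger a rescan):

```shell
# Re-scan a single, already-known device instead of every SCSI host.
rescan_block_device() {
    dev="$1"                                   # e.g. sdb
    node="/sys/class/block/${dev}/device/rescan"
    if [ -w "$node" ]; then
        echo 1 > "$node"
    else
        echo "cannot rescan ${dev}: ${node} not writable" >&2
        return 1
    fi
}
# Usage (requires root):
# rescan_block_device sdb
```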

We can see below that the HDD has been added.

[root@lvm ~]# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom
sda                              8:0    0  100G  0 disk
├─sda1                           8:1    0  500M  0 part /boot
└─sda2                           8:2    0 99.5G  0 part
  ├─vg_cellmgr1-lv_root (dm-0) 253:0    0   50G  0 lvm  /
  ├─vg_cellmgr1-lv_swap (dm-1) 253:1    0  7.9G  0 lvm  [SWAP]
  └─vg_cellmgr1-lv_home (dm-2) 253:2    0 41.7G  0 lvm  /home
sdb                              8:16   0   20G  0 disk

[root@lvm ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb

Now, the new HDD is present in the OS as:
/dev/sdb

We will use fdisk to create the partition.

[root@lvm ~]# fdisk /dev/sdb

The following steps need to be followed carefully:

  1. Type 'n' to create a new partition.
  2. Type 'p' for a primary partition.
  3. Type '1' for the partition number.
  4. Press 'Enter' at the First and Last cylinder prompts to accept the defaults, so the partition uses the whole disk.
  5. Type 'p' to print the partition table.

The above steps are executed as shown below.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610

Command (m for help): p
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux

Type 't' to change the type of the partition just created. We will use the hex code '8e' for Linux LVM.

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x114f0a8d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  8e  Linux LVM

Write the changes using the 'w' command.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@lvm ~]# partprobe /dev/sdb

[root@lvm ~]# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom
sda                              8:0    0  100G  0 disk
├─sda1                           8:1    0  500M  0 part /boot
└─sda2                           8:2    0 99.5G  0 part
  ├─vg_cellmgr1-lv_root (dm-0) 253:0    0   50G  0 lvm  /
  ├─vg_cellmgr1-lv_swap (dm-1) 253:1    0  7.9G  0 lvm  [SWAP]
  └─vg_cellmgr1-lv_home (dm-2) 253:2    0 41.7G  0 lvm  /home
sdb                              8:16   0   20G  0 disk
└─sdb1                           8:17   0   20G  0 part
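
The interactive fdisk session above can also be scripted by piping the same keystrokes into it. A sketch, assuming a blank disk and a single partition (the function name is ours); this destroys any existing partition table on the target:

```shell
# Script the fdisk dialogue: n (new), p (primary), 1 (number),
# two empty lines (default first/last cylinder), t + 8e (Linux LVM type),
# w (write). With only one partition, 't' auto-selects partition 1.
partition_for_lvm() {
    disk="$1"                                  # e.g. /dev/sdb
    printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk "$disk"
    partprobe "$disk"                          # re-read the partition table
}
# Usage (DESTRUCTIVE to the target disk):
# partition_for_lvm /dev/sdb
```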

The partition has been created. Now we can build the LVM stack. The first step is to create a Physical Volume.

[root@lvm ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

The second step is to create a Volume Group.

[root@lvm ~]# vgcreate vg_test /dev/sdb1
  Volume group "vg_test" successfully created

[root@lvm ~]# vgdisplay vg_test
  --- Volume group ---
  VG Name               vg_test
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       0 / 0
  Free  PE / Size       5118 / 19.99 GiB
  VG UUID               ViaF4b-G49v-AG5W-PlkE-ydsu-hThU-ClZeGv

[root@lvm ~]# vgs
  VG          #PV #LV #SN Attr   VSize  VFree
  vg_cellmgr1   1   3   0 wz--n- 99.51g     0
  vg_test       1   0   0 wz--n- 19.99g 19.99g
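
The figures reported above are consistent: 5118 extents of 4 MiB each make up the 19.99 GiB Volume Group (the ~20 GB partition minus LVM metadata overhead). A quick arithmetic check:

```shell
# vgdisplay: Total PE 5118, PE Size 4.00 MiB -> VG Size 19.99 GiB
PE_COUNT=5118
PE_SIZE_MIB=4
VG_SIZE_MIB=$((PE_COUNT * PE_SIZE_MIB))
echo "${VG_SIZE_MIB} MiB"    # 20472 MiB, i.e. ~19.99 GiB
```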

The Volume Group has been created and is verified above. The third step is to create Logical Volumes. We will create two: the first of 15 GB, the second from the remaining space (about 5 GB).

[root@lvm ~]# lvcreate -L 15G -n lv_live1 vg_test
  Logical volume "lv_live1" created

[root@lvm ~]# vgdisplay vg_test
  --- Volume group ---
  VG Name               vg_test
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       3840 / 15.00 GiB
  Free  PE / Size       1278 / 4.99 GiB
  VG UUID               ViaF4b-G49v-AG5W-PlkE-ydsu-hThU-ClZeGv
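
The allocation figures also check out: the 15 GiB Logical Volume consumed 3840 of the 5118 extents, leaving 1278 extents (about 4.99 GiB) free:

```shell
# vgdisplay: Alloc PE 3840 (15.00 GiB), Free PE 1278 (~4.99 GiB)
TOTAL_PE=5118
LV1_PE=$((15 * 1024 / 4))          # 15 GiB at 4 MiB per extent = 3840 extents
FREE_PE=$((TOTAL_PE - LV1_PE))
FREE_MIB=$((FREE_PE * 4))
echo "${FREE_PE} extents free = ${FREE_MIB} MiB"   # 1278 extents = 5112 MiB
```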

Now we will create the second Logical Volume, named 'lv_live2', from the remaining free space in the Volume Group 'vg_test'.

[root@lvm ~]# lvcreate -l 100%FREE -n lv_live2 vg_test
  Logical volume "lv_live2" created
[root@lvm ~]# lvs |grep lv_live
  lv_live1 vg_test     -wi-a----- 15.00g
  lv_live2 vg_test     -wi-a-----  4.99g

The Logical Volumes have been successfully created.

[root@lvm ~]# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom
sda                              8:0    0  100G  0 disk
├─sda1                           8:1    0  500M  0 part /boot
└─sda2                           8:2    0 99.5G  0 part
  ├─vg_cellmgr1-lv_root (dm-0) 253:0    0   50G  0 lvm  /
  ├─vg_cellmgr1-lv_swap (dm-1) 253:1    0  7.9G  0 lvm  [SWAP]
  └─vg_cellmgr1-lv_home (dm-2) 253:2    0 41.7G  0 lvm  /home
sdb                              8:16   0   20G  0 disk
└─sdb1                           8:17   0   20G  0 part
  ├─vg_test-lv_live1 (dm-3)    253:3    0   15G  0 lvm
  └─vg_test-lv_live2 (dm-4)    253:4    0    5G  0 lvm

Next we create filesystems on the Logical Volumes; we will format them as ext4.

[root@lvm ~]# mkfs.ext4 /dev/vg_test/lv_live1

[root@lvm ~]# mkfs.ext4 /dev/vg_test/lv_live2

[root@lvm ~]# blkid |grep lv_live
/dev/mapper/vg_test-lv_live1: UUID="3d7cba8b-bd4a-48d3-a43b-013b964c95a0" TYPE="ext4"
/dev/mapper/vg_test-lv_live2: UUID="ec5bafd1-d2aa-4dea-b80b-6c5e3bf3273a" TYPE="ext4"

We will create two directories, '/Live1' and '/Live2', to serve as the mount points for the Logical Volumes.

[root@lvm ~]# mkdir /Live{1..2}
[root@lvm ~]# ls /
bin  boot  dev  etc  home  lib  lib64  Live1  Live2  lost+found  media  misc  mnt  net  opt  proc  repo  root  sbin  selinux  srv  sys  tmp  usr  var

The following entries are added to /etc/fstab so the mounts persist across reboots.

[root@lvm ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed May  9 10:48:07 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_cellmgr1-lv_root /                       ext4    defaults        1 1
UUID=501e7ee7-ce91-41cd-9036-14ab4f2eaecd /boot                   ext4    defaults        1 2
/dev/mapper/vg_cellmgr1-lv_home /home                   ext4    defaults        1 2
/dev/mapper/vg_cellmgr1-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

/dev/mapper/vg_test-lv_live1    /Live1  ext4    defaults        1       1
/dev/mapper/vg_test-lv_live2    /Live2  ext4    defaults        1       1
[root@lvm ~]# mount -a

Now, the mount points are ready.

[root@lvm ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_cellmgr1-lv_root   50G  6.3G   41G  14% /
tmpfs                            940M   72K  940M   1% /dev/shm
/dev/sda1                        485M   39M  421M   9% /boot
/dev/mapper/vg_cellmgr1-lv_home   41G  176M   39G   1% /home
/dev/mapper/vg_test-lv_live1      15G  166M   14G   2% /Live1
/dev/mapper/vg_test-lv_live2     5.0G  138M  4.6G   3% /Live2
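
As an aside, the fstab entries could equally reference the filesystems by the UUIDs that blkid reported earlier, which is robust against device renaming; note also that non-root filesystems conventionally use fsck pass number 2 rather than 1. A sketch of the equivalent entries:

```
UUID=3d7cba8b-bd4a-48d3-a43b-013b964c95a0  /Live1  ext4  defaults  0 2
UUID=ec5bafd1-d2aa-4dea-b80b-6c5e3bf3273a  /Live2  ext4  defaults  0 2
```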

Extend Logical Volume


I have added another 5 GB HDD to the VM, so we repeat the earlier step of re-scanning the SCSI buses.

[root@lvm ~]# for BUS in /sys/class/scsi_host/host*/scan
> do
> echo "- - -" > ${BUS}
> done
[root@lvm ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdb1  /dev/sdc
[root@lvm ~]# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                             11:0    1 1024M  0 rom
sda                              8:0    0  100G  0 disk
├─sda1                           8:1    0  500M  0 part /boot
└─sda2                           8:2    0 99.5G  0 part
  ├─vg_cellmgr1-lv_root (dm-0) 253:0    0   50G  0 lvm  /
  ├─vg_cellmgr1-lv_swap (dm-1) 253:1    0  7.9G  0 lvm  [SWAP]
  └─vg_cellmgr1-lv_home (dm-2) 253:2    0 41.7G  0 lvm  /home
sdb                              8:16   0   20G  0 disk
└─sdb1                           8:17   0   20G  0 part
  ├─vg_test-lv_live1 (dm-3)    253:3    0   15G  0 lvm  /Live1
  └─vg_test-lv_live2 (dm-4)    253:4    0    5G  0 lvm  /Live2
sdc                              8:32   0    5G  0 disk

We create a Physical Volume on the new HDD and extend the Volume Group 'vg_test' with it. Note that this time the whole disk ('/dev/sdc') is used as a PV, without partitioning it first.

[root@lvm ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created

[root@lvm ~]# vgs
  VG          #PV #LV #SN Attr   VSize  VFree
  vg_cellmgr1   1   3   0 wz--n- 99.51g    0
  vg_test       1   2   0 wz--n- 19.99g    0
[root@lvm ~]# vgextend vg_test /dev/sdc
  Volume group "vg_test" successfully extended
[root@lvm ~]# vgs
  VG          #PV #LV #SN Attr   VSize  VFree
  vg_cellmgr1   1   3   0 wz--n- 99.51g    0
  vg_test       2   2   0 wz--n- 24.99g 5.00g

Now we extend the Logical Volume with the free space in the Volume Group 'vg_test'.

[root@lvm ~]# lvextend -l +100%FREE /dev/vg_test/lv_live1
  Extending logical volume lv_live1 to 20.00 GiB
  Logical volume lv_live1 successfully resized
[root@lvm ~]# lvs |grep live
  lv_live1 vg_test     -wi-ao---- 20.00g
  lv_live2 vg_test     -wi-ao----  4.99g

[root@lvm ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_cellmgr1-lv_root   50G  6.3G   41G  14% /
tmpfs                            940M   72K  940M   1% /dev/shm
/dev/sda1                        485M   39M  421M   9% /boot
/dev/mapper/vg_cellmgr1-lv_home   41G  176M   39G   1% /home
/dev/mapper/vg_test-lv_live1      15G  166M   14G   2% /Live1
/dev/mapper/vg_test-lv_live2     5.0G  138M  4.6G   3% /Live2

However, df still reports the old size: the Logical Volume has grown, but the filesystem on it has not. So we resize the filesystem, which ext4 supports online.

[root@lvm ~]# resize2fs /dev/vg_test/lv_live1
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_test/lv_live1 is mounted on /Live1; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/vg_test/lv_live1 to 5241856 (4k) blocks.
The filesystem on /dev/vg_test/lv_live1 is now 5241856 blocks long.

[root@lvm ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_cellmgr1-lv_root   50G  6.3G   41G  14% /
tmpfs                            940M   72K  940M   1% /dev/shm
/dev/sda1                        485M   39M  421M   9% /boot
/dev/mapper/vg_cellmgr1-lv_home   41G  176M   39G   1% /home
/dev/mapper/vg_test-lv_live1      20G  170M   19G   1% /Live1
/dev/mapper/vg_test-lv_live2     5.0G  138M  4.6G   3% /Live2
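
As an alternative to the separate lvextend and resize2fs steps, reasonably recent lvm2 builds accept -r/--resizefs, which resizes the filesystem (via fsadm) together with the Logical Volume. A sketch (the wrapper name is ours):

```shell
# Grow an LV and its filesystem in one step, where lvm2 supports --resizefs.
extend_lv_and_fs() {
    lv="$1"        # e.g. /dev/vg_test/lv_live1
    size="$2"      # amount to add, e.g. 5G
    lvextend -r -L "+${size}" "$lv"
}
# Usage:
# extend_lv_and_fs /dev/vg_test/lv_live1 5G
```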

Reduce Logical Volume Size


First we unmount the filesystem ('umount /Live1') and run a forced check with e2fsck. Then we shrink the filesystem to a size at or below the size we are going to reduce the Logical Volume to. Here we shrink the filesystem to 10 GB, below the 12 GB target for the Logical Volume; after the reduction, resize2fs will grow the filesystem back to fill the 12 GB exactly.

[root@lvm ~]# e2fsck -f /dev/vg_test/lv_live1
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg_test/lv_live1: 11/1310720 files (0.0% non-contiguous), 125585/5241856 blocks

[root@lvm ~]# resize2fs /dev/vg_test/lv_live1 10G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg_test/lv_live1 to 2621440 (4k) blocks.
The filesystem on /dev/vg_test/lv_live1 is now 2621440 blocks long.

[root@lvm ~]# lvreduce -L 12G /dev/vg_test/lv_live1
  WARNING: Reducing active logical volume to 12.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv_live1? [y/n]: y
  Reducing logical volume lv_live1 to 12.00 GiB
  Logical volume lv_live1 successfully resized
[root@lvm ~]# resize2fs /dev/vg_test/lv_live1
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/vg_test/lv_live1 to 3145728 (4k) blocks.
The filesystem on /dev/vg_test/lv_live1 is now 3145728 blocks long.
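
Cross-checking the block count: 3145728 blocks of 4 KiB is exactly the 12 GiB the Logical Volume was reduced to:

```shell
# resize2fs grew the filesystem to fill the reduced LV:
BLOCKS=3145728
BLOCK_SIZE=4096
BYTES=$((BLOCKS * BLOCK_SIZE))
GIB=$((BYTES / 1024 / 1024 / 1024))
echo "${GIB} GiB"    # 12 GiB
```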

Now we can see below that the Logical Volume has been reduced to 12 GB.

[root@lvm ~]# mount -a
[root@lvm ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_cellmgr1-lv_root   50G  6.3G   41G  14% /
tmpfs                            940M   72K  940M   1% /dev/shm
/dev/sda1                        485M   39M  421M   9% /boot
/dev/mapper/vg_cellmgr1-lv_home   41G  176M   39G   1% /home
/dev/mapper/vg_test-lv_live2     5.0G  138M  4.6G   3% /Live2
/dev/mapper/vg_test-lv_live1      12G  166M   12G   2% /Live1
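
The shrink procedure from this section can be collected into a helper that enforces the safe ordering: unmount, fsck, shrink the filesystem first, reduce the LV, then let resize2fs grow the filesystem back to fill it exactly. A sketch (the function name is ours; get the ordering wrong and data is lost):

```shell
# Safe ext4-on-LVM shrink ordering, mirroring the steps above.
shrink_lv_ext4() {
    lv="$1"         # e.g. /dev/vg_test/lv_live1
    fs_size="$2"    # intermediate fs size, at or below lv_size, e.g. 10G
    lv_size="$3"    # final LV size, e.g. 12G
    umount "$lv"                    || return 1
    e2fsck -f "$lv"                 || return 1
    resize2fs "$lv" "$fs_size"      || return 1   # shrink the fs first
    lvreduce -f -L "$lv_size" "$lv" || return 1   # then shrink the LV
    resize2fs "$lv"                               # grow fs to fill the LV
}
# Usage:
# shrink_lv_ext4 /dev/vg_test/lv_live1 10G 12G
# mount -a
```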
