In this final post of this series (probably not!), we are going to reconfigure the two nodes and completely remove the cluster services.
Stop the cluster services
Check the cluster status
[root@N2 ~]# clustat
Cluster Status for HAcluster @ Fri Feb 1 15:01:46 2019
Member Status: Quorate

 Member Name                 ID   Status
 ------ ----                 ---- ------
 hb1.off.com                    1 Online, rgmanager
 hb2.off.com                    2 Online, Local, rgmanager
 /dev/block/8:32                0 Online, Quorum Disk

 Service Name                Owner (Last)                State
 ------- ----                ----- ------                -----
 service:RS                  hb2.off.com                 started
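Before touching anything, we can optionally list the services and resources configured in cluster.conf using the same ccs tool; this is just a sanity check, and the output will vary with your configuration.

[root@N2 ~]# ccs -h localhost --lsservices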
Stop and disable the cluster services
First, we remove the "RS" service using the ccs tool.
[root@N2 ~]# ccs -h localhost --rmservice RS
Then we stop and permanently disable the cluster services on both nodes with a single command.
[root@N2 ~]# ccs -h localhost --stopall
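As a side note, if you prefer to do this by hand on each node instead of through ccs, a rough equivalent with the standard RHEL 6 init scripts would be the following (assuming the default cman/clvmd/rgmanager stack):

[root@N2 ~]# service rgmanager stop
[root@N2 ~]# service clvmd stop
[root@N2 ~]# service cman stop
[root@N2 ~]# chkconfig rgmanager off
[root@N2 ~]# chkconfig clvmd off
[root@N2 ~]# chkconfig cman off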
Change the Volume Group type
In the second post of this series, I configured HA LVM with locking type 1. However, when I set the cluster up again for this post, I used CLVM for the LVM setup and changed the locking type to 3. The volume group attribute was set to "cy", meaning "clustered - yes". Since the cluster services (and therefore clvmd) are now stopped, we temporarily disable LVM locking in order to check the status of the volume group of the shared storage.
[root@N2 ~]# vgdisplay shared_vg --config 'global {locking_type = 0}' | grep Clustered
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Clustered yes
Now, we change the volume group from clustered to non-clustered, again overriding the locking type because clvmd is no longer running.
[root@N2 ~]# vgchange -cn shared_vg --config 'global {locking_type = 0}'
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Volume group "shared_vg" successfully changed
We should note that if the cluster services were still running, the following command would have sufficed to change the volume group attributes.
[root@N2 ~]# vgchange -cn shared_vg
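Since clvmd will not run on this node anymore, it is also worth switching the LVM locking type in /etc/lvm/lvm.conf back from 3 to 1. A minimal sketch using the lvmconf helper shipped with the lvm2-cluster package:

[root@N2 ~]# lvmconf --disable-cluster
[root@N2 ~]# grep "locking_type" /etc/lvm/lvm.conf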
Now, the shared_vg volume group is visible.
[root@N2 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  shared_vg   1   1   0 wz--n- 15.00g    0
  vg_n2       1   2   0 wz--n- 49.51g    0
Then we activate the volume group.
[root@N2 ~]# vgchange -ay shared_vg
1 logical volume(s) in volume group "shared_vg" now active
We can also check the logical volume "ha_lv":
[root@N2 ~]# lvs
  LV      VG        Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  ha_lv   shared_vg -wi-a----- 15.00g
  lv_root vg_n2     -wi-ao---- 45.63g
  lv_swap vg_n2     -wi-ao----  3.88g
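Before mounting, a quick optional check that the ext4 filesystem on the logical volume is still intact; blkid should report TYPE="ext4" for the device.

[root@N2 ~]# blkid /dev/shared_vg/ha_lv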
Now, we need to mount the LVM device at the mount point. The corresponding entry is already present in /etc/fstab:
[root@N2 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 16 12:03:29 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_n2-lv_root                  /                   ext4    defaults        1 1
UUID=01a596ef-a4f8-4e28-9e1b-b54b2b9695aa  /boot               ext4    defaults        1 2
/dev/mapper/vg_n2-lv_swap                  swap                swap    defaults        0 0
tmpfs                                      /dev/shm            tmpfs   defaults        0 0
devpts                                     /dev/pts            devpts  gid=5,mode=620  0 0
sysfs                                      /sys                sysfs   defaults        0 0
proc                                       /proc               proc    defaults        0 0
/dev/shared_vg/ha_lv                       /ClusterMountPoint  ext4    defaults        1 2
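If we want to test just this entry first, mount can be given the mount point alone and will look the device up in /etc/fstab:

[root@N2 ~]# mount /ClusterMountPoint
[root@N2 ~]# umount /ClusterMountPoint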
After mounting everything in /etc/fstab with `mount -a`, we can confirm the mount point with `df -h`.
[root@N2 ~]# mount -a
[root@N2 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_n2-lv_root     45G  5.9G   37G  14% /
tmpfs                        1.9G     0  1.9G   0% /dev/shm
/dev/sda1                    485M   39M  421M   9% /boot
/dev/mapper/shared_vg-ha_lv   15G  153M   14G   2% /ClusterMountPoint
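As a last sanity check, we can verify that none of the cluster services are still set to start at boot (again assuming the default cman/clvmd/rgmanager service names):

[root@N2 ~]# chkconfig --list | egrep 'cman|clvmd|rgmanager'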
Finally, we have completed all the steps to remove the cluster services from both nodes. We can now power off the other node and remove the shared RDM disks from it, being careful not to tick the "Delete files from datastore" option, as the disks are still in use by the second node, N2.