RHCS – Introduction to the Clustered Logical Volume Manager (CLVM)
What is the Purpose of LVM in a Clustered Environment, and What are the Limitations?
We all know that the purpose of LVM is to create an abstraction layer over physical storage, allowing the creation of logical storage volumes. This provides much greater flexibility than using physical storage directly, and it can be employed on standalone systems or on clusters accessing shared storage. However, in a cluster, some precautions must be taken to ensure the integrity of the data on shared storage that is accessed by multiple systems.
When a system first scans its storage devices for LVM physical volumes, it reads the LVM metadata and builds the device-mapper maps (linear, striped, or mirrored) for that physical storage. These maps define the boundaries, and therefore the sizes, of each logical volume, and thus which storage addresses can be used for data and filesystem information. After the initial scan, this information may be cached in memory and used later when accessing those storage devices. When multiple systems access the same devices, as in a cluster, it is critical that they communicate any changes made to the LVM metadata to the other systems. If one system changes the metadata and another system is not notified, the LVM metadata or the data residing on the physical volumes can become corrupted.
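As a quick illustration (these commands are not a required setup step, just a way to see the per-node view), each node builds its own picture of the storage from the same on-disk metadata whenever it scans:

# pvscan     # rescan all block devices for LVM physical volumes
# vgscan     # rebuild the list of volume groups from the on-disk metadata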
In Red Hat Cluster Suite, the cluster resource manager (rgmanager) controls access to shared storage so that the data is not corrupted.
There are two methods for configuring LVM in a Red Hat clustered environment:
Method 1: The traditional approach is to configure storage access in active/passive mode, so that only one cluster node accesses the storage at a time. This method is known as the HA-LVM tagging variant; a brief review follows (with a minimal configuration sketch after the list):
1. It uses local machine locking; that is, the locking_type parameter in the global section of the /etc/lvm/lvm.conf file is set to the value '1'.
2. This method is available in RHEL 4.5+, RHEL 5, and RHEL 6.
3. This option is better suited for clusters utilizing non-clustered filesystems like ext2 and ext3, because it prevents multiple nodes from mounting the same file system at once.
4. It is compatible with stretch (multi-site) clusters.
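For reference, the tagging variant is driven mainly by /etc/lvm/lvm.conf together with an lvm resource managed by rgmanager. The snippet below is only a minimal sketch that reuses this article's node and volume group names (replace them with your own); it is not a complete HA-LVM setup:

# /etc/lvm/lvm.conf (HA-LVM tagging variant - sketch only)
locking_type = 1
# Allow automatic activation only of the local root VG and of VGs tagged
# with this node's hostname; the cluster's lvm resource adds/removes the
# tag on failover, so the shared VG is active on one node at a time.
volume_list = [ "vg_gurkulclunode1", "@gurkulclu-node1" ]

After changing volume_list, the initramfs normally has to be rebuilt so the restriction also applies at boot time, and the shared volume group is then handled by the cluster service rather than being activated by the OS.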
Method 2: The second method, and the one used in this article on RHEL 6, is CLVM – the Clustered LVM.
Clustered Logical Volume Manager (CLVM) is an extension to LVM2, provided by a package called lvm2-cluster, which exists in the Cluster Storage channels for Red Hat Enterprise Linux 4 and 5 on Red Hat Network (RHN). This package provides the clvmd daemon, which is responsible for managing clustered volume groups and communicating their metadata changes to the other cluster members. With this option, the same volume group and logical volumes can be activated and mounted on multiple nodes as long as clvmd is running and the cluster is quorate. This option is better suited for clusters using GFS or GFS2 filesystems, since those are often mounted on multiple nodes at the same time (and thus require that the logical volume be active on each). This method is easier to set up and better at preventing administrative mistakes (e.g., removing a logical volume that is in use).
NOTE: Although clvmd allows for the same volume group to be activated on multiple systems concurrently, this does not make it possible to mount a non-clustered filesystem such as ext2 or ext3 on multiple nodes at the same time. Doing so will likely cause corruption of the filesystem and data. Proper precautions should be taken to ensure no two nodes mount the same non-clustered filesystem at once.
You don't need CLVM:
– If only one cluster node accesses the shared storage at a time, you can just use LVM.
– If you are using a clustered system for failover, where only a single node that accesses the storage is active at any one time, you can use HA-LVM.
– If you have stretch (or multi-site) clusters.
You need CLVM:
– If more than one cluster node accesses the shared storage and writes to it simultaneously.
How Do We Configure CLVM on the Cluster Nodes?
To enable CLVM, we need to make the following modifications on each cluster node. The cluster manager should be running before we enable CLVM on the cluster nodes.
Step 1: Make sure the lvm2-cluster package is installed
# yum list lvm2-cluster
Installed Packages
lvm2-cluster.x86_64 2.02.98-9.el6 @base
#
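If the package is not installed yet, it can be pulled in with yum (assuming the node has access to a repository that carries it, such as the Resilient Storage channel on RHEL 6):

# yum install -y lvm2-cluster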
Step 2: Edit /etc/lvm/lvm.conf and change locking_type to 3 (type 3 uses built-in clustered locking)
locking_type = 3
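Instead of editing the file by hand, the lvm2-cluster package also ships the lvmconf helper, which makes the same change; either way, it is worth confirming the value afterwards:

# lvmconf --enable-cluster               # sets locking_type = 3 in /etc/lvm/lvm.conf
# grep locking_type /etc/lvm/lvm.conf    # verify the change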
Step 3: Check that the distributed lock manager daemon (dlm_controld) is running
# ps -ef|grep dlm_controld
root 1667 1 0 04:44 ? 00:00:00 dlm_controld
root 10429 3322 0 05:12 pts/0 00:00:00 grep dlm_controld
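On RHEL 6 with Red Hat Cluster Suite, dlm_controld is started as part of the cman service, so if it is not running, checking the cluster manager first is a reasonable next step:

# service cman status    # the cluster manager service that starts dlm_controld
# cman_tool status       # shows membership and whether the cluster is quorate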
Step 4: Enable and start the clvmd service
# chkconfig clvmd on
# chkconfig --list clvmd
clvmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# service clvmd start
Activating VG(s): 2 logical volume(s) in volume group "vg_gurkulclunode1" now active [ OK ]
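A quick sanity check at this point is to confirm the daemon is up; once clustered volume groups exist, they can also be recognised by the 'c' at the end of the Attr column in vgs output (as seen later in this article):

# service clvmd status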
Verifying That CLVM Works on Both Cluster Nodes
On Gurkulclu-node1:
[root@gurkulclu-node1 ~]# fdisk -l | grep -i Disk
Disk /dev/vda: 8589 MB, 8589934592 bytes
Disk identifier: 0x0001d529
Disk /dev/mapper/vg_gurkulclunode1-lv_root: 3934 MB, 3934257152 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_gurkulclunode1-lv_swap: 4127 MB, 4127195136 bytes
Disk identifier: 0x00000000
Disk /dev/sda: 2147 MB, 2147483648 bytes
Disk identifier: 0x0006ae60
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk identifier: 0x000c410f
Disk /dev/mapper/1IET_00010001: 2147 MB, 2147483648 bytes
Disk identifier: 0x0006ae60
Disk /dev/mapper/1IET_00010002: 2147 MB, 2147483648 bytes
Disk identifier: 0x000c410f
Disk /dev/mapper/1IET_00010001p1: 2146 MB, 2146765824 bytes
Disk identifier: 0x00000000
[root@gurkulclu-node1 ~]#
Let's choose the shared disk /dev/mapper/1IET_00010002 for our LVM operations and create LVM volumes on top of it, as described below:
Create a new partition on the shared disk (cylinders 1-100)
[root@gurkulclu-node1 ~]# fdisk /dev/mapper/1IET_00010002
WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): 100
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@gurkulclu-node1 ~]#
Reread the partition table with partprobe
[root@gurkulclu-node1 ~]# partprobe /dev/mapper/1IET_00010002
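If partprobe does not create the partition mapping for a device-mapper device (the fdisk warning above also mentions kpartx as an option), the mapping can be added with kpartx instead; shown here only as an alternative:

# kpartx -a /dev/mapper/1IET_00010002    # add partition mappings for the new partition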
Check that Partition Exists
[root@gurkulclu-node1 ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx. 1 root root 7 Sep 20 04:44 1IET_00010001 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Sep 20 04:44 1IET_00010001p1 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Sep 20 04:44 1IET_00010002 -> ../dm-3
lrwxrwxrwx. 1 root root 7 Sep 20 05:31 1IET_00010002p1 -> ../dm-5
crw-rw----. 1 root root 10, 58 Sep 20 04:43 control
lrwxrwxrwx. 1 root root 7 Sep 20 04:43 vg_gurkulclunode1-lv_root -> ../dm-0
lrwxrwxrwx. 1 root root 7 Sep 20 04:43 vg_gurkulclunode1-lv_swap -> ../dm-1
LVM Operation Checks:
Create a PV on gurkulclu-node1:
No PV exists yet except the default OS volume
[root@gurkulclu-node1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 vg_gurkulclunode1 lvm2 a-- 7.51g 0
Create a New PV from the new partition we created
[root@gurkulclu-node1 ~]# pvcreate /dev/mapper/1IET_00010002p1
Physical volume "/dev/mapper/1IET_00010002p1" successfully created
Check the new PV
[root@gurkulclu-node1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/1IET_00010002p1 lvm2 a-- 784.39m 784.39m
/dev/vda2 vg_gurkulclunode1 lvm2 a-- 7.51g 0
Create New volume Group vgCLUSTER
[root@gurkulclu-node1 ~]# vgcreate vgCLUSTER /dev/mapper/1IET_00010002p1
Clustered volume group "vgCLUSTER" successfully created
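The word "Clustered" in that message appears because clvmd is running and locking_type is 3, so new volume groups get the clustered flag by default. If the flag ever needs to be set explicitly (for example on a VG that was created before clvmd was running), it can be controlled directly; a sketch, not a step required here:

# vgcreate -cy vgCLUSTER /dev/mapper/1IET_00010002p1   # create with the clustered flag set explicitly
# vgchange -cy vgCLUSTER                               # mark an existing VG as clustered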
Create New Logical Volume
[root@gurkulclu-node1 ~]# lvcreate -L+100M -n clvolume1 vgCLUSTER
Logical volume "clvolume1" created
[root@gurkulclu-node1 ~]#
Verify at Gurkulclu-node2:
We See New PV here
[root@gurkulclu-node2 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/1IET_00010002p1 lvm2 a-- 784.39m 784.39m
/dev/vda2 vg_gurkulclunode2 lvm2 a-- 7.51g 0
We See New VG here
[root@gurkulclu-node2 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vgCLUSTER 1 1 0 wz--nc 780.00m 680.00m
vg_gurkulclunode2 1 2 0 wz--n- 7.51g 0
We See New LV here
[root@gurkulclu-node2 ~]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
clvolume1 vgCLUSTER -wi-a---- 100.00m
lv_root vg_gurkulclunode2 -wi-ao--- 3.66g
lv_swap vg_gurkulclunode2 -wi-ao--- 3.84g
[root@gurkulclu-node2 ~]#
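Note that clvmd activates the clustered logical volume on every node by default (the 'a' in the Attr column above). If the volume will carry a non-clustered filesystem such as ext2 or ext3, it is safer to activate it on only one node at a time; a sketch using LVM's exclusive activation:

# lvchange -aey vgCLUSTER/clvolume1   # activate exclusively on this node
# lvchange -an vgCLUSTER/clvolume1    # deactivate it again when done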
LV Removal Operations:
Step 1:
[root@gurkulclu-node1 ~]# lvremove vgCLUSTER clvolume1
Do you really want to remove active clustered logical volume clvolume1? [y/n]:
--> I am holding here for a second to see how the other node responds to a similar command.
Step 2:
[root@gurkulclu-node2 ~]# lvremove vgCLUSTER clvolume1
--> I see no response here (the command just waits), so I go back to the first node to answer the yes/no question.
Step 3:
[root@gurkulclu-node1 ~]# lvremove /dev/vgCLUSTER/clvolume1
Do you really want to remove active clustered logical volume clvolume1? [y/n]: n
Logical volume clvolume1 not removed
[root@gurkulclu-node1 ~]#
--> I am out of the command now, and go back to the other node to see the status of its lvremove command.
Step 4:
[root@gurkulclu-node1 ~]# lvremove /dev/vgCLUSTER/clvolume1
Do you really want to remove active clustered logical volume clvolume1? [y/n]: n
--> Now the y/n question for this operation is finally displayed.
This is because of the distributed lock manager (dlm_controld) running across the nodes: the second node's lvremove waits until the first node releases its lock, which prevents accidental removal of volumes that are in use on another node.
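The lockspaces that DLM is holding can be inspected on either node; on RHEL 6, if the dlm_tool utility is available (it ships with the cluster packages), it lists them, and a clvmd lockspace should be present while clvmd is running:

# dlm_tool ls    # list the active DLM lockspaces (expect one named clvmd)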
In the continuation post, I will discuss the filesystems used in the Red Hat cluster environment. By the way, we have already gone through shared storage configuration using iSCSI in our previous post – RHEL 6 : ISCSI Administration Series : Configuring Shared ISCSI Storage