VCS Learning – I/O Fencing In action [ Video ]

This post is all about the purpose of I/O fencing and how it works to avoid the split-brain condition.

Normally I spend 4 to 6 hours to post a technical article with complete details, but for this one I have spent a lot more time. The reason is my promise to you all: to explain complex technology in simple terms. I believe I/O fencing is one of those complex and confusing topics that most people try to avoid during their learning (even I did the same), but we cannot skip it much longer, given the increased complexity of our UNIX networks. To make the concept simple to understand, I have built a whole setup to simulate the real-time failure. Please go ahead …..

My Lab Configuration:

Gurkulvcs1 – Solaris 10 with VCS 5.0 + VxVM MP1

Gurkulvcs2 – Solaris 10 with VCS 5.0 + VxVM MP1

iSCSI SAN Server – Sun ZFS 7000 Appliance

Storage Multipath – DMP enabled for all the VxVM disks

Video to Explain the Split-Brain Condition (refresh the page if you don't see the video)

 
Configuring I/O Fencing for Veritas Cluster
 
 
Some points before you go to the actual configuration part:
In this post I have not covered the vxfenceDG disk group creation using the 3 shared disks (in this case they come from the iSCSI server); a short sketch of that step is shown right after the disk listing in Step 1 below. The disk group was in the deported state before I actually started the configuration below.
 
Step 1: Create the Veritas disk group (example: vxfenceDG) for VxFencing, using shared disks

root@gurkulvcs1#vxdisk -o alldgs list

DEVICE       TYPE            DISK         GROUP        STATUS
aluadisk0_0  auto:cdsdisk    -            (vxfenceDG)  online
aluadisk0_1  auto:cdsdisk    -            (vxfenceDG)  online
aluadisk0_2  auto:cdsdisk    -            (vxfenceDG)  online
aluadisk0_3  auto:cdsdisk    -            -            online invalid
aluadisk0_4  auto:cdsdisk    -            (nfsDG)      online
aluadisk0_5  auto:cdsdisk    -            -            online
c0t0d0s2     auto:none       -            -            online invalid
root@gurkulvcs1#
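
As noted above, the coordinator disk group was created and deported before this listing was taken. A minimal sketch of that creation step, assuming the three iSCSI disks aluadisk0_0 through aluadisk0_2 shown above, would look like this (the coordinator=on flag marks the group as a coordinator disk group so it cannot be used for regular data; the same commands, with generic disk names, also appear in my reply to Shiv in the comments below):

root@gurkulvcs1#vxdisksetup -i aluadisk0_0 format=cdsdisk
root@gurkulvcs1#vxdisksetup -i aluadisk0_1 format=cdsdisk
root@gurkulvcs1#vxdisksetup -i aluadisk0_2 format=cdsdisk
root@gurkulvcs1#vxdg -o coordinator=on init vxfenceDG aluadisk0_0
root@gurkulvcs1#vxdg -g vxfenceDG adddisk aluadisk0_1
root@gurkulvcs1#vxdg -g vxfenceDG adddisk aluadisk0_2
root@gurkulvcs1#vxdg deport vxfenceDG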

 
 
Step 2: Check that the shared disks to be used as fencing disks support SCSI-3 persistent reservations
 
Note: user inputs for the commands below are shown in italics.
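
The runs below test one disk at a time in the default mode, which destroys any data on the disk. If I remember correctly, vxfentsthdw also supports a -r option for a non-destructive check and a -g option to test all the disks of a disk group in one pass; please treat these flags as an assumption and confirm them against the manual page of your release, for example:

root@gurkulvcs1#./vxfentsthdw -r
root@gurkulvcs1#./vxfentsthdw -g vxfenceDG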
:::::: Testing for First Disk, i.e. aluadisk0_0 ::::::

root@gurkulvcs1#./vxfentsthdw

VERITAS vxfentsthdw version 5.0MP3 Solaris

The utility vxfentsthdw works on the two nodes of the cluster.
The utility verifies that the shared storage one intends to use is
configured to support I/O fencing.  It issues a series of vxfenadm
commands to setup SCSI-3 registrations on the disk, verifies the
registrations on the disk, and removes the registrations from the disk.

******** WARNING!!!!!!!! ********

THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to continue : [y/n] (default: n) y
The logfile generated for vxfensthdw is /var/VRTSvcs/log/vxfen/vxfentsthdw.log.8176

Enter the first node of the cluster:
gurkulvcs1
Enter the second node of the cluster:
gurkulvcs2

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs1 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_0

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs2 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_0
Password:
Password:

***************************************************************************

Testing gurkulvcs1 /dev/vx/rdmp/aluadisk0_0 gurkulvcs2 /dev/vx/rdmp/aluadisk0_0

Evaluate the disk before testing  …………………… No Pre-existing keys
Register keys on disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1 …. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1 ………… Passed
Read from disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs2 ………… Passed
Reserve disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1 …………. Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1 …………. Passed
Read from disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1 ………… Passed
Expect no writes for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2 .. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs2  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2  Passed
Write to disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1 ………… Passed
Write to disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs2 ………… Passed
Preempt and abort key KeyA using key KeyB on node gurkulvcs2 ……….. Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Preempt key KeyC using key KeyB on node gurkulvcs2 ………………… Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs2  Passed
Remove key KeyB on node gurkulvcs2 ………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed
Check to verify there are no keys from node gurkulvcs2 …………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_0 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_0 on node gurkulvcs1  Passed
Clear PGR on node gurkulvcs1 ……………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed

ALL tests on the disk /dev/vx/rdmp/aluadisk0_0 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.

ALL tests on the disk /dev/vx/rdmp/aluadisk0_0 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
Feb 17 19:42:00 gurkulvcs1 root: [ID 702911 user.alert] vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_0 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.
Feb 17 19:42:00 gurkulvcs1 root: [ID 702911 user.alert] vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_0 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
Password: Feb 17 19:41:50 gurkulvcs1 last message repeated 6 times

Password:
Password:

Removing test keys and temporary files, if any…

 
:::::: Testing for Second Disk, i.e. aluadisk0_1 ::::::

root@gurkulvcs1#./vxfentsthdw

VERITAS vxfentsthdw version 5.0MP3 Solaris

The utility vxfentsthdw works on the two nodes of the cluster.
The utility verifies that the shared storage one intends to use is
configured to support I/O fencing.  It issues a series of vxfenadm
commands to setup SCSI-3 registrations on the disk, verifies the
registrations on the disk, and removes the registrations from the disk.

******** WARNING!!!!!!!! ********

THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to continue : [y/n] (default: n) y
The logfile generated for vxfensthdw is /var/VRTSvcs/log/vxfen/vxfentsthdw.log.13000

Enter the first node of the cluster:
gurkulvcs1
Enter the second node of the cluster:
gurkulvcs2

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs1 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_1

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs2 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_1
heartbeat to GAB (10 seconds)
Password:
Password:

***************************************************************************

Testing gurkulvcs1 /dev/vx/rdmp/aluadisk0_1 gurkulvcs2 /dev/vx/rdmp/aluadisk0_1

Evaluate the disk before testing  …………………… No Pre-existing keys
Register keys on disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1 …. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1 ………… Passed
Read from disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs2 ………… Passed
Reserve disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1 …………. Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1 …………. Passed
Read from disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1 ………… Passed
Expect no writes for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2 .. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs2  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2  Passed
Write to disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1 ………… Passed
Write to disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs2 ………… Passed
Preempt and abort key KeyA using key KeyB on node gurkulvcs2 ……….. Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Preempt key KeyC using key KeyB on node gurkulvcs2 ………………… Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs2  Passed
Remove key KeyB on node gurkulvcs2 ………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed
Check to verify there are no keys from node gurkulvcs2 …………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_1 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_1 on node gurkulvcs1  Passed
Clear PGR on node gurkulvcs1 ……………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed

ALL tests on the disk /dev/vx/rdmp/aluadisk0_1 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.

ALL tests on the disk /dev/vx/rdmp/aluadisk0_1 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_1 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.
vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_1 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
Password:
Password:
Password:
Removing test keys and temporary files, if any…
root@gurkulvcs1#

 
:::::: Testing for Third Disk, i.e. aluadisk0_2 ::::::

root@gurkulvcs1#./vxfentsthdw

VERITAS vxfentsthdw version 5.0MP3 Solaris

The utility vxfentsthdw works on the two nodes of the cluster.
The utility verifies that the shared storage one intends to use is
configured to support I/O fencing.  It issues a series of vxfenadm
commands to setup SCSI-3 registrations on the disk, verifies the
registrations on the disk, and removes the registrations from the disk.

******** WARNING!!!!!!!! ********

THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!

Do you still want to continue : [y/n] (default: n) y
The logfile generated for vxfensthdw is /var/VRTSvcs/log/vxfen/vxfentsthdw.log.14997

Enter the first node of the cluster:
gurkulvcs1
Enter the second node of the cluster:
gurkulvcs2

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs1 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_2

Enter the disk name to be checked for SCSI-3 PGR on node gurkulvcs2 in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure its the same disk as seen by nodes gurkulvcs1 and gurkulvcs2
/dev/vx/rdmp/aluadisk0_2
Password:

***************************************************************************

Testing gurkulvcs1 /dev/vx/rdmp/aluadisk0_2 gurkulvcs2 /dev/vx/rdmp/aluadisk0_2

Evaluate the disk before testing  …………………… No Pre-existing keys
Register keys on disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1 …. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1 ………… Passed
Read from disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs2 ………… Passed
Reserve disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1 …………. Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Read from disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1 …………. Passed
Read from disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2 …………. Passed
Write to disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1 ………… Passed
Expect no writes for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2 .. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs2  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2  Passed
Write to disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1 ………… Passed
Write to disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs2 ………… Passed
Preempt and abort key KeyA using key KeyB on node gurkulvcs2 ……….. Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Preempt key KeyC using key KeyB on node gurkulvcs2 ………………… Passed
Test to see if I/O on node gurkulvcs1 terminated ………………….. Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Verify reservation for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs2  Passed
Remove key KeyB on node gurkulvcs2 ………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed
Check to verify there are no keys from node gurkulvcs2 …………….. Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/aluadisk0_2 from node gurkulvcs1  Passed
Verify registrations for disk /dev/vx/rdmp/aluadisk0_2 on node gurkulvcs1  Passed
Clear PGR on node gurkulvcs1 ……………………………………. Passed
Check to verify there are no keys from node gurkulvcs1 …………….. Passed

ALL tests on the disk /dev/vx/rdmp/aluadisk0_2 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.

ALL tests on the disk /dev/vx/rdmp/aluadisk0_2 have PASSED.
The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_2 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs1.
vxfentsthdw: ALL tests on the disk /dev/vx/rdmp/aluadisk0_2 have PASSED. The disk is now ready to be configured for I/O Fencing on node gurkulvcs2.
Password:
Password:
Password:
Removing test keys and temporary files, if any…
root@gurkulvcs1#

 
######################################################
 
Step 3: Configure I/O fencing on the DMP iSCSI disks (this step is a little different for EMC CX disks)
 
Step 3a: Create the /etc/vxfenmode file, on both nodes
# cp /etc/vxfen.d/vxfenmode_scsi3_dmp  /etc/vxfenmode
root@gurkulvcs2 # cat /etc/vxfenmode

#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3      - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled   - run the driver but don't do any actual fencing
#
vxfen_mode=scsi3

#
# scsi3_disk_policy determines the way in which I/O Fencing communicates with
# the coordination disks.
#
# available options:
# dmp - use dynamic multipathing
# raw - connect to disks using the native interface
#
scsi3_disk_policy=dmp

root@gurkulvcs2#

 
Step 3b: Create the /etc/vxfentab file listing the fencing disks, on both nodes (note: on most installs this file is regenerated automatically by the vxfen startup script from /etc/vxfendg and /etc/vxfenmode, so do not be surprised if manual edits get overwritten)
 

root@gurkulvcs2#cat /etc/vxfentab
/dev/vx/rdmp/aluadisk0_0s2
/dev/vx/rdmp/aluadisk0_1s2
/dev/vx/rdmp/aluadisk0_2s2
root@gurkulvcs2#

 
 
Step 3c: Create the /etc/vxfendg file containing the coordinator disk group name, on both nodes

root@gurkulvcs2#cat /etc/vxfendg
vxfenceDG
root@gurkulvcs2#
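
The name in /etc/vxfendg must match the deported coordinator disk group exactly. A quick cross-check against the VxVM view (this simply filters the same listing shown in Step 1):

root@gurkulvcs2#vxdisk -o alldgs list | grep vxfenceDG
aluadisk0_0  auto:cdsdisk    -            (vxfenceDG)  online
aluadisk0_1  auto:cdsdisk    -            (vxfenceDG)  online
aluadisk0_2  auto:cdsdisk    -            (vxfenceDG)  online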

 
 
Step 4: Start the vxfen service, on both nodes
 

root@gurkulvcs2#/etc/init.d/vxfen start
Starting vxfen..
Checking for /etc/vxfendg
Starting vxfen.. Done

 

root@gurkulvcs1#/etc/init.d/vxfen start
Starting vxfen..
Checking for /etc/vxfendg
Starting vxfen.. Done
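
One more thing the steps above do not show: for VCS itself to honour fencing, the UseFence cluster attribute is normally set to SCSI3 in main.cf (with HAD stopped on all nodes, then restarted). A minimal sketch of how the cluster stanza from my configuration would look with that attribute added:

cluster gurkulcluster (
        UserNames = { admin = gJKcJEjGKfKKiSKeJH }
        Administrators = { admin }
        UseFence = SCSI3
        )

Note that the reference main.cf at the end of this post does not include this attribute.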

 
Step 5: Verify that vxfen has registered its port (port b) with GAB, on both nodes (port a is GAB itself, port b is the fencing driver, and port h is HAD/VCS)
 

root@gurkulvcs2#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   31fc02 membership 01
Port b gen   31fc07 membership 01
Port h gen   31fc05 membership 01

 

root@gurkulvcs1#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   31fc02 membership 01
Port b gen   31fc07 membership 01
Port h gen   31fc05 membership 01
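
If port b does not show up on either node after starting vxfen, check the vxfen logs under /var/VRTSvcs/log/vxfen/ and confirm that the fencing kernel module actually loaded; on Solaris a quick check (the module name vxfen is what I see on my systems, so treat it as my assumption) is:

root@gurkulvcs1#modinfo | grep -i vxfen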

 

 
Step 6: Verify the I/O fencing cluster information
 
 

root@gurkulvcs2#vxfenadm -d

I/O Fencing Cluster Information:
================================

Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:

0 (gurkulvcs1)
* 1 (gurkulvcs2)

RFSM State Information:
node   0 in state  8 (running)
node   1 in state  8 (running)

root@gurkulvcs2#

 
 

root@gurkulvcs1#vxfenadm -d

I/O Fencing Cluster Information:
================================

Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:

* 0 (gurkulvcs1)
1 (gurkulvcs2)

RFSM State Information:
node   0 in state  8 (running)
node   1 in state  8 (running)

root@gurkulvcs1#
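
To see the actual registration keys that each node placed on the coordinator disks, vxfenadm can read the keys from the disks listed in /etc/vxfentab. The exact flag differs between releases (-g on the older 5.0 builds I am using here, -s on 5.1 and later), so treat the form below as an assumption and verify it against the man page for your version:

root@gurkulvcs1#vxfenadm -g all -f /etc/vxfentab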

 

Video to Explain the I/O Fencing Operation (refresh the page if you don't see the video)

 

 
 
 


Additional Reference : My NFS Cluster Configuration File:

 

root@gurkulvcs1#cat /etc/VRTSvcs/conf/config/main.cf

include "types.cf"

cluster gurkulcluster (
        UserNames = { admin = gJKcJEjGKfKKiSKeJH }
        Administrators = { admin }
        )

system gurkulvcs1 (
        )

system gurkulvcs2 (
        )

group hanfs (
        SystemList = { gurkulvcs1 = 1, gurkulvcs2 = 2 }
        AutoStartList = { gurkulvcs1 }
        )

        DiskGroup nfsDG (
                DiskGroup = nfsDG
                )

        IP nfsIP (
                Device = e1000g0
                Address = "192.168.1.99"
                NetMask = "255.255.255.0"
                )

        Mount nfsMNT (
                MountPoint = "/source"
                BlockDevice = "/dev/vx/dsk/nfsDG/nfsVOL"
                FSType = ufs
                MountOpt = rw
                FsckOpt = "-y"
                )

        NFS vcsNFS (
                Nservers = 24
                )

        NIC nfsNIC (
                Device = e1000g0
                NetworkType = ether
                )

        Share vcsSHARE (
                PathName = "/source"
                )

        Volume nfsVOL (
                Volume = nfsVOL
                DiskGroup = nfsDG
                )

        nfsIP requires nfsNIC
        nfsIP requires vcsSHARE
        nfsMNT requires nfsVOL
        nfsVOL requires nfsDG
        vcsSHARE requires nfsMNT
        vcsSHARE requires vcsNFS

        // resource dependency tree
        //
        //      group hanfs
        //      {
        //      IP nfsIP
        //          {
        //          NIC nfsNIC
        //          Share vcsSHARE
        //              {
        //              Mount nfsMNT
        //                  {
        //                  Volume nfsVOL
        //                      {
        //                      DiskGroup nfsDG
        //                      }
        //                  }
        //              NFS vcsNFS
        //              }
        //          }
        //      }

root@gurkulvcs1#

 
 
21 Responses

  1. sajeer rehman says:

    Thanks Ram for posting a clear cut doc about I/O fencing.I found this really useful.

  2. Ramdev says:

    Sajeer, Thanks for the feedback.

  3. Sahil says:

    Thanks Ram…. Still, I would like to understand troubleshooting of I/O fencing when keys get stuck on a coordinator disk.

    • Ramdev says:

      @Sahil, I am still not done with I/O fencing. I am working on more simulations for the troubleshooting part; I will post them soon.

  4. friv says:

    Interesting article about i/o fencing. I am still a newbie in the programming and I need to learn a lot of stuff, so i will bookmark your page to come back here. Thanks a lot!

  5. Srini says:

    Ram, Its really interesting and useful topics. I have been going through your posts / videos and trying to build two node veritas cluster on linux. I have ESX5.0 highend server and created two VMs and installed Linux 5.8 but need some help on creating shared disk. do i need to take RDMs or vmdk disk and any specific setting for shared storage configuration for VMs. Appreciate your help.

    • Ramdev says:

      Hi Srini, if you want to test complete cluster functionality I would recommend to go with openfiler for shared storage, because it gives you chance to explore the disk failure scenarios. Anyway, below are the steps ( extracted from old configuration guide) for your reference.

      Perform the following steps to configure a VCS cluster within a single ESX host with all the virtual machines turned off:
      1 Perform the following steps first on one node of the VCS cluster:
      ■ On vSphere Client, select Edit Settings for the first virtual machine in the VCS cluster.
      ■ Click Add, select Hard Disk, and click Next.
      ■ Select Create a new virtual disk and then click Next.
      ■ Select a Datastore and click Next.
      ■ Select New virtual device node [for example: SCSI (1:0)] and then click Next followed by Finish. Do not use SCSI 0.
      ■ After successfully adding the disk, open the Virtual Machine Properties
      and set the SCSI Bus Sharing to Virtual.
      2 Perform the following steps on each of the other nodes in the VCS cluster:
      ■ Select a virtual machine in the VCS cluster and select Edit Settings.
      ■ Click Add, select Hard Disk and then click Next.
      ■ Select Use an existing virtual disk and click Next.
      ■ Browse to the location of the disk configured for the first node in this procedure.
      ■ Select the same virtual device node [for example: SCSI (1:0)] set in the previous step for the first virtual machine and then click Next followed by Finish.
      ■ Set SCSI Bus Sharing to Virtual and click OK.
      Note: You will not be able to power on all the virtual machines using the shared disk if SCSI Bus Sharing is not set to Virtual.

  6. shiv says:

    Hi ram 

    can you provide me the full step to configure the  vxfendDG disk group creation using the 3 shared disks steps 

    • Ramdev says:

      Hi Shiv, here is the information about vxfendg creation

      initialize three disks with below syntax
      # vxdisksetup -i device_name format=cdsdisk

      create vxfendg with below syntax

      # vxdg -o coordinator=on init vxfendg disk1
      # vxdg -g vxfendg adddisk disk2
      # vxdg -g vxfendg adddisk disk3

      Once created, deport the disk group

      # vxdg deport vxfendg

      import it temporarily, so that the import does not persist across a reboot

      # vxdg -t import vxfendg

  7. Hi Ram,
    Thanks for sharing this article.
    Anyway, is it possible to build a 2-node VCS setup with 1 openfiler storage on VMware 9 or VirtualBox?

    Please respond ..

  8. Musab says:

    Hi Ram,
    can u pl help in VCS on linux… :>

  9. Musab says:

    Ram im layman in VCS, i want to develop it on LINUX, can u help me or guide me to any of ur uploads which i can use to work on it 
    Thanks

  10. Umesh Kumar says:

    Hi Ram,

    I also have a two node cluster setup on Virtual Machine. And I am using openfiler for shared storage.
    Could you please let me know how we can create co-ordinator disks and set the SCSI 3 PR on these iscsi luns.

    Thanks & Regards

    Umesh Kumar

  11. somasekhar says:

    Hi Ram,
    I have configured two node cluster on solaris VM. And i have installed Openfiler for shared storage. I am unable to get the menu in webpage. Please help.

    Thanks,
    Somasekhar.

  12. chandru says:

    Hi Ram, How to enable the heartbeat in VCS?

  13. Bala says:

    Nice Post…

  14. kalyan says:

    Hi ,
    I am using openfiler for shared storage to a 2-node cluster. How can I get fencing disks with SCSI3-PR enabled? Please guide how to create SCSI3-PR enabled disks for I/O fencing.

