In our previous post (RHCS: GFS2 Filesystem Configuration in Clustered Environment – Part 1) we discussed the introduction and initial configuration of GFS2. In this post we will go through the administration tasks related to GFS2 filesystems. It does not cover every admin task related to GFS2, but it does cover the major day-to-day operations for GFS2 in a Red Hat clustered environment.
Below are the tasks I will be discussing in this post:
1. GFS2 File System Expansion
2. GFS2 Journal Addition
3. Enabling and Disabling Data Journaling
4. Suspending and Resuming Write Activity on GFS2 Filesystems
5. Repairing a GFS2 Filesystem
Task 1: GFS2 File System Expansion

Step 1: Check the current filesystem size, LV size, and PV size
[root@gurkulclu-node1 test-gfs-fs-data]# df -h /clvgfs
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgCLUSTER-clvolume1
500M 259M 242M 52% /clvgfs
[root@gurkulclu-node1 test-gfs-fs-data]# vgs
VG #PV #LV #SN Attr VSize VFree
vgCLUSTER 1 1 0 wz--nc 780.00m 280.00m
vg_gurkulclunode1 1 2 0 wz--n- 7.51g 0
[root@gurkulclu-node1 test-gfs-fs-data]# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
clvolume1 vgCLUSTER -wi-ao--- 500.00m
lv_root vg_gurkulclunode1 -wi-ao--- 3.66g
lv_swap vg_gurkulclunode1 -wi-ao--- 3.84g
Step 2: Extend the LV by 200MB
[root@gurkulclu-node1 test-gfs-fs-data]# lvextend -L +200M /dev/mapper/vgCLUSTER-clvolume1
Extending logical volume clvolume1 to 700.00 MiB
Logical volume clvolume1 successfully resized
Step 3: Extend the GFS2 filesystem using the gfs2_grow command.
[root@gurkulclu-node1 test-gfs-fs-data]# gfs2_grow /clvgfs
Error: The device has grown by less than one Resource Group (RG).
The device grew by 200MB. One RG is 249MB for this file system.
gfs2_grow complete.
Note: Remember the point we discussed during the GFS2 filesystem creation: the command output showed Resource Groups = 2, which means the 500M volume was divided into two RGs of ~250M each. Whenever we want to grow the filesystem using gfs2_grow, we must extend the underlying volume by at least the size of one RG.
[root@gurkulclu-node1 test-gfs-fs-data]# df -h /clvgfs
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgCLUSTER-clvolume1
500M 259M 242M 52% /clvgfs
Since gfs2_grow failed, extend the LV by 50MB more, and then try gfs2_grow again.
[root@gurkulclu-node1 test-gfs-fs-data]# lvextend -L +50M /dev/mapper/vgCLUSTER-clvolume1
Rounding size to boundary between physical extents: 52.00 MiB
Extending logical volume clvolume1 to 752.00 MiB
Logical volume clvolume1 successfully resized
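Optionally, before committing the grow, gfs2_grow's test mode can be used: it performs all the calculations but writes nothing to disk, so it is a safe way to preview whether the added space covers at least one RG (the -T flag is the test option documented in man gfs2_grow; verify on your gfs2-utils release):

[root@gurkulclu-node1 test-gfs-fs-data]# gfs2_grow -T /clvgfs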
[root@gurkulclu-node1 test-gfs-fs-data]# gfs2_grow /clvgfs
FS: Mount Point: /clvgfs
FS: Device: /dev/dm-6
FS: Size: 127997 (0x1f3fd)
FS: RG size: 63988 (0xf9f4)
DEV: Size: 192512 (0x2f000)
The file system grew by 252MB.
gfs2_grow complete.
[root@gurkulclu-node1 test-gfs-fs-data]#
Now we can see that the filesystem extended to 750MB.
Step 4: On gurkulclu-node2, check the filesystem size.
[root@gurkulclu-node2 test-gfs-fs-data]# df -h /clvgfs
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgCLUSTER-clvolume1
750M 259M 492M 35% /clvgfs
[root@gurkulclu-node2 test-gfs-fs-data]#
We can confirm that the new filesystem size is visible on the second node as well.
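As an additional check, gfs2_tool can report the filesystem's statistics, including the resource group count, directly from the mount point. A quick sanity check (output varies by gfs2-utils version):

[root@gurkulclu-node1 ~]# gfs2_tool df /clvgfs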
Task 2: GFS2 Journal Addition

Step 1: On gurkulclu-node1, list the currently available journals
[root@gurkulclu-node1 test-gfs-fs-data]# gfs2_tool journals /clvgfs
journal1 - 128MB
journal0 - 128MB
2 journal(s) found.
Step 2: On gurkulclu-node1, add one more journal to the filesystem
[root@gurkulclu-node1 test-gfs-fs-data]# gfs2_jadd -j 1 /dev/mapper/vgCLUSTER-clvolume1
Filesystem: /clvgfs
Old Journals 2
New Journals 3
[root@gurkulclu-node1 test-gfs-fs-data]#
Step 3: On gurkulclu-node2, check that the new journal is visible
[root@gurkulclu-node2 test-gfs-fs-data]# gfs2_tool journals /clvgfs
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@gurkulclu-node2 test-gfs-fs-data]#
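Note: gfs2_jadd creates 128MB journals by default. If the volume is short on space, the -J option can request a different journal size in megabytes; a sketch (minimum sizes apply, so check man gfs2_jadd before relying on a specific value):

[root@gurkulclu-node1 ~]# gfs2_jadd -j 1 -J 64 /dev/mapper/vgCLUSTER-clvolume1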
Task 3: Enabling and Disabling Data Journaling

Ordinarily, GFS2 writes only metadata to its journal. File contents are subsequently written to disk by the kernel's periodic sync that flushes filesystem buffers. An fsync() call on a file causes the file's data to be written to disk immediately.
Advantages:
- Applications that rely on fsync() to sync file data may see improved performance by using data journaling.
- Data journaling can result in a reduced fsync() time for very small files, because the file data is written to the journal in addition to the metadata.
Limitations:
- The data journaling advantage rapidly diminishes as the file size increases. Writing to medium and larger files will be much slower with data journaling turned on.
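To see the trade-off in practice, you can time small synchronous writes into the directory with and without the +j attribute set. A rough, illustrative test using dd with conv=fsync (dd calls fsync() before exiting; the "small" filename is just an example, and absolute numbers depend heavily on the storage):

[root@gurkulclu-node1 clvgfs]# time dd if=/dev/zero of=test-gfs-fs-data/small bs=4k count=1 conv=fsync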
How to Enable Data Journaling?
[root@gurkulclu-node1 clvgfs]# chattr +j test-gfs-fs-data
[root@gurkulclu-node1 clvgfs]# lsattr
---------j----- ./test-gfs-fs-data
[root@gurkulclu-node1 clvgfs]# lsattr test-gfs-fs-data
--------------- test-gfs-fs-data/c
--------------- test-gfs-fs-data/b
Once data journaling is enabled on a directory, all new files created within it will have data journaling enabled automatically.
[root@gurkulclu-node1 clvgfs]# touch test-gfs-fs-data/d
[root@gurkulclu-node1 clvgfs]# lsattr test-gfs-fs-data
--------------- test-gfs-fs-data/c
--------------- test-gfs-fs-data/b
---------j----- test-gfs-fs-data/d
How to Disable Data Journaling?
[root@gurkulclu-node1 clvgfs]# chattr -j test-gfs-fs-data/d
[root@gurkulclu-node1 clvgfs]# lsattr test-gfs-fs-data
--------------- test-gfs-fs-data/c
--------------- test-gfs-fs-data/b
--------------- test-gfs-fs-data/d
[root@gurkulclu-node1 clvgfs]#
Task 4: Suspending and Resuming Write Activity on GFS2 Filesystems

We can suspend write activity to a file system by using the dmsetup suspend command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The dmsetup resume command ends the suspension.
Step 1: Suspend the writing activity in the filesystem
[root@gurkulclu-node1 test-gfs-fs-data]# dmsetup suspend /dev/mapper/vgCLUSTER-clvolume1
Step 2: Check the reading activity
[root@gurkulclu-node1 test-gfs-fs-data]# ls -l
total 24
-rw-r--r--. 1 root root 112 Sep 21 15:14 b
-rw-r--r--. 1 root root 0 Sep 21 15:07 c
-rw-r--r--. 1 root root 0 Sep 21 17:30 d
[root@gurkulclu-node1 test-gfs-fs-data]#
[root@gurkulclu-node1 test-gfs-fs-data]# cd ..
[root@gurkulclu-node1 clvgfs]# ls -l
total 16
-rw-r--r--. 1 root root 0 Sep 21 17:29 a
drwxr-xr-x. 2 root root 3864 Sep 21 17:19 test-gfs-fs-data
Step 3: Try to delete a file; the session hangs because writes are suspended
[root@gurkulclu-node1 clvgfs]# rm a
rm: remove regular empty file `a'? y
^C^C^C^C^C^C^C
^C^C^C^C
^C^C^C
Step 4: Resume write activity on the filesystem
[root@gurkulclu-node1 test-gfs-fs-data]# dmsetup resume /dev/mapper/vgCLUSTER-clvolume1
After resuming, the previously hung "rm a" command completed.
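Putting the two commands together, a hardware-snapshot backup workflow might look like the sketch below. The take-array-snapshot step is purely a placeholder for whatever your storage vendor provides; only the dmsetup suspend/resume commands are real:

[root@gurkulclu-node1 ~]# dmsetup suspend /dev/mapper/vgCLUSTER-clvolume1
[root@gurkulclu-node1 ~]# take-array-snapshot vgCLUSTER-clvolume1   # placeholder: vendor-specific snapshot command
[root@gurkulclu-node1 ~]# dmsetup resume /dev/mapper/vgCLUSTER-clvolume1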
Task 5: Repairing a GFS2 Filesystem

A few points to remember:
1. Do not enable fsck at boot time in the /etc/fstab options (leave the sixth field, fs_passno, set to 0).
2. The fsck.gfs2 command must be run only on a file system that is unmounted from all nodes.
3. Pressing Ctrl+C while running the fsck.gfs2 interrupts processing and displays a prompt asking whether you would like to abort the command, skip the rest of the current pass, or continue processing.
4. You can increase the level of verbosity by using the -v flag. Adding a second -v flag increases the level again.
5. You can decrease the level of verbosity by using the -q flag. Adding a second -q flag decreases the level again.
6. The -n option opens the file system read-only and automatically answers no to every query. It provides a way to dry-run fsck.gfs2 to reveal errors without letting the command make any changes, as shown in the example below.
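Before repairing, make sure the filesystem is unmounted from every node, and consider a read-only dry run first. For example, against the same clustered LV used throughout this post:

[root@gurkulclu-node1 ~]# umount /clvgfs
[root@gurkulclu-node2 ~]# umount /clvgfs
[root@gurkulclu-node1 ~]# fsck.gfs2 -n /dev/mapper/vgCLUSTER-clvolume1

Then run the actual repair with -y, as shown below: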
[root@gurkulclu-node1 ~]# fsck.gfs2 -y /dev/mapper/vgCLUSTER-clvolume1
Initializing fsck
Validating Resource Group index.
Level 1 rgrp check: Checking if all rgrp and rindex values are good.
(level 1 passed)
Starting pass1
Pass1 complete
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete
Starting pass3
Pass3 complete
Starting pass4
Pass4 complete
Starting pass5
Block 191984 (0x2edf0) bitmap says 0 (Free) but FSCK saw 1 (Data)
Metadata type is 1 (data)
Fixed.
RG #127997 (0x1f3fd) free count inconsistent: is 63984 should be 63983
Resource group counts updated
Pass5 complete
The statfs file is wrong:
Current statfs values:
blocks: 191956 (0x2edd4)
free: 92656 (0x169f0)
dinodes: 24 (0x18)
Calculated statfs values:
blocks: 191956 (0x2edd4)
free: 92655 (0x169ef)
dinodes: 24 (0x18)
The statfs file was fixed.
Writing changes to disk
gfs2_fsck complete
[root@gurkulclu-node1 ~]#
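Once fsck.gfs2 completes cleanly, the filesystem can be mounted again on the cluster nodes. Assuming the mounts are defined in /etc/fstab (if GFS2 is managed as a cluster resource, re-enable the service instead):

[root@gurkulclu-node1 ~]# mount /clvgfs
[root@gurkulclu-node2 ~]# mount /clvgfs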