Linux Admin Reference – Understand Device Mapper and DM-Multipath – Red Hat Enterprise Linux

Device-mapper is an important component of the Linux 2.6 kernel. It is used by many critical storage-related applications, such as LVM2, the Linux native multipath tool device-mapper-multipath, and device-mapper software RAID. A solid understanding of device-mapper helps system administrators investigate many kinds of issues with these applications.

In the Linux kernel, device-mapper is a generic framework for mapping one block device onto another. It is not just for LVM2. Through device-mapper, the kernel provides general services to dm-multipath, LVM2, EVMS, device-mapper software RAID, and dm-crypt disk encryption, and offers additional features such as file system snapshots.

In the device-mapper architecture there are three important concepts to be familiar with:

1. Mapped Device

2. Mapping Table

3. Target Device

Mapped Device

A mapped device is a logical device provided by the device-mapper driver, and it is the interface that applications operate on. The logical volumes in LVM2 and the pseudo disks in dm-multipath are good examples of mapped devices.
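The mapped devices present on a system can be listed with the dmsetup utility. The output below is only illustrative, assuming a host with one LVM2 logical volume and one multipath device; names and minor numbers will differ on your system.

# dmsetup ls
VolGroup00-LogVol00     (253, 0)
mpath0                  (253, 1)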

Mapping Table

A mapping table represents a mapping from a mapped device to target devices. A table can be seen as a group of parameters that include the mapped device’s starting address and length, and the target device’s physical device, starting address, and length. The unit used here is the sector (512 bytes). The command “dmsetup create” builds such a table from the parameters provided.
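As a minimal sketch of such a table (the device name, sizes, and resulting major:minor numbers are only examples), the following maps the first 1024 sectors of /dev/sdb onto a new mapped device called “example” using the linear target, and then prints the resulting table:

# echo "0 1024 linear /dev/sdb 0" | dmsetup create example
# dmsetup table example
0 1024 linear 8:16 0

The four fields are the mapped device’s starting sector, its length in sectors, the target type, and the target-specific arguments (here the backing physical device and its starting sector).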

Target Device

A target device is a modularized plugin; device-mapper filters and redirects I/O requests through it. Some good examples of target devices are listed below, and the example after the list shows how to query the targets registered with the running kernel:

  • linear for LVM2
  • mirror for RAID
  • striped for LVM2
  • snapshot for LVM2
  • multipath for dm-multipath
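The target types registered with the running kernel can be queried with dmsetup. The list and version numbers below are only illustrative; the exact set depends on which device-mapper modules are loaded.

# dmsetup targets
multipath        v1.0.5
snapshot         v1.0.6
mirror           v1.0.3
striped          v1.0.2
linear           v1.0.1
error            v1.0.1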

How applications interact with device-mapper

Device-mapper works by processing I/O passed in through a virtual block device that it itself provides, and then passing it on to another block device.

Applications (such as dm-multipath and LVM2) that want to create new mapped devices talk to device-mapper via the libdevmapper.so shared library, which in turn issues ioctls to the /dev/mapper/control device node.
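For instance, the control node is a character device under the misc major number (the minor number varies by release), and dmsetup, which links against libdevmapper, can report both the library version and the in-kernel driver version through this ioctl interface. The version numbers shown here are only examples:

# ls -l /dev/mapper/control
crw------- 1 root root 10, 63 May  4 09:12 /dev/mapper/control
# dmsetup version
Library version:   1.02.39 (2008-06-27)
Driver version:    4.11.5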

What is DM-Multipath

Device-Mapper Multipath (DM-Multipath) is a Linux native multipath tool, which allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths.
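A quick way to see a multipath device and the paths aggregated underneath it is “multipath -ll”. The output below is only illustrative (the WWID, array model, path names, and priorities are made up, and the exact format varies between releases):

# multipath -ll
mpath0 (360a98000486e542d674a6c4831524f59) dm-1 NETAPP,LUN
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:1 sdb 8:16  [active][ready]
 \_ 4:0:0:1 sdc 8:32  [active][ready]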

Create partitions and a file system on DM-Multipath devices

The DM-Multipath devices are created as /dev/mapper/mpathN, where N is the multipath group number. Use the fdisk command to create partitions on /dev/mapper/mpathN:

# fdisk /dev/mapper/mpath0

Command (m for help): n   
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1017, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1017, default 1017):
Using default value 1017

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

Important notes on using DM-Multipath devices in your storage configurations:

The DM-Multipath tool uses three different sets of file names:

NEVER use /dev/dm-N devices, as they are only intended to be used by the DM-Multipath tool.

NEVER use /dev/mpath/mpathN devices, because when multipath devices are mounted at boot time, the udev subsystem may not create the device nodes soon enough.

ALWAYS use /dev/mapper/mpathN devices, as they are persistent and are created automatically by device-mapper early in the boot process. These are therefore the device names that should be used to access multipathed devices. In a RAC (Oracle Real Application Clusters) configuration, however, although the /dev/mapper/mpathN names may be persistent across reboots on a single machine, there is no guarantee that other cluster nodes will use the same name for a given disk. If cluster-wide persistent names are needed, use the udev facility to create them, as sketched below.
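One hedged sketch of such a rule is shown below. It assumes the distribution’s device-mapper udev rules export the DM_UUID property for dm devices, and the WWID used here is a made-up example; with this rule in place, every node that sees the same LUN gets the same /dev/oracle/disk1 symlink, regardless of its local mpathN number.

# cat /etc/udev/rules.d/99-oracle-disks.rules
KERNEL=="dm-*", ENV{DM_UUID}=="mpath-360a98000486e542d674a6c4831524f59", SYMLINK+="oracle/disk1"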

Register multipath partitions in /dev/mapper:

# kpartx -a /dev/mapper/mpath0

List all partitions on this device:

# kpartx -l /dev/mapper/mpath0
mpath0p1 : 0 2295308 /dev/mapper/mpath0 61

Create a file system on partitions:

# mkfs -t ext3 /dev/mapper/mpath0p1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
143712 inodes, 286913 blocks
14345 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=297795584
9 block groups
32768 blocks per group, 32768 fragments per group
15968 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount the partition on the mount point:

# mkdir /datafile
# mount /dev/mapper/mpath0p1 /datafile
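To make the mount persistent across reboots, an /etc/fstab entry can reference the same /dev/mapper name (a minimal sketch; the mount options should match your environment):

# grep datafile /etc/fstab
/dev/mapper/mpath0p1    /datafile               ext3    defaults        1 2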

 

Additional References –

Dynamic addition/removal of SAN storage in Red Hat Linux

Linux Admin Reference – Device Mapper Multipath (DM-Multipath) Configuration – Red Hat Enterprise Linux 6

Ramdev

I started unixadminschool.com (aka gurkulindia.com) in 2009 as my own personal reference blog, and later realized that my learnings might be helpful for other Unix admins if I managed my knowledge base in a more user-friendly format. The result is today's unixadminschool.com. You can connect with me at https://www.linkedin.com/in/unixadminschool/

Responses

  1. pvprasanna says:

    I want to join with you.

  2. Jerri says:

    I am really delighted to read this webpage's posts, which contain plenty of helpful data; thanks for providing this kind of information.

    • Ramdev says:

      Hi Jerri, Thanks for dropping the comment

      • Debu says:

        Could you please explain which RAID is better, software or hardware, and whether companies in real environments use software RAID or hardware RAID, with a real-time example (the disk replacement procedure in Solaris and Linux)? And if RAID is used, what is the need for hot spare pools?
        I would really appreciate it, as I have a lot of confusion on these topics.

  3. Ramdev says:

    Hi Debu, please check this link for a quick reference on RAID – http://unixadminschool.com/blog/2013/08/quick-reference-raid-redundant-array-of-independent-disks/ . Hardware RAID is simple to maintain but carries additional hardware cost; most Linux servers running on HP hardware normally use hardware RAID. Software RAID, on the other hand, doesn't require any additional hardware and is managed completely by software-based volume managers (LVM in Linux, SVM in Solaris, VxVM, etc.).

    Below is the link that explains the disk replacement procedure in Solaris (configured with software RAID using SVM) – http://unixadminschool.com/blog/2011/08/step-by-step-procedure-for-replacement-of-a-failed-disk-in-solaris-svm-in-solaris-volume-manager/

    Below is the link to configure software RAID in Oracle Linux – http://unixadminschool.com/blog/2011/05/oracle-enterprise-linux-configure-raid/

    For hardware RAID, the setup is done entirely at the system BIOS level.

  4. Debu says:

    Thanks, good work.
    Please post some more real-time issues on Linux.

  5. duraiarun says:

    hi Ram,

    How do you maintain the same device tree on two cluster nodes in RHEL 6.4?

  6. Olivier says:

    Dude, thanks for the info, especially ALWAYS use /dev/mapper/mpathN devices !

  7. Gerald J says:

    Hey Ramdev,

    Great article.

    I have a system using multipath. I note that the logical drives each get a /dev/dm-* device node, and I can see how they are related using the command:

    multipath -l

    But then each partition on each logical drive also gets a /dev/dm-*. To see which partitions are associated with which /dev/dm-* device nodes, you can use ls -l /dev/mpath and follow the symlinks.

    So if you get an error message referring to /dev/dm-xx with no indication of which partition that is, you will be able to figure it out.

    Thanks,

    Gerald J.

  8. TM Sundaram says:

    Ram,
    Very useful article. It contains valuable information.

