Solaris host level SAN migration from Clariion to VMAX – Hands on Lab

Over the past two to three years, many organisations have been migrating their server storage from old legacy SAN arrays (e.g. EMC Clariion) to newer, more powerful storage (e.g. EMC Symmetrix VMAX), driven by the poor performance and high maintenance costs of the legacy arrays. Depending on the budget allocated for the storage migration project, some organisations prefer direct storage-level data replication using expensive migration tools, while others with a limited budget prefer to do the migration by host-level data replication. In the latter method, the success of the migration project depends directly on the skill and expertise of the Unix administrator implementing it. I believe this hands-on post will give some ideas to the Solaris admins who are responsible for storage migrations. Before going through this post you might want to refer to this post for the pre-planning tasks.

 

 

 

 

The lab setup used for this post is shown in the diagram above. In simple terms, we have a Solaris server (running a Sybase database) with four file systems and some Sybase raw devices configured on Clariion storage disks. As part of the storage migration we want to copy the data from the old storage (Clariion disks) to the new storage (VMAX disks) so that the Sybase DBA team can start their database from the new SAN storage.

Note: In this setup we have only SVM volumes; no VxVM/VCS is installed.

metadevice d25 -- mounted on --> /localfs/app/SYBDB
metadevice d27 -- mounted on --> /exportdata/tempdb1
metadevice d28 -- mounted on --> /localfs/SYBDB_dumps_1
metadevice d29 -- mounted on --> /exportdata/tempdb

gurkulSolaris:root# df -kh | egrep -i "local|export"

Filesystem size used avail capacity Mounted on 
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1 
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb 
/dev/md/dsk/d28 33G 1.4G 32G 5% /localfs/SYBDB_dumps_1 
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB

 

Initial Disk Configuration, Before The Migration

Please note that there are two device paths for each disk presented from the SAN storage, as shown in the diagram. The EMC PowerPath pseudo devices (emcpower0a through emcpower6a) indicate the actual number of physical LUNs connected to the machine.

gurkulSolaris:root# echo|format 

Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
5. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
6. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
7. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
8. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
9. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
10. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
11. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
12. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
13. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
14. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
15. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
16. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
17. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
18. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
19. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
20. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
21. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
22. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
23. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
24. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
Specify disk (enter its number): Specify disk (enter its number):
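Since every LUN appears once per path plus once as a PowerPath pseudo device, a quick way to confirm the physical LUN count is to count the pseudo-device entries in the format output; a minimal sketch:

gurkulSolaris:root# echo|format | grep -c "/pseudo/emcp"

Here that returns 7, matching emcpower0 through emcpower6.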

Powerpath Configuration Before Migration

gurkulSolaris:root# powermt display dev=all

Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d0s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d0s0 SP B1 active alive 0 0

Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d1s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d1s0 SP B1 active alive 0 0

Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d2s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d2s0 SP B1 active alive 0 0

Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d3s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d3s0 SP B1 active alive 0 0

Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d4s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d4s0 SP B1 active alive 0 0

Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d5s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d5s0 SP B1 active alive 0 0

Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-rg22-lun85-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d6s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d6s0 SP B1 active alive 0 0

Current Mount Information

gurkulSolaris:root# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d3 /dev/md/rdsk/d3 /var ufs 1 no -
/dev/md/dsk/d6 /dev/md/rdsk/d6 /localfs ufs 2 yes -
/dev/md/dsk/d27 /dev/md/rdsk/d27 /exportdata/tempdb1 ufs 2 yes -
/dev/md/dsk/d29 /dev/md/rdsk/d29 /exportdata/tempdb ufs 2 yes logging

 

Disk Controller configuration Before Migration

gurkulSolaris:root# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 scsi-bus connected configured unknown
c1::dsk/c1t0d0 disk connected configured unknown
c1::dsk/c1t1d0 disk connected configured unknown
c1::dsk/c1t2d0 disk connected configured unknown
c1::dsk/c1t3d0 disk connected configured unknown
c2 scsi-bus connected unconfigured unknown
c3 fc-fabric connected configured unknown
c3::500009720829b920 disk connected configured unknown
c3::5006016130601837 disk connected configured unknown
c3::5006016930601837 disk connected configured unknown

 

Before starting the actual migration, create a working directory (here /var/tmp/CX700-VMAX-Migration) and take a backup of the current configuration into it:

gurkulSolaris:root# cd /var/tmp/CX700-VMAX-Migration/ 
gurkulSolaris:root# echo|format > format.out 
gurkulSolaris:root# powermt display dev=all > powermt.out 
gurkulSolaris:root# df -kl > df-kl.out 
gurkulSolaris:root# cp /etc/vfstab vfstab.old
gurkulSolaris:root# powermt display dev=all|egrep "Pseudo|CLARiiON|Logical" > Old-cx-lun-info
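It is also worth saving the SVM layout and the partition table of each source LUN in the same directory, so the original layout can be recreated if a rollback is ever needed; a minimal sketch (the output file names are just suggestions, and slice c is assumed to be the usual whole-disk backup slice):

gurkulSolaris:root# metastat -p > metastat-p.out
gurkulSolaris:root# metadb -i > metadb-i.out
gurkulSolaris:root# for d in 0 1 2 3 4 5 6; do prtvtoc /dev/rdsk/emcpower${d}c > vtoc-emcpower${d}.out; done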

 

The server currently has four file systems, as listed below:

metadevice d25 -- mounted on --> /localfs/app/SYBDB
metadevice d27 -- mounted on --> /exportdata/tempdb1
metadevice d28 -- mounted on --> /localfs/SYBDB_dumps_1
metadevice d29 -- mounted on --> /exportdata/tempdb

gurkulSolaris:root# df -kh | egrep -i "local|export"

Filesystem size used avail capacity Mounted on 
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1 
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb 
/dev/md/dsk/d28 33G 1.4G 32G 5% /localfs/SYBDB_dumps_1 
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB

gurkulSolaris:root# metastat -p 
d25 -m d15 1 
d15 1 1 /dev/dsk/emcpower3f 
d27 -m d17 1 
d17 1 1 /dev/dsk/emcpower3e 
d28 -m d18 d38 1 
d18 1 1 c1t2d0s0 
d38 1 1 c1t3d0s0 
d29 -m d19 1 
d19 1 1 /dev/dsk/emcpower6a

As part of this storage migration we want to move the above four file systems from the Clariion storage to the VMAX storage. For that purpose we will create new file systems on the VMAX storage and mount them on temporary mount points to facilitate the data copy from the old storage to the new storage. So we create new mount points as below:


gurkulSolaris:root# mkdir /exportdata/tempdb1-new
gurkulSolaris:root# mkdir /exportdata/tempdb-new
gurkulSolaris:root# mkdir /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mkdir /localfs/app/SYBDB-new

 

Storage Migration Procedure

 

Step 1: The SAN team performs the necessary tasks, such as zoning and assigning the new LUNs to the host, before we detect the new SAN devices from the Solaris side.

Step 2: Detecting the new storage from Solaris

Current system information, just for reference:

# uname -a
SunOS gurkulSolaris.UNX.unixadminschool.com 5.10 Generic_137111-07 sun4u sparc SUNW,Sun-Fire-V440 

Detecting the new storage, without a reboot (the new VMAX disk references appear in the output below alongside the existing Clariion devices):

gurkulSolaris:root# devfsadm
gurkulSolaris:root# echo|format
Searching for disks…done
c3t500009720829B920d0: configured with capacity of 20.00GB
c3t500009720829B920d1: configured with capacity of 20.00GB
c3t500009720829B920d2: configured with capacity of 36.00GB
c3t500009720829B920d3: configured with capacity of 36.00GB
c3t500009720829B920d4: configured with capacity of 117.00GB
c3t500009720829B920d5: configured with capacity of 65.00GB
c3t500009720829B920d6: configured with capacity of 25.00GB
c3t500009720829B920d7: configured with capacity of 5.00GB

AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t500009720829B920d0 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,0
5. c3t500009720829B920d1 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,1
6. c3t500009720829B920d2 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,2
7. c3t500009720829B920d3 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,3
8. c3t500009720829B920d4 <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,4
9. c3t500009720829B920d5 <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,5
10. c3t500009720829B920d6 <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,6
11. c3t500009720829B920d7 <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,7
12. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
13. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
14. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
15. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
16. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
17. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
18. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
19. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
20. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
21. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
22. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
23. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
24. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
25. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
26. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
27. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
28. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
29. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
30. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
31. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
32. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
Specify disk (enter its number): Specify disk (enter its number):
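If the new Symmetrix LUNs had not appeared after devfsadm, a forced configure of the fibre-channel fabric controller (c3 in the earlier cfgadm output) would normally bring them in; a hedged example:

gurkulSolaris:root# cfgadm -c configure c3
gurkulSolaris:root# devfsadm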

Please note that the new VMAX devices currently show up only as native Solaris disks; the PowerPath pseudo devices for them are not yet configured. We will configure them using the PowerPath commands in the next section.

Check the PowerPath information for the new disks. Note that the new disks can be identified by their Symmetrix ID and logical device ID (i.e. LUN number), and each device has two device paths.

gurkulSolaris:root# powermt display dev=all
Symmetrix ID=SYM001010101
Logical device ID=0FAF
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d0s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd0s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB0
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d1s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd1s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB1
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d2s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd2s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB2
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d3s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd3s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB3
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d4s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd4s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB4
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d5s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd5s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB5
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d6s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd6s0 FA 8eB active alive 0 0

Symmetrix ID=SYM001010101
Logical device ID=0FB6
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d7s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd7s0 FA 8eB active alive 0 0

Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d1s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d1s0 SP B1 active alive 0 0

Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d4s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d4s0 SP B1 active alive 0 0

Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d2s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d2s0 SP B1 active alive 0 0

Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d6s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d6s0 SP B1 active alive 0 0

Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d5s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d5s0 SP B1 active alive 0 0

Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d3s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d3s0 SP B1 active alive 0 0

Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d0s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d0s0 SP B1 active alive 0 0

Quick reference output for the new LUN devices:

gurkulSolaris:root# powermt display dev=all|egrep "Pseudo|CLARiiON|Logical"
Logical device ID=0FAF
Logical device ID=0FB0
Logical device ID=0FB1
Logical device ID=0FB2
Logical device ID=0FB3
Logical device ID=0FB4
Logical device ID=0FB5
Logical device ID=0FB6
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]

Recreate the device paths for the new devices and save the PowerPath configuration using the commands below (for more information on PowerPath, refer to the post EMC Power Path commands for system administrators).

gurkulSolaris:root# powercf -q
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101faf
—————————————
adding emcpower14
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb0
—————————————
adding emcpower13
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb1
—————————————
adding emcpower12
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb2
—————————————
adding emcpower11
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb3
—————————————
adding emcpower10
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb4
—————————————
adding emcpower9
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb5
—————————————
adding emcpower8
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb6
—————————————
adding emcpower7

gurkulSolaris:root# powermt config
gurkulSolaris:root# powermt save
gurkulSolaris:root# powermt display dev=all|egrep "Pseudo|CLARiiON|Logical"
Pseudo name=emcpower14a
Logical device ID=0FAF
Pseudo name=emcpower13a
Logical device ID=0FB0
Pseudo name=emcpower12a
Logical device ID=0FB1
Pseudo name=emcpower11a
Logical device ID=0FB2
Pseudo name=emcpower10a
Logical device ID=0FB3
Pseudo name=emcpower9a
Logical device ID=0FB4
Pseudo name=emcpower8a
Logical device ID=0FB5
Pseudo name=emcpower7a
Logical device ID=0FB6
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
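Before using the new pseudo devices, it is also worth confirming that each one has both of its FA paths alive; for example, for the first new device:

gurkulSolaris:root# powermt display dev=emcpower7a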

Now verify that the format command shows PowerPath pseudo devices for the VMAX storage:

gurkulSolaris:root# echo|format
Searching for disks…done
c3t500009720829B920d0: configured with capacity of 20.00GB
c3t500009720829B920d1: configured with capacity of 20.00GB
c3t500009720829B920d2: configured with capacity of 36.00GB
c3t500009720829B920d3: configured with capacity of 36.00GB
c3t500009720829B920d4: configured with capacity of 117.00GB
c3t500009720829B920d5: configured with capacity of 65.00GB
c3t500009720829B920d6: configured with capacity of 25.00GB
c3t500009720829B920d7: configured with capacity of 5.00GB
emcpower7a: configured with capacity of 5.00GB
emcpower8a: configured with capacity of 25.00GB
emcpower9a: configured with capacity of 65.00GB
emcpower10a: configured with capacity of 117.00GB
emcpower11a: configured with capacity of 36.00GB
emcpower12a: configured with capacity of 36.00GB
emcpower13a: configured with capacity of 20.00GB
emcpower14a: configured with capacity of 20.00GB

AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t500009720829B920d0 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,0
5. c3t500009720829B920d1 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,1
6. c3t500009720829B920d2 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,2
7. c3t500009720829B920d3 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,3
8. c3t500009720829B920d4 <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,4
9. c3t500009720829B920d5 <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,5
10. c3t500009720829B920d6 <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,6
11. c3t500009720829B920d7 <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,7
12. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
13. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
14. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
15. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
16. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
17. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
18. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
19. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
20. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
21. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
22. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
23. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
24. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
25. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
26. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
27. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
28. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
29. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
30. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
31. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
32. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
33. emcpower7a <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pseudo/emcp@7
34. emcpower8a <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pseudo/emcp@8
35. emcpower9a <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pseudo/emcp@9
36. emcpower10a <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pseudo/emcp@10
37. emcpower11a <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pseudo/emcp@11
38. emcpower12a <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pseudo/emcp@12
39. emcpower13a <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pseudo/emcp@13
40. emcpower14a <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pseudo/emcp@14
Specify disk (enter its number): Specify disk (enter its number):

Label all the new devices using the format command. If a new disk has its space spread across multiple slices, repartition it so that a single slice provides the required space. An example labelling session for one device is shown below; a loop sketch for the remaining devices follows it.

gurkulSolaris:root# format emcpower7a
emcpower7a: configured with capacity of 5.00GB
selecting emcpower7a
[disk formatted]

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
!<cmd> – execute <cmd>, then return
quit
format> label
Ready to label disk, continue? y
format> q
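The same label/confirm/quit sequence can be fed to format for all eight new devices through a here-document. This is only a sketch, assuming the default partition table is acceptable; labelling each disk interactively as above works just as well.

for d in 7 8 9 10 11 12 13 14
do
format emcpower${d}a <<EOF
label
y
quit
EOF
done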

Create file systems on all the new disks. An example for one device is shown below; the remaining devices are handled the same way (see the commands after the newfs output).

gurkulSolaris:root# newfs /dev/dsk/emcpower8a
newfs: /dev/rdsk/emcpower8a last mounted as /exportdata/tempdb-new
newfs: construct a new file system /dev/rdsk/emcpower8a: (y/n)? y
Warning: 1152 sector(s) in last cylinder unallocated
/dev/rdsk/emcpower8a: 52425600 sectors in 8533 cylinders of 48 tracks, 128 sectors
25598.4MB in 534 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
……….
super-block backups for last 10 cylinder groups at:
51512864, 51611296, 51709728, 51808160, 51906592, 52005024, 52103456,
52201888, 52300320, 52398752
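The other three new devices used in this lab get file systems in the same way; a minimal sketch, where the leading echo simply answers the confirmation prompt:

gurkulSolaris:root# echo y | newfs /dev/dsk/emcpower14a
gurkulSolaris:root# echo y | newfs /dev/dsk/emcpower11a
gurkulSolaris:root# echo y | newfs /dev/dsk/emcpower7a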

Mount the new file systems on the mount points we created earlier, and check that the sizes match expectations:

gurkulSolaris:root# mount /dev/dsk/emcpower8a /exportdata/tempdb-new
gurkulSolaris:root# mount /dev/dsk/emcpower14a /exportdata/tempdb1-new
gurkulSolaris:root# mount /dev/dsk/emcpower11a /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mount /dev/dsk/emcpower7a /localfs/app/SYBDB-new

gurkulSolaris:root# df -h | egrep -i "localfs|export"
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb
/dev/md/dsk/d28 33G 1.1G 32G 4% /localfs/SYBDB_dumps_1
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB
/dev/dsk/emcpower8a 25G 25M 24G 1% /exportdata/tempdb-new
/dev/dsk/emcpower14a 20G 20M 19G 1% /exportdata/tempdb1-new
/dev/dsk/emcpower7a 4.9G 5.0M 4.9G 1% /localfs/app/SYBDB-new
/dev/dsk/emcpower11a 35G 36M 35G 1% /localfs/SYBDB_dumps_1-new

Create new metadevices on the new VMAX disks:

gurkulSolaris:root# metainit d51 1 1 /dev/dsk/emcpower8a
d51: Concat/Stripe is setup
gurkulSolaris:root# metainit d61 1 1 /dev/dsk/emcpower14a
d61: Concat/Stripe is setup
gurkulSolaris:root# metainit d71 1 1 /dev/dsk/emcpower7a
d71: Concat/Stripe is setup
gurkulSolaris:root# metainit d81 1 1 /dev/dsk/emcpower11a
d81: Concat/Stripe is setup

Create one-way mirrors from the above metadevices:

gurkulSolaris:root# metainit d50 -m d51
d50: Mirror is setup
gurkulSolaris:root# metainit d60 -m d61
d60: Mirror is setup
gurkulSolaris:root# metainit d70 -m d71
d70: Mirror is setup
gurkulSolaris:root# metainit d80 -m d81
d80: Mirror is setup
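A quick check that the new metadevices came up in the expected one-way mirror layout before any data lands on them:

gurkulSolaris:root# metastat -p | egrep "^d50|^d60|^d70|^d80"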

Remount the new file systems using the metadevices instead of the raw EMC pseudo devices:

gurkulSolaris:root# umount /exportdata/tempdb-new
gurkulSolaris:root# umount /exportdata/tempdb1-new
gurkulSolaris:root# umount /localfs/app/SYBDB-new
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mount /dev/md/dsk/d50 /exportdata/tempdb-new
gurkulSolaris:root# mount /dev/md/dsk/d60 /exportdata/tempdb1-new
gurkulSolaris:root# mount /dev/md/dsk/d70 /localfs/app/SYBDB-new
gurkulSolaris:root# mount /dev/md/dsk/d80 /localfs/SYBDB_dumps_1-new

Verify the mounts with the df command:

gurkulSolaris:root# df -hl|grep md
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb
/dev/md/dsk/d28 33G 1.1G 32G 4% /localfs/SYBDB_dumps_1
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB
/dev/md/dsk/d50 25G 25M 24G 1% /exportdata/tempdb-new
/dev/md/dsk/d60 20G 20M 19G 1% /exportdata/tempdb1-new
/dev/md/dsk/d70 4.9G 5.0M 4.9G 1% /localfs/app/SYBDB-new
/dev/md/dsk/d80 35G 36M 35G 1% /localfs/SYBDB_dumps_1-new

Copy the data from the old file systems to the new file systems:

gurkulSolaris:root# cd /exportdata/tempdb1
gurkulSolaris:root# tar cf - . | (cd /exportdata/tempdb1-new; tar xfp -)
gurkulSolaris:root# cd /exportdata/tempdb
gurkulSolaris:root# tar cf - . | (cd /exportdata/tempdb-new; tar xfp -)

gurkulSolaris:root# cd /localfs/app/SYBDB
gurkulSolaris:root# tar cf - . | (cd /localfs/app/SYBDB-new; tar xfp -)

gurkulSolaris:root# cd /localfs/SYBDB_dumps_1
gurkulSolaris:root# tar cf - . | (cd /localfs/SYBDB_dumps_1-new; tar xfp -)
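tar works well for this lab, but for UFS file systems an alternative that preserves ACLs, sparse files and device files more faithfully is ufsdump piped into ufsrestore; a hedged example for one file system (ideally run while the application is stopped), followed by a rough size comparison:

gurkulSolaris:root# ufsdump 0f - /exportdata/tempdb | (cd /exportdata/tempdb-new; ufsrestore rf -)
gurkulSolaris:root# du -sk /exportdata/tempdb /exportdata/tempdb-new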

Once the data copy is complete, we can unmount both the temporary and the old file systems, and then mount the new metadevices on the original mount points as given below.

gurkulSolaris:root# umount /exportdata/tempdb-new 
gurkulSolaris:root# umount /exportdata/tempdb1-new 
gurkulSolaris:root# umount /localfs/app/SYBDB-new 
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1-new 

gurkulSolaris:root# umount /exportdata/tempdb
gurkulSolaris:root# umount /exportdata/tempdb1
gurkulSolaris:root# umount /localfs/app/SYBDB
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1

gurkulSolaris:root# mount /dev/md/dsk/d50 /exportdata/tempdb
gurkulSolaris:root# mount /dev/md/dsk/d60 /exportdata/tempdb1
gurkulSolaris:root# mount /dev/md/dsk/d70 /localfs/app/SYBDB
gurkulSolaris:root# mount /dev/md/dsk/d80 /localfs/SYBDB_dumps_1

 

Then modify /etc/vfstab so that the new file systems are mounted at boot, as below:

gurkulindia-Sybase:root# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#

:::: file truncated for other entries :::

/dev/md/dsk/d70 /dev/md/rdsk/d70 /localfs/app/SYBDB ufs 2 yes logging
/dev/md/dsk/d60 /dev/md/rdsk/d60 /exportdata/tempdb1 ufs 2 yes -
/dev/md/dsk/d80 /dev/md/rdsk/d80 /localfs/SYBDB_dumps_1 ufs 2 yes -
/dev/md/dsk/d50 /dev/md/rdsk/d50 /exportdata/tempdb ufs 2 yes logging
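The old Clariion-backed metadevice entries should no longer be mounted at boot. While the old storage stays attached for the observation period, they can simply be commented out, for example:

#/dev/md/dsk/d27 /dev/md/rdsk/d27 /exportdata/tempdb1 ufs 2 yes -
#/dev/md/dsk/d29 /dev/md/rdsk/d29 /exportdata/tempdb ufs 2 yes logging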

 

Usually the database team will also request some raw devices from the new storage for Sybase use. We create them as shown below.

Actual DBA requirement for raw devices:

Device Name               Size   Disk
SYB_raw1 (for DBA use)    30GB   emcpower9a
SYB_raw2 (for DBA use)    20GB   emcpower10a

Format the disks emcpower9a and emcpower10a and create a single slice of the required size on each. Then change the ownership of the Sybase raw devices to sybase:sybase as shown below.

# chown sybase:sybase /devices/pseudo/emcp@9:a,raw
# chown sybase:sybase /devices/pseudo/emcp@10:a,raw

# ls -l /devices/pseudo/emc* | egrep "@9|@10" | grep sybase
crw------- 1 sybase sybase 314, 81 Sep 10 14:40 /devices/pseudo/emcp@10:b,raw
crw------- 1 sybase sybase 314, 83 Sep 10 14:40 /devices/pseudo/emcp@10:d,raw
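The DBA team will normally reference these raw devices through their /dev/rdsk links; ownership on the underlying device nodes can be double-checked through those links as well:

gurkulSolaris:root# ls -lL /dev/rdsk/emcpower9a /dev/rdsk/emcpower10a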

 

Once the raw devices are ready, we can inform the DBA team that both the file systems and the raw devices are available from the new storage. The data replication from the old raw devices to the new raw devices has to be done by the Sybase team. Once that is all done, we leave the server running for a day to watch for any unexpected issues; if any are found, we can roll back to the old devices by mounting them back on their original mount points. If no errors are found, we clean up the old storage so that the storage team can reclaim the Clariion disks.

I will be posting the cleanup procedure in a separate post, so stay connected. Meanwhile, you are most welcome to drop your comments and feedback on this post.

 

 
