SAN Storage Migration – Solaris with VxVM

This is a follow-up to my previous SAN migration post, Solaris host-level SAN migration (with SVM) from Clariion to VMAX. As discussed in that post, many organisations have started migrating their server storage from old legacy SAN arrays (e.g. EMC Clariion) to newer, more powerful arrays (e.g. EMC Symmetrix VMAX) because of the poor performance and high maintenance costs of the legacy storage.

Depending on the budget allocated to the storage migration project, some organisations prefer to migrate by direct storage-level data replication using expensive migration tools, while others (with limited budgets) prefer host-level data replication. With the latter method, the success of the migration project depends directly on the skill and expertise of the UNIX administrator implementing it. I believe this hands-on post will give some ideas to Solaris admins who are responsible for storage migrations. Before going through this post, you might want to refer to this post for the pre-planning tasks.

Platform Environment used for this Post

Server List                       : gurkulsol10
OS Build                          : Solaris 10
Volume Manager (SVM/SDS/VxVM)     : VxVM 5.0
HA Software (VCS / Sun Cluster)   : NA
Server Model                      : Sun V240


Storage Information

Server List                       : gurkulsol10
Volume Management                 : Veritas (VxVM)
Multipathing                      : EMC PowerPath
HBA Card Firmware (***)           : Emulex LP9002L 3.92a3
                                    Emulex LP9002L 1.41a4
Volume Information
(DiskGroup - Volume - Size)       : sybdg - backup - 60G
                                    sybdg - home - 60G
Total LUNs                        :
Total Size of LUNs                : 60G (shared)


*** To check the Emulex HBA firmware version on Solaris 10:

 

# /opt/EMLXemlxu/bin/emlxadm

-> Select the HBA
-> run "get_fw_rev"
-> run "get_fcode_rev"
 

Phase 1 - Migration Check-list for UNIX Tasks

S.No.   Unix Task                                      Completion Status
1       Unix pre-checks (collecting data)
2       Confirm application shutdown
3       Reboot the server (reconfiguration boot)
4       Check new LUNs / multipath / disks
5       Mirror existing volumes with the new storage
6       Final reboot
7       Server Unix checkout

 

Phase 2 - Clearing Old Storage

S.No.   Unix Task                                      Completion Status
1       Detach the old storage plexes from the volumes
2       Detach the disks from the disk groups
3       Clean up old LUN entries
4       Perform a reconfiguration reboot
5       Unix checkout

 

 

 

 Phase 1 : Unix Tasks For Migration:

 

Step 1. Unix Pre Checks

 

mkdir -p /var/tmp/st-migration
echo | format                    > /var/tmp/st-migration/format.old
vxdisk -o alldgs list            > /var/tmp/st-migration/vxdisk-oalldgs-list.old
vxprint -ht                      > /var/tmp/st-migration/vxprint-ht.old
powermt display dev=all          > /var/tmp/st-migration/powermt-display-dev-all.old
fcinfo hba-port                  > /var/tmp/st-migration/fcinfo-hba-port.old
df -hl                           > /var/tmp/st-migration/df-hl.old
svcs -x                          > /var/tmp/st-migration/svcs-x.old
svcs -a                          > /var/tmp/st-migration/svcs-a.old
netstat -rn                      > /var/tmp/st-migration/route-table.old
powermt save file=/var/tmp/st-migration/powermt.backup
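
In addition to the captures above, it can help to keep copies of the filesystem table and the disk-group lists as rollback references. These are simple additions that follow the same /var/tmp/st-migration convention used in this post:

cp -p /etc/vfstab                  /var/tmp/st-migration/vfstab.old
vxdg list                        > /var/tmp/st-migration/vxdg-list.old
vxdisk list                      > /var/tmp/st-migration/vxdisk-list.old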

 

 

 

Step 2.     Confirm all applications are shut down:

 

a. # ps -ef | grep -v root
   Look for any application-specific processes. If any are found, discuss them in the chat channel and get advice before killing them (a hypothetical filter example is shown below).
b. Put the server into monitoring maintenance mode (blackout) to avoid alerts during the maintenance window.
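
For example, a quick filter for typical database or application daemons might look like the line below; the process-name patterns (sybase, ora_, pmon, java) are only hypothetical placeholders for whatever actually runs on your host:

# ps -ef | egrep -i "sybase|dataserver|ora_|pmon|java" | grep -v grep
--> the patterns above are placeholders; adjust them for your applications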

 

 Step 3.   Perform Reconfiguration Reboot

 

 

For Solaris 10:
# devfsadm
In case the storage was allocated from a completely new array, you might need a reconfiguration reboot instead:
# sync; sync; reboot -- -r

For Solaris 8:
Configure /kernel/drv/sd.conf (with the necessary target entries) and /kernel/drv/lpfc.conf (with the new WWPN numbers), then:
# sync; sync; reboot -- -r

For Linux:
# reboot
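
On Solaris 10 you can also confirm that the fabric actually presents the new LUNs before involving PowerPath. This is a hedged check using the standard FC administration commands (controller numbers will differ on your host):

# cfgadm -al -o show_FCP_dev
--> the new LUNs should be listed under the fabric-connected controllers
# luxadm -e port
--> every HBA port that is zoned to the new array should show CONNECTED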

 

 

 

Step 4.     Detect New LUNs and Check Multipathing

 

For Solaris 10:

Check the multipathing; all paths should be active.

# powermt display dev=all

# powercf -q
# powermt config
# powermt save

# powermt display dev=all > /var/tmp/st-migration/powermt-display-dev-all.new

--> Check for the new paths added to the list and their status:
# sdiff -s /var/tmp/st-migration/powermt-display-dev-all.old /var/tmp/st-migration/powermt-display-dev-all.new

Check the disk devices added to the system and compare with the old data.

Note: the new disks should be visible; count the number of new entries (total number of new entries = number of new disks x number of paths).

# echo | format > /var/tmp/st-migration/format.new
# sdiff -s /var/tmp/st-migration/format.old /var/tmp/st-migration/format.new
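
As a rough cross-check of the LUN count, you can also count the pseudo devices PowerPath now manages. The grep string below matches the "Logical device ID" line that powermt display prints per device; adjust it if your PowerPath version formats the output differently:

# powermt display dev=all | grep -c "Logical device ID"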

 

Step 5. Add new disks as mirror disks to the existing volumes

 

5.a. Add the new disks to VxVM and to the Veritas disk group, following the existing naming standard

Example:

Check the current disk names in the "vxdisk -o alldgs list" output, and follow the same naming convention for the new disks.

Current disks:

emcpower0s2  auto:cdsdisk    CLAR_0041  sybdg      online
emcpower1s2  auto:cdsdisk    CLAR_0052  sybdg      online

Adding the new disks:

# /etc/vx/bin/vxdisksetup -i <new_device>
# vxdg -g <dgname> adddisk <new_diskname>=<new_device>
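
A worked example, purely hypothetical, assuming the first new VMAX LUN shows up as emcpower2 and we keep the existing naming style by calling it VMAX_0001 in sybdg:

# /etc/vx/bin/vxdisksetup -i emcpower2
# vxdg -g sybdg adddisk VMAX_0001=emcpower2
# vxdisk -o alldgs list
--> VMAX_0001 (emcpower2s2) should now show as online in sybdg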

 

5.b.    Map each new disk to an old disk, based on size.

 

 

# echo | format > /var/tmp/st-migration/format.new

Check the new disk entries in the format output and match each one to an old disk based on size:

# sdiff /var/tmp/st-migration/format.old /var/tmp/st-migration/format.new
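
For this example host the mapping is straightforward because the new LUNs were requested with the same sizes as the old ones; a small worksheet (with hypothetical new device names) is enough:

CLAR_0041 / emcpower0 (60G)  -->  VMAX_0001 / emcpower2 (60G)
CLAR_0052 / emcpower1 (60G)  -->  VMAX_0002 / emcpower3 (60G)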

 

 

5.c.    Add new disks as mirror components for the existing concat volumes

 

Repeat the steps below for all the volumes in all the imported disk groups.

# vxassist -g <dgname> mirror <volume_name> <new-disk-of-same-size>

Check the sync status:

# vxtask list

Check the status of the volumes; the sync should be complete, and all volumes should be in the ACTIVE state:

# vxprint -hv

Dissociate the OLD plexes from the mirrors:

# vxplex -g <dgname> dis <old_plex_name>

Rename the OLD plexes:

# vxedit -g <dgname> rename <old_plex_name> <old_plex_name>-OLD-CX
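
Putting 5.c together for one of the volumes in this post, a hypothetical run for the backup volume in sybdg would look like this (the old plex name backup-01 and the new disk name VMAX_0001 are illustrative and will differ on your system):

# vxassist -g sybdg mirror backup VMAX_0001
# vxtask list
--> wait for the mirror resync task to finish
# vxprint -g sybdg -hv
--> both plexes of the backup volume should be ENABLED/ACTIVE
# vxplex -g sybdg dis backup-01
# vxedit -g sybdg rename backup-01 backup-01-OLD-CX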

 

 

 Step 6.  Final Reboot

 

# sync; sync; init 6

 

Step 7. Unix Checkout

# vxprint -hv | grep "^v"
--> all volumes should be in the ACTIVE state

# df -hl > /var/tmp/st-migration/df-hl.new
# sdiff -s /var/tmp/st-migration/df-hl.old /var/tmp/st-migration/df-hl.new
--> all filesystems should be mounted as they were before the reboot

# svcs -a > /var/tmp/st-migration/svcs-a.new
# sdiff -s /var/tmp/st-migration/svcs-a.old /var/tmp/st-migration/svcs-a.new
--> all services should be running as they were before the reboot; in case of a mismatch, check "svcs -xv" for errors and troubleshoot the issue

# netstat -rn > /var/tmp/st-migration/route-table.new
# sdiff -s /var/tmp/st-migration/route-table.old /var/tmp/st-migration/route-table.new
--> check for missing routes and add any that are missing

 

 

 

Phase 2 : Clear Storage  – Post Migration

(It is recommended not to remove the old LUNs for one or two weeks after the migration, unless you have a high-priority need to reclaim and reuse them.)

 

Step 1 : Remove the old storage plexes that were dissociated from the volumes in Phase 1

# vxedit -g <dgname> -rf rm <old_plex_name>

Step 2 : Remove the disks from the disk groups and from VxVM

# vxdg -g <dgname> rmdisk <old_diskname>
# vxdisk rm <old_device>

Check that all volumes are still in the ACTIVE state:

# vxprint -hv
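
Continuing the hypothetical example from Phase 1, removing the old Clariion plex and disk for the backup volume would look like this (plex, disk and device names are illustrative):

# vxedit -g sybdg -rf rm backup-01-OLD-CX
# vxdg -g sybdg rmdisk CLAR_0041
# vxdisk rm emcpower0s2
# vxprint -g sybdg -hv
--> the remaining volumes should still be ENABLED/ACTIVE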

 

At this point you can ask the Storage team to unconfigure the LUNs attached to this host, and then proceed to the next step to clean up the links to the old devices.

Step 3 : Clean up the old LUN entries from the system and perform a reconfiguration reboot

 

For Solaris 10:

# devfsadm -C
--> this cleans up the stale device links

# sync; sync; reboot -- -r
--> the reconfiguration reboot cleans up all the old entries

For Solaris 8:

Edit /kernel/drv/sd.conf to remove the target entries related to the old storage LUNs.
Edit /kernel/drv/lpfc.conf to remove the old WWPN/LUN entries.

# touch /reconfigure; sync; sync; init 6
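
Once the storage team has unmapped the old LUNs, a hedged way to make sure PowerPath has also dropped the dead paths (see the PowerPath notes at the end of this post) is:

# powermt check force
# powermt save
# powermt display dev=all
--> no dead paths should remain in the output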

 

Step 4 : Unix Final checkout

# vxprint -hv | grep "^v"
--> all volumes should be in the ACTIVE state

# df -hl > /var/tmp/st-migration/df-hl.new
# sdiff -s /var/tmp/st-migration/df-hl.old /var/tmp/st-migration/df-hl.new
--> all filesystems should be mounted as they were before the reboot

# svcs -a > /var/tmp/st-migration/svcs-a.new
# sdiff -s /var/tmp/st-migration/svcs-a.old /var/tmp/st-migration/svcs-a.new
--> all services should be running as they were before the reboot; in case of a mismatch, check "svcs -xv" for errors and troubleshoot the issue

# netstat -rn > /var/tmp/st-migration/route-table.new
# sdiff -s /var/tmp/st-migration/route-table.old /var/tmp/st-migration/route-table.new
--> check for missing routes and add any that are missing

 

 

Additional notes on Power path:

 

powermt check – Check the I/O Paths

If you have made changes to the HBAs or the I/O paths, run powermt check so that PowerPath takes the appropriate action. For example, if you have manually removed an I/O path, the check command will detect the dead path and offer to remove it from the PowerPath configuration.

# powermt check

Warning: storage_system I/O path path_name is dead.

       Do you want to remove it (y/n/a/q)?

Note: If you want powermt to remove all dead paths automatically, without any confirmation, execute "powermt check force".

 

powermt config – Configure PowerPath

This command scans for available EMC SAN logical devices and adds them to the PowerPath configuration. The powermt config command also sets some options to their default values, for example write throttling = off, HBA mode = active, CLARiiON policy = CLAROpt, and so on.

Possible EMC SAN LUN policy values are: Adaptive, BasicFailover, CLAROpt, LeastBlocks, LeastIos, NoRedirect, Request, RoundRobin, StreamIO, or SymmOpt.

After you execute powermt config, if you don't want any of the default values, change them accordingly.

# powermt config
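
If you want a different load-balancing policy after running powermt config, powermt set can change it. The abbreviated policy keyword below (co for CLAROpt) is quoted from memory, so verify it against the powermt man page for your PowerPath version:

# powermt set policy=co class=clariion
# powermt save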

 

powermt restore – Bring Dead I/O Paths Back to Life

If you have dead I/O paths and you have done something to fix the underlying issue, you can ask PowerPath to re-check the paths and mark them alive again using the powermt restore command.

When you execute powermt restore, it performs an I/O path check. If a previously dead path is now alive, it is marked alive, and if a previously alive path is now dead, it is marked dead.

If, for some reason, the default owner and the current owner of a particular LUN are not the same storage processor, run the following command to move the LUN back to its default owner.

# powermt restore dev=all

Instead of dev, you can also specify class in the powermt restore command. Class can be one of the following, depending on your environment:

  • symm – Symmetrix
  • clariion –  CLARiiON
  • invista – Invista
  • ess – IBM ESS
  • hitachi – Hitachi Lightning TagmaStore
  • hpxp –  HP StorageWorks XP, or EVA series
  • hphsx – HP StorageWorks EMA, or MA
  • all – All systems
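
For example, to re-check only the paths on the old Clariion frame rather than every array attached to the host, you could run (using the same class keywords listed above):

# powermt restore class=clariion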

 

 



16 Responses

  1. Yogesh Raheja says:

    @Ram, perfect for Solaris 10.

  2. sekhar says:

    perfeclty good…

  3. pradeep Marneni says:

    Perfect RAM i Almost forgot veritas you brushed my mind thanks for your efforts

  4. Ramdev Ramdev says:

    Yeah I know pradeep. That happens when the relationship status changes from “single to engaged ” :)

  5. dheerendra says:

    @Ram…very useful step ….good one…thanks

  6. parag says:

    Hi,

    What is diif between cfgadm & luxadm cmds.

    in which scenario we will user this 2 cmds….

  7. sathiyaseelan says:

    Hi,

    There are two volumes, i have to migrate new storage.
    i am going to use  this commend
    vxassist -g mirror

    my doubt is disk 11 is using two volume so I am going to use like this what will happen.
    Kindly advice me.

    regards,
    Sathiya,

    v  db2ddv01     –            ENABLED  ACTIVE   1258291200 SELECT  –        fsgen
    pl db2ddv01-01  db2ddv01     ENABLED  ACTIVE   1258291200 CONCAT  –        RW
    sd DISK6-01     db2ddv01-01  DISK6    0        209715200 0        xiv0_282d ENA
    sd DISK11-02    db2ddv01-01  DISK11   1048576000 1048576000 209715200 xiv0_284c ENA

    v  db2ddv04     –            ENABLED  ACTIVE   1048576000 SELECT  –        fsgen
    pl db2ddv04-01  db2ddv04     ENABLED  ACTIVE   1048576000 CONCAT  –        RW
    sd DISK11-01    db2ddv04-01  DISK11   0        1048576000 0       xiv0_284c ENA

  8. sathiyaseelan says:

    There are two volumes, i have to migrate new storage.
    i am going to use  this commend
    vxassist -g mirror

    v  db2archive   –            ENABLED  ACTIVE   4194304000 SELECT  –        fsgen
    pl db2archive-01 db2archive  ENABLED  ACTIVE   4194304000 CONCAT  –        RW
    sd DISK2-02     db2archive-01 DISK2   209715200 1941113648 0      xiv0_2831 ENA
    sd DISK3-02     db2archive-01 DISK3   209715200 1941113648 1941113648 xiv0_2832 ENA
    sd DISK4-02     db2archive-01 DISK4   209715200 312076704 3882227296 xiv0_2833 ENA

    v  db2ddv07     –            ENABLED  ACTIVE   629145600 SELECT   –        fsgen
    pl db2ddv07-01  db2ddv07     ENABLED  ACTIVE   629145600 CONCAT   –        RW
    sd DISK7-03     db2ddv07-01  DISK7    1366146272 629145600 0      xiv0_282e ENA

    v  db2ddv95     –            ENABLED  ACTIVE   209715200 SELECT   –        fsgen
    pl db2ddv95-01  db2ddv95     ENABLED  ACTIVE   209715200 CONCAT   –        RW
    sd DISK3-01     db2ddv95-01  DISK3    0        209715200 0        xiv0_2832 ENA

    v  db2recovery  –            ENABLED  ACTIVE   4194304000 SELECT  –        fsgen
    pl db2recovery-01 db2recovery ENABLED ACTIVE   4194304000 CONCAT  –        RW
    sd DISK7-02     db2recovery-01 DISK7  629145600 737000672 0       xiv0_282e ENA
    sd DISK1-03     db2recovery-01 DISK1  322562464 1828266384 737000672 xiv0_2830 ENA
    sd DISK4-03     db2recovery-01 DISK4  521791904 1629036944 2565267056 xiv0_2833 ENA

    how to do it

  9. Ramdev Ramdev says:

Hi Sathiya, I think the formatting of the syntax somehow got corrupted and the vxassist -g mirror command was altered in the post. I have placed the right syntax there now. Please let me know if that still doesn't clarify your doubt.

  10. Yogesh Raheja says:

Hi Sathiya, two options will be available for you: if you want to play very, very safe, use vxassist -g mirror with the new storage. Second, if you think you are comfortable with VxVM, use the move option with vxassist, as: vxassist -g move volume !. A third option, which is not recommended and is very dangerous, is to use the vxevac command and move the entire disk contents to the new storage. We are in the process of posting the Veritas migration method with all outputs and commands at our gurkulindia very soon.

  11. Siva says:

    Hi Yogesh/Dev ,

    May i know, why the “vxevac” is very dangerous to perform migration in host level.

    Also, does there is any procedure to migrate the VxVM in storage array level instead of host level.

    Regards,
    Siva

  12. Ramdev Ramdev says:

Siva, the vxevac command moves all the subdisks from the source disk, one by one, in sequential order. If there is any issue relocating a particular subdisk, it fails that subdisk migration, and the entire command fails and quits. This will leave your migration task in an inconsistent state. To get back to a normal state, you either have to manually roll back to the original state or proceed with a manual migration of the remaining volumes after fixing the issue.

  13. Siva says:

    Hi Ramdev,

    Thanks for that, over to the procedure In of your step in phase 1
    ” Mirror existing Volumes with new storage” —> instead of this does there is any technology/procedure to proceed this mirroring in SAN array level and represent the new disks with the existing data under VxVM.

    Regards,
    Siva

  14. Yogesh Raheja says:

@Siva, yes, there is a method which is usually called COLD MIGRATION; downtime is required in that case, and the migration is done at the storage level, LUN by LUN. Usually that is not followed in industry, because it may cause many unseen issues with the LUNs' filesystems etc. initially after the migration. But, to answer your question, there is a well-defined method from the storage end.

