ZFS Patching with Zones using LU (Liveupgrade) in Solaris.

SOLARIS LIVE UPGRADE:

Solaris Live Upgrade enables the operating system to continue running while upgrades, patch installations, or routine maintenance operations are performed. System administrators can patch a system image rapidly without impacting the boot environment the Solaris OS needs to run: they simply create a copy of the active boot environment, make changes to the copy, and reboot into the updated boot environment when appropriate. In the event of a problem, administrators can revert to the previous environment with a simple reboot. The result is a simplified way to update systems that minimizes the downtime and risk often associated with patching efforts.
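At a high level the entire cycle is only a handful of commands. A minimal sketch (the boot environment name ZFS-ABE and the patch paths are illustrative; the full walkthrough follows below):

# lucreate -n ZFS-ABE

# luupgrade -n ZFS-ABE -t -s /var/tmp/10_Recommended/patches `cat /var/tmp/10_Recommended/patches/patch_order`

# luactivate ZFS-ABE

# init 6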

ZFS Patching + Zones with Liveupgrade:

We have successfully patched Solaris OS under ZFS with zones using Live Upgrade, with the only downtime being the single reboot needed to activate the patched boot environment. We did this using the traditional patching method, i.e. we downloaded the patch bundle from the Oracle site and placed it on the server.

Points To Remember:

1.)    /opt must be kept under / while performing the patching. A bug reported during UFS-to-ZFS migration states that /opt should remain inside / (as recommended by Sun and documented in the Oracle Solaris ZFS documentation). For any ZFS migration, /opt must live under / itself; otherwise you will see many unexpected results, and hence failures, during ZFS activities.

2.)    To prevent unsuccessful ABE creation (Reference CR: 7073468), nothing may be mounted under /zones/<zone-name>-FS/, i.e. do not mount a filesystem configured within the zone as a descendant of the zonepath mountpoint. This was the major bug hitting our testing (see the verification sketch below).
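Both points can be checked quickly before creating the ABE. A minimal verification sketch (the /zones paths are from this example setup):

# df -k /opt

(The output should show /opt residing on the root (/) filesystem, not on a separate dataset.)

# zfs list -o name,mountpoint | grep /zones

(No mountpoint should sit below a zonepath, e.g. under /zones/<zone-name>-FS/.)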

PRE-PATCH WORK: 

1. Sanity reboot, to ensure the server boots cleanly without issues.

# init 6

Check and record the following server outputs:

# df -k

# zfs list

# zpool list

# zpool status

# zoneadm list -cv

# uname -a

# uptime

# ifconfig -a

Note: Kindly run Sun Explorer (to capture a support snapshot of the configuration) before proceeding with the activity.
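To make before/after comparison easier, the same outputs can be captured to a file. A small hypothetical helper (the script name and output path are illustrative):

#!/bin/sh
# prepatch-state.sh - record pre-patch system state for later comparison
OUT=/var/tmp/prepatch-`date +%Y%m%d`.out
for CMD in "df -k" "zfs list" "zpool list" "zpool status" \
    "zoneadm list -cv" "uname -a" "uptime" "ifconfig -a"
do
    echo "### $CMD" >> $OUT
    $CMD >> $OUT 2>&1
done

Run it once before lucreate and once after the post-patch reboot, then diff the two files.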

2. Proceed with the creation of the Alternate Boot Environment (ABE).

# lustatus

# lucreate -n ZFS-ABE

# lustatus

3. Create snapshots of all datasets in every zpool other than the root pool (the ABE already covers the root pool).

Here we have only zfstestpool, so we use it as the example (zfs snapshot -r zfstestpool@patching):

# zfs list

# zfs list -t snapshot

# zfs snapshot -r zfstestpool@patching

# zfs list -t snapshot
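Should the data in zfstestpool ever need to be reverted, each dataset can be rolled back to its @patching snapshot; zfs rollback works per dataset, hence the loop in this hedged sketch. Once the patching is signed off, destroy the snapshots to free space:

# for fs in `zfs list -H -o name -r zfstestpool`; do zfs rollback $fs@patching; done

# zfs destroy -r zfstestpool@patching

(Run the destroy only after final sign-off.)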

PATCHING:

We downloaded the latest Recommended patch cluster from the Oracle (Sun) site and applied it to the ABE while all services stayed up and running on the BE (the currently booted environment); the patching itself caused no downtime.

1. Download the 10_Recommended patch cluster, place it under /var/tmp, unzip it, and apply it to the ABE, as sketched below.
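The staging looks something like this (the exact cluster file name varies with the download; the name shown is an assumption):

# cd /var/tmp

# unzip 10_Recommended.zip

# ls /var/tmp/10_Recommended/patches/patch_order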

# uname -a

SunOS yogesh-test 5.10 Generic_147440-01 sun4v sparc sun4v

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      yes    yes       no     -
ZFS-ABE                    yes      no     no        yes    -

# luupgrade -n ZFS-ABE -t -s /var/tmp/10_Recommended/patches `cat /var/tmp/10_Recommended/patches/patch_order`

# lustatus

# luactivate ZFS-ABE

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      yes    no        no     -
ZFS-ABE                    yes      no     yes       no     -

# init 6

2.) Now the server will come up on ZFS-ABE with the patched version. Perform health checks on the server.

# uname -a

SunOS yogesh-test 5.10 Generic_147440-15 sun4v sparc sun4v

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      no     no        yes    -
ZFS-ABE                    yes      yes    yes       no     -

# zoneadm list -cv

  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zonetest1        running    /zones/zonetest1               native   shared
   2 zonetest2        running    /zones/zonetest2               native   shared
   3 zonetest3        running    /zones/zonetest3               native   shared

# svcs -xv

# zfs list

# zpool list; zpool status

# uptime

BACKOUT PLAN:

The backout plan is very simple: we just have to activate the older boot environment and reboot the server. The server will come up with the older environment, i.e. BE.

1.) Activate the previously booted BE and reboot the server.

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      no     no        yes    -
ZFS-ABE                    yes      yes    yes       no     -

# luactivate BE

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      no     yes       no     -
ZFS-ABE                    yes      yes    no        no     -

# init 6  ***

2.) The server will come up with the older environment and its complete previous configuration.

# lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE                         yes      yes    yes       no     -
ZFS-ABE                    yes      no     no        yes    -

# uname -a

SunOS yogesh-test 5.10 Generic_147440-01 sun4v sparc sun4v

3.) Perform health checks on the server again.

# zoneadm list -cv

# svcs -xv

# zfs list

# zpool list; zpool status

# uptime

SPECIAL NOTES:

1.) Upon getting confirmation from the application/database teams, we can delete the older environment using the ludelete command.

# ludelete BE

2.) If you face any problems while booting the server from BE (*** the reboot into the older boot environment during BACKOUT), proceed with the steps below:

a.)   Boot the server in failsafe mode.

OK> boot -F failsafe

b.)   Mount the root dataset of the boot environment you want to return to (here ZFS-ABE) on some directory (like /mnt or /a). You can use the following commands in sequence to mount it:

# zpool import rootpool

# zfs inherit -r mountpoint rootpool/ROOT/ZFS-ABE

# zfs set mountpoint=/mnt  rootpool/ROOT/ZFS-ABE

# zfs mount rootpool/ROOT/ZFS-ABE

c.)    Now activate the boot environment and reboot your server.

# /mnt/sbin/luactivate

# init 6

Yogesh Raheja

Yogesh works as a consultant in Unix engineering. He has many years of experience in Solaris, Linux, AIX, and Veritas administration, and holds SCSA9, SCSA10, SCNA10, VXVM, VCS, and ITILv3 certifications. He is passionate about sharing his knowledge with others. Specialties: expertise in Unix/Solaris servers, Linux (RHEL), AIX, Veritas Volume Manager, ZFS, Live Upgrade, storage migrations, and cluster deployment (VCS and HACMP) and administration, with upgrades across banking, telecom, IT infrastructure, and hosting services.

19 Responses

  1. Rahul says:

    Hey,

    can u please provide me with “RAIDZ” tutorial, if possible ?

  2. santosh says:

    I have not clearly understood the special note 2.

  3. nikhil says:

    hi,

    Currently I want to start with the ZFS admin PDF but I am confused about which PDF is good for understanding ZFS, and also about whether to read the Solaris 10 PDF or the Solaris 11 PDF now.

    Kindly share the PDF link to start learning ZFS.

  4. Martha says:

    I was very interested to read how you accomplished this without downtime, which is stated in your second paragraph; however, I counted 4 init 6 commands in the article. How does this make it no downtime?

  5. Ramdev Ramdev says:

    Hi Martha, nice point :). As you are already aware, “with the traditional procedure (without Live Upgrade) we need to have the server down while installing the patches”.
           In this doc, Yogesh’s “no downtime” statement refers to the specific part of the procedure (i.e. create a new BE and apply the patch to the new BE). For the complete procedure we definitely need downtime to activate the newly patched boot environment.

  6. Yogesh Raheja says:

    @Ram, thanks. @Martha, as Ram mentioned, the downtime is required to activate the new patched kernel. To date there is NO OS which activates a patched kernel without a reboot. But there are some ways in which the downtime can be minimized, for example: Live Upgrade for Solaris, multibos for AIX, YUM for Linux. In all cases you can upgrade your OS/patches on an ABE (Alternate Boot Environment), but in order to activate the ABE you need at least a single reboot. I hope this clears your doubt; otherwise we are always here to assist you with your doubts. :-) Cheers.

  7. Prince says:

    We have ZFS on root disk (rpool).
    We have 5 zones as well but they are from external storage and in different pool of ZFS (non root pool).

    Is there any different procedure to apply a patch cluster on it? How will the zones be upgraded, and how can we roll back the zones once they are upgraded to NEW_BE?
    Kindly help.

  8. saikant gajula says:

    Really nice document.
    Just to add few Point To Remember for zone specific,ZFS live upgrade:-
    1. Read the patch README carefully to ensure there is enough free space under root and /var as per the document, otherwise your patching will fail.
    2. Each zone should have its own root filesystem, i.e. a ZFS dataset, with the recommended free space. In case you don't, please do the needed migration.
    3. In case you have local datasets added to zones, this might cause a few problems with Live Upgrade (also reboots); I had a similar issue, so I had to remove the dataset from the zone.cfg of the particular zone and then add it back after Live Upgrade finished.

  9. jairo says:

    Hi, how can I migrate or convert file systems from VxVM to Oracle zpools? Does some document exist?

  10. knight_owl says:

    Nice document … Couple of questions. What are you doing with the snapshots, management-wise? You do not have any discussion about how to restore from a snapshot if something were to go awry. Also, if everything goes as planned with the patching, are you leaving the snapshots out there or are you removing them?

    I am looking for ideas for patching s10 systems which have ZFS for the root file systems (rootpool) and have multiple zones on SAN attached storage (zonepool).  I am thinking snapshots are the way to go, but am looking for someone else who has experience  with this method.

  11. Maheshbabu says:

    Sir,

    Please post how patching happens on Linux servers.

  12. Elumalai says:

    Dear Ramdev,

    In our environment we have T3-4 model Solaris 10  8/11 release running with Solaris 10 and Solaris 9 containers. We are planning to apply the latest recommended patch cluster Solaris 10 1/13. 

    Please clarify the below points.

    1. I think the Solaris 10 zones do not need to be stopped while creating the alternate ABE and applying the patch?

    2. The Solaris 9 container should be detached before creating the ABE and applying the patch. Once the upgrade is finished, can we reattach the Solaris 9 container?

  13. Senthil says:

    hi ramdev,
    while we upgrade the patches it shows the root partition is busy, so in that situation we need to unmount.

  14. Jay says:

    Ramdev,

    1 ) Is there any option to create an ABE without halting the branded zones?
    2 ) The zones reside in a different pool; what is the recommended free space in the pool where the zones reside?
    3 ) If the zones reside in a different pool, how do we overcome the mount point complications after the reboot?

  15. Balu says:

    I have / running on rpool and 3 zones running on zonepool. How do I create a new boot environment including both pools, i.e. with / and the zone roots in a single BE?

