General Procedure for Kernel Patching in Solaris.


Yogesh Raheja

Yogesh works as a Consultant in Unix Engineering and has many years of experience in Solaris, Linux, AIX and Veritas administration. He is certified in SCSA9, SCSA10, SCNA10, VXVM, VCS and ITILv3, and is passionate about sharing his knowledge with others. Specialties: expertise in Unix/Solaris servers, Linux (RHEL), AIX, Veritas Volume Manager, ZFS, Live Upgrade, storage migrations, cluster deployment (VCS and HACMP), and administration and upgrades for banking, telecom, IT infrastructure, and hosting services.


95 Responses

  1. Shailender Singh says:

    @Yogesh, nothing better than this. Wow, crystal clear. You rock, man. Please keep us posted with these wonderful sharings of yours. Hats off to your efforts.

  2. Yogesh Raheja says:

    @Shailender, thank you very much for your words. Yes, we will keep you posted to the best of our ability. :)

  3. Rajnesh Phogat says:

    @Yogesh, an awesome piece of patching documentation from this wonderful guy. I really salute your efforts for this free knowledge sharing. You are truly rocking, man. One day you will be among the shining stars of Solaris experts.

  4. Yogesh Raheja says:

    @Rajnesh, your words are highly motivating us. Thanks again for your interest in our posts.

  5. Yogesh Raheja says:

    @Ram, Thank you very much!..:-)

  6. Brian King says:

    You might want to look into Live Upgrade. It has a built-in feature for splitting the mirrors of an SVM-mirrored root (see the detach, attach, and preserve options on lucreate), and it allows you to do all of this patching without taking a long outage. The only outage is the reboot after the patching is complete. The general outline of the procedure is: 1) lucreate 2) luupgrade (to add the patches), or the patch bundle/cluster now has a built-in option to patch a BE 3) luactivate (to set the patched BE as the one to boot) 4) init 6. Once you move to ZFS root instead of UFS root with SVM, it gets even easier, because lucreate will automatically use ZFS snapshots and eliminate the need for splitting mirrors.
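The Live Upgrade outline above can be written down as a runbook. This is a dry-run sketch: the script only prints each command so it can be reviewed before running on a real host, and the BE names (Sol10, Sol10patched) and patch directory are hypothetical.

```shell
#!/bin/sh
# Dry-run sketch of the Live Upgrade patching outline. The script only
# prints each command; BE names and the patch directory are hypothetical.
run() { echo "WOULD RUN: $*"; }

run lucreate -c Sol10 -n Sol10patched                        # 1) create the alternate BE
run luupgrade -t -n Sol10patched -s /var/tmp/10_Recommended  # 2) apply the patches to it
run luactivate Sol10patched                                  # 3) set the patched BE to boot
run init 6                                                   # 4) reboot into it
```

Swapping the `echo` out of `run` turns the printed plan into the real sequence, which keeps review and execution in one place.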

    • Ramdev says:

@Brian – agree with you. This post is for those sysadmins who are still supporting old Solaris environments. Thanks for your comment; please feel free to comment on other posts if you find anything useful.

  7. Great one man. Thanks for this.

  8. Brian King says:

    Ok. Live Upgrade should work back as far as Solaris 8.
    For anyone using this procedure, at step 11, you should not use the reboot command. It bypasses some of the normal shutdown procedure. Sun/Oracle recommends using init or shutdown. A reconfigure reboot is triggered by either: touch /reconfigure ; init 6 OR “init 0” followed by “boot -r”. Since you are in single user mode, most of the time, the use of “reboot” won’t cause you any problems, but it’s still not the recommended way.
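The flag-file variant mentioned above can be demonstrated harmlessly. In this sketch a scratch directory stands in for /, so nothing on the real system is touched; on a real host you would touch /reconfigure itself and then run init 6.

```shell
#!/bin/sh
# Harmless demonstration of the reconfigure flag file: a scratch directory
# stands in for / here, so nothing on the real system is touched.
FAKE_ROOT=$(mktemp -d)

touch "$FAKE_ROOT/reconfigure"   # on a real host: touch /reconfigure
ls "$FAKE_ROOT"                  # prints: reconfigure
# ...followed by: init 6. At boot the kernel rebuilds the device tree and
# the flag file is removed, the same effect as 'boot -r' from the OK prompt.
```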

  9. ramdev says:

    @Brian – we found "reboot -- -r" riskless and fast to use in single-user mode, when the volume of servers we work on in a weekend is in the hundreds. The other two options (touch /reconfigure and init 6, or "boot -r") we normally use when we have maintenance that requires a reboot from multiuser mode. Anyway, we appreciate your point about safety.

  10. Yogesh Raheja says:

    @Brian, I appreciate your interest in our posts. We have posted this patching procedure for beginners so that they will find kernel patching as simple as possible. Live Upgrade is the method we will cover in a post in the near future, as I am planning to post OS patching/release upgrade/OS upgrade via LU. But before moving to that we would like to present all the major activities in a much simpler way, so that a beginner going through our posts will find all the concepts laid out in a step-by-step approach.

  11. Yogesh Raheja says:

    @Kishor, you are most welcome.

  12. Yogesh Raheja says:

    @Brian, in step 11 your point is valid if we execute only the "reboot" command, which bypasses all rc scripts (i.e. the recommended shutdown procedure). But here we are executing a reconfigure reboot (i.e. reboot -- -r). I also agree with Ram that it is riskless. Moreover, if you read the readme file for any patch that needs a reconfigure reboot, you will see all the options (boot -r, touch /reconfigure, reboot -- -r), which means all are safe to use. I follow the same command while doing patching/upgrades via LU. Thanks for your suggestion; kindly keep us posted with all your suggestions.

  13. Bas says:

    Good stuff, Yogesh. We can take Explorer output also before doing it. Thank you.

  14. Yogesh Raheja says:

    @Bas, yes, I forgot to mention that. It's best practice to run Explorer before doing any activity. Thank you.

  15. Daniel Wainstein says:

    Excellent procedure, congrats. Just a suggestion: instead of taking a config snapshot with an ad-hoc script, use Explorer – it captures everything.

  16. Yogesh Raheja says:

    @Daniel, thanks for your words. Yes, explorer is must to run before proceeding with any activity. I will amend this in special notes.

  17. Kevin Hunt says:

    Instead of modifying the root slices in vfstab & /etc/system you can use the metaroot command with the -R switch,
    i.e. metaroot -R /mnt /dev/dsk/c0t1d0s0
    This will fix the /etc/system & vfstab entries – one easy step. I agree with the comment about Live Upgrade.

  18. Yogesh Raheja says:

    Thanks for the info. Kevin.

  19. Ramdev says:

    @kevin – that’s nice tip. Thanks.

  20. Nilesh Joshi says:

    Hi,

    Keep it up, nice break up for patching procedure for those who are at basic level or want to learn the things from a basic level.

    The procedure given above is still being followed by many small scale setups and as I said this information gives good insights to patching topic of Solaris.

    I would highly recommend & prefer using Live Upgrade for patching on any Solaris version (8, 9 & 10) along with patching tool called PCA.

    Benefits using Live Upgrade –

    1. As rightly said by Brian, a built-in feature for splitting the mirrors of an SVM-mirrored root (see the detach, attach, preserve options on lucreate),
    e.g. –
    # lucreate -c Sol10 -n Sol10p -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/md/dsk/d2:detach,attach -m /var:/dev/md/dsk/d13:ufs,mirror -m /var:/dev/md/dsk/d3:detach,attach

    2. Less downtime (not more than 15-20 mins)

    3. A better backout option. In case something breaks after patching, revert to the alternate BE; again that doesn't take much downtime – same as above.

    4. An appropriate option for those Solaris boxes that have zones installed.

    Thanks,
    Nilesh

  21. Yogesh.Raheja says:

    @Nilesh, thanks for the comments. I totally agree with your point that LU is the method which requires a single reboot to activate the ABE (alternate boot environment).

  22. Vishvas says:

    Thanks Yogesh for the wonderful article. Please post a detailed procedure for Live Upgrade.

  23. Yogesh Raheja says:

    You are welcome @Vishvas, surely I will post the same post via LU. :-)

  24. sreekanth says:

    OS patching Solaris 10:
    1. After the mirror break we can also run bootadm update-archive -v -R /mnt (mirror disk).

    If the /var file system is full, we run:
    Make a backup copy
    # find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 -exec ls -l {} \;
    # find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 | cpio -pdv /net/my4500/backout/`uname -n`

    Free up space in /var
    # find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 | xargs rm -f

  25. Yogesh Raheja says:

    @Sreekanth, thanks for your comments. Firstly: bootadm is primarily for the x86 architecture, where it updates the boot archive on the server, but yes, we can use it on SPARC machines as well. Secondly, you can also delete these undo.Z & obsolete.Z files, but normally you would delete some old log files to clear up the space.

  26. Raj says:

    Excellent patching procedure! It is very useful not only for beginners but at times for experts as well.
    Expecting the same for an OS Live Upgrade as well.

  27. Yogesh Raheja says:

    @Raj, Thanks for your words. The expectations mentioned will be converted into reality soon :)

  28. Ammiraju says:

    @Yogesh , It’s really very useful stuff. Thank you very much for this article.

  29. Yogesh Raheja says:

    @Ammiraju, thank you very much! If I am correct, you are Ammiraju Attauri. :)

  30. Ammiraju says:

    Yes. You are always correct and smart.

  31. Yogesh Raheja says:

    @Ammiraju, Nice to see you on our Gurkulindia and thanks for your interest in our posts.

  32. Sunil.ch says:

    @Yogesh, good stuff on how to do patching.

  33. Yogesh Raheja says:

    @Sunil, thanks mate!!..

  34. Santosh says:

    Hi Yogesh,
    thanks for the excellent post, it helped me understand a lot.
    A small doubt: in case we are using VxVM on our server, which files need to be backed up?
    In this post you showed the backup of SVM files.
    And also:
    How do we break a VxVM mirror?
    And in the case of VxVM, what would steps 6, 8 and 9 be?

  35. Avatar Yogesh Raheja says:

    @Santosh, we will preparing a post for the same and will publish it very soon. That will give you step by step approach for vxvm.

  36. Avatar Santosh says:

    Yogesh, i have a small doubt..
    while kernel patching we are taking the backup of mirror disk and Root risk, so what is the purpose of step 6 & 7?
    and also why to test boot with the both the disk’s separately i.e. Step 9?
    sorry if this is a small doubt, i am new to this course so just learning all the things..tht’s it.

    • Ramdev says:

      @santhosh – consider the situation that the second disk, detached from the root mirror, failed either by human error or disk block errors. An extra full backup will save us from such unexpected hardware issues during patching/upgrades. About the second-disk booting: sometimes the second disk will show symptoms of boot block errors after detaching from the root mirror. We want to make sure we don't have that situation.

  37. Yogesh Raheja says:

    @Santosh, the backout plan in any activity is even more important than the activity itself, because any downtime after a failure of the activity can be avoided if the backout plan and testing prior to the activity have been performed perfectly. That's the reason the second disk is detached and kept safe: to avoid any downtime in case of failure. Ram has also explained the purpose of the second disk.

  38. Santosh says:

    Thank you so much Yogesh & Ramdev, you people ROCK.
    Thanks for helping me out to understand better.
    Because of you people I learn a lot, and am still learning.

  39. Sandeep says:

    Hi Yogesh, excellent post.

    I am planning to migrate my old Solaris 8 servers to Solaris 10. Could you please suggest which procedure is best to follow: Live Upgrade or a fresh installation?

  40. Yogesh Raheja says:

    @Santosh, you are most welcome. :-)

  41. Yogesh Raheja says:

    @Sandeep, thanks for your words. I would suggest going with Live Upgrade. You would keep one disk with Solaris 8 and install Solaris 10 on the second disk using the LU utility. Test your apps/DB as per your environment and, if all goes well, delete the older BE with Solaris 8 and mirror your system with Solaris 10. Hope this helps.

  42. sandeep says:

    Hi Yogesh, thanks for your suggestion. I have a small doubt: do we need to install any patches before starting the Live Upgrade on a Solaris 8 server?

  43. Yogesh Raheja says:

    @Sandeep, no need to install any patches, because the Solaris 10 image will do the installation and install all the required patches/packages itself. But to be on the safe side, install the latest LU package, which I have stated in the post as well. Sometimes we face issues because of an older LU package version.

  44. Dev says:

    Hi Yogesh. Very good article. I need to know the backout procedure exactly. Is it correct that we will boot from the second disk if the patch fails?

  45. Yogesh Raheja says:

    @Dev, thanks for your comment. Yes, you need to boot from the second disk which we detached, then create the mirror/submirrors freshly on this disk and mirror it back with the root disk (i.e. bring the disk back under SVM).

  46. Dev says:

    Yogesh, thanks for your reply. I understand that we need to boot from the disk we detached. So you are saying we need to create the mirrors freshly again (meaning clear all the metadbs (replicas), create a new mirror for all slices, and attach it to the root disk – this root disk is the main disk, am I correct)?
    So without creating fresh mirrors, can't we directly attach the mirrors to the root disk (attaching from the disk we booted after the patch failed)? I am getting confused here :(

  47. Dev says:

    One more doubt: after successful installation of a kernel patch, will there be any log saved indicating the patch installation was successful? (I mean, like /var/adm/messages, will there be any file saved?)

  48. Yogesh Raheja says:

    @Dev, when you detach the root mirror disk it will be out of SVM, i.e. on the native raw device (the root disk will be under SVM and the detached mirror disk will be in raw (/dev/dsk/c#t#d#s#) format). So there is no question of deleting meta state database replicas from the detached mirror disk. Hope you get the point this time. Secondly, yes, you will be able to find the logs under the /var/sadm directory.

  49. Dev says:

    Thank you Yogesh, got the point. If possible please post the backout procedure. Waiting for your reply.

  50. Yogesh Raheja says:

    @Dev, sure will prepare it and post the same soon.

  51. Dev says:

    Thank you very much :) waiting for your post soon. I will keep in touch….

  52. rony says:

    Thank you Yogesh. But if we have a non-global zone configured with the root disk on SVM, what needs to be done in order to do successful patching?

  53. Yogesh Raheja says:

    @Rony, there are two options. Before proceeding, take a copy of the /etc/zones directory, which contains the files for the zones, which in turn hold the complete information (FS, IP, resources, etc.) for the zones. The first option: detach the mirror as already stated above and, before proceeding with the patching, shut down the zone or halt it (preferred would be init 0; the simplest way is to log in to the zone and type init 0). Start patching normally; every time a recommended reboot is required, the zone will boot up by reading its .xml file. PS: here all of the zone filesystems will mount and the booting process will take more time. But this is one of the options that can be used.

  54. Yogesh Raheja says:

    @Rony, the second option, which I personally prefer, is: edit the /etc/ file, removing all the zone-related filesystem entries, and save the file (make sure you have a backup of the /etc/zones directory). Then comment out (#) all the removed FS in /etc/vfstab (don't comment the /zones/hosts/ & /zones/filesystem/ filesystems, as these are required for booting the local zones – without these FS your local zone will remain in installed mode, so these two filesystems must be present while the system comes up). After that the steps remain the same as stated above. Once patching is complete, revert the file back and either reboot the system or manually mount the filesystems (don't forget to uncomment the zone-related filesystems in /etc/vfstab). Hope this helps. This is the method I usually prefer while patching servers that have many zones.
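The vfstab editing in the second option can be scripted rather than done by hand. This is a sketch against a throwaway copy: the device names and the /export/zones mount point are made-up examples, and on a real system you would always keep a backout copy of /etc/vfstab first.

```shell
#!/bin/sh
# Comment out zone-related filesystem lines in a throwaway copy of vfstab.
# The device names and the /export/zones mount point are hypothetical.
VFSTAB=$(mktemp)
cat > "$VFSTAB" <<'EOF'
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/dsk/c1t2d0s0 /dev/rdsk/c1t2d0s0 /export/zones/app1 ufs 2 yes -
EOF

cp -p "$VFSTAB" "$VFSTAB.pre-patch"   # keep a backout copy first

# prefix '#' on every line whose mount point lives under /export/zones
sed '/\/export\/zones\//s/^/#/' "$VFSTAB.pre-patch" > "$VFSTAB"

grep -c '^#' "$VFSTAB"   # prints: 1  (only the zone filesystem is commented)
```

Restoring the pre-patch copy afterwards is the "uncomment" step in one move.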

  55. rony says:

    Thanks Yogesh for replying. This type of patching is scheduled for me next week, so I was getting prepared for it. Also, can you post the patching procedure for a root disk mirror under VxVM? What steps to follow on Solaris 9?

  56. Ramesh says:

    Great work Yogesh, very good article. Waiting for your rollback doc.

  57. Yogesh Raheja says:

    @Rony/Ramesh, thanks. We will be preparing the rollback and the patching-under-VxVM procedures and will post them for you, champs! Thanks for your boosting comments. :-)

  58. CHERIF Malik says:

    Thanks Yogesh, another way to make kernel patching easier.
    Regards.

  59. krishna says:

    Superb Yogesh…Really appreciate if you post more to us.

  60. Yogesh Raheja says:

    @Krishna, thanks. We are trying our level best to carve some time out of our normal working hours to provide you as much information as we can. And I think Ram is doing that exceptionally well.

  61. krishna says:

    Hi Yogesh, can I have the patching procedure under VxVM?

  62. Yogesh Raheja says:

    @Krishna, sure, I will try to provide the same very soon.

  63. sekhar says:

    Hi Yogesh,

    The stuff above is good, and I would like to know the backout plan procedure.
    Will you please provide it?

  64. prajwala says:

    Thank you Yogesh.

  65. Yogesh Raheja says:

    @Sekhar/Prajwala, thanks for your interest at Gurkul. I will make sure you have a backout plan with all the details at our Gurkul very soon.

  66. hanu says:

    It's very interesting and very, very useful.
    Thanks, dude.

  67. Frank says:

    I have followed this strategy: reboot the system, then boot from disk2 to check the system boots up, and then just detach disk2 through metadetach. After that, boot from disk1 and go ahead and install the patch. Is there any problem with this?

  68. Ramdev says:

    @Frank – you are booting from disk2 while it is still part of SVM, and many times you won't catch any issues except the hard errors, because the vfstab still refers to the mirror devices.

    The whole purpose of the "disk2 boot test" is to make sure it has no boot/filesystem errors after detaching it from SVM.

  69. Abishek says:

    Do you have a procedure for VCS cluster node patching on Solaris? I have a Solaris environment with VCS and need to do Solaris patching.

    It will be a great help if you can provide the steps for the same.

  70. Ramdev says:

    Hi Abishek, unfortunately we don't have a ready procedure available for this task, but we can give guidelines. I am assuming you are using SVM for the root disk mirror.

    1. Offline patching:

    >> Stop VCS, disable the VCS startup scripts, split the mirror and test booting from the second disk.
    >> Patch the kernel either from multiuser mode or maintenance mode, as per the patch's readme instructions.
    >> Bring up both servers, start VCS manually and test the resource groups.
    >> If all looks good, enable the startup scripts (and optionally do one more sanity reboot to check that the cluster starts without trouble).
    >> Finally, reattach the mirrors.

    2. Online patching (a little risky):

    >> Move the service groups to one node and stop VCS on the second node.
    >> Prepare the second node for patching – disable the VCS startup script, split the root mirror, and test that the second disk boots properly.
    >> Patch the server and test with a reboot. Start up the cluster and check that you are able to switch the service groups over to the patched machine, one by one, without trouble.
    >> Once all are moved to the second node, enable the VCS startup script, then stop VCS on the first node and patch the first node following similar steps.
    >> Once patching is done and the server rebooted, balance the service groups as per the initial setup.
    >> If all looks good, attach the mirrors.
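The offline sequence above can be condensed into a dry-run runbook. The script only prints each step so it can be reviewed first; the metadevice names (d0/d20) and patch directory are hypothetical, and the exact VCS startup-script handling varies by release.

```shell
#!/bin/sh
# Dry-run sketch of the offline VCS patching outline. The script only
# prints each step; metadevice names and the patch directory are
# hypothetical placeholders.
run() { echo "WOULD RUN: $*"; }

run hastop -all                                # stop VCS on all nodes
run metadetach d0 d20                          # split the root mirror
run "boot from the detached disk"              # test the second disk boots cleanly
run patchadd -M /var/tmp/patches patch_order   # apply the kernel patches
run init 6                                     # reboot the patched node
run hastart                                    # start VCS manually, test groups
run metattach d0 d20                           # finally, reattach the mirror
```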

  71. Chetan says:

    Hi Ramdev Sir / Gurkul Team, this doc is very useful, but my query is different. Sir, please upload some info about VERITAS NetBackup, at least the basics, to crack interviews.

    Many thanks, Chetan

  72. Ramdev says:

    @Chetan,  So far whatever we have posted here is what we are good at.  For Netbackup stuff, I am waiting for some experts to join our team. You will soon see some stuff about that.

  73. Yogesh Raheja says:

    @Chetan, if you want, I can assist you with backups and restores in NetBackup, along with some upgrades, installations (client side) and the main config files.

  74. Yogesh Raheja says:

    @Abishek, how many cluster nodes are you using in your environment? I think the steps provided by Ram are very clear (offline, with full downtime (recommended), and online by failing over the SGs to the other node and then patching one node at a time). I will try my level best to post patching steps under VCS soon.

  75. Nithin says:

    Well-prepared writing, Yogesh. Thanks a lot and keep posting!

  76. Chetan says:

    Hi Gurkul Team, thanks for the helping hand, but I faced a different problem after the KJP on a V490 server. After the KJP I am unable to send mail from the server:

    Oct 10 06:40:50 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
    Oct 10 06:40:55 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
    Oct 10 06:40:55 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
    Oct 10 06:41:00 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
    Oct 10 06:41:00 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
    Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
    Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
    Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.alert] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: server SMTP socket wedged: exiting
    Oct 10 06:41:05 RETCRMAPP3 svc.startd[8]: [ID 748625 daemon.error] network/smtp:sendmail failed repeatedly: transitioned to maintenance (see 'svcs -xv' for details)

    The svc:/network/smtp:sendmail service is in maintenance mode; I restarted it but the problem is still the same.

  77. Yogesh Raheja says:

    @Nithin, thanks a lot :).

  78. Yogesh Raheja says:

    @Chetan, the patching changes the parameter called DS in /etc/mail/sendmail.cf. Revert it to the older value and restart sendmail. (grep '^DS' /etc/mail/sendmail.cf – it will return an empty value after patching.) Replace this DS field with the older value. I think the OS keeps a backup copy of sendmail.cf at the same location too.
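The DS check can be tried against throwaway copies of the file. In this sketch 'mailhost.example.com' is a hypothetical relay name; on a real system you would diff the pre-patch backup against /etc/mail/sendmail.cf.

```shell
#!/bin/sh
# Compare the DS (smart host) line in pre- and post-patch copies of
# sendmail.cf. 'mailhost.example.com' is a hypothetical relay name.
PRE=$(mktemp); POST=$(mktemp)
echo 'DSmailhost.example.com' > "$PRE"   # pre-patch: relay host configured
echo 'DS'                     > "$POST"  # post-patch: value wiped by the patch

grep '^DS' "$PRE"    # prints: DSmailhost.example.com
grep '^DS' "$POST"   # prints: DS  (empty value, so outbound mail breaks)
```

An empty DS after patching is the symptom to look for; copying the old DS line back and restarting sendmail restores relaying.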

  79. Chetan says:

    Hi, many thanks for the reply, but the issue is resolved now: disabling the client service and restarting sendmail resolved the problem.

  80. Shahul says:

    Wow, nice article Yogesh. I badly need the VxVM root disk patching and rollback stuff. Could you guide me?

  81. Yogesh Raheja says:

    @Chetan, that's nice to hear.

  82. Yogesh Raheja says:

    @Shahul, I will try to post the same soon.

  83. Ryan says:

    Hey guys, I have a host running Sol 10. It has several containers, but one of the containers is Sol 9. I used the traditional method of patching, but I am now going to convert the system to ZFS and then use Live Upgrade to patch it. I'm wondering if the Sol 9 container is going to cause problems, or is there a way to get around the Sol 9 container?

  84. RAMA RAO VASIMALLA says:

    Very useful to all.

  85. Manikandan says:

    Thanks

  86. vijay says:

    How is patching done on ZFS?

  87. vaibhav kanchan says:

    Hi Yogesh,

    Thanks for this useful post. What are the steps I need to perform if I have to patch a server with 2-3 zones?

    Do we need to detach and attach the zones after the patch is applied to the global zone, i.e. the base server?

    Could you please share your views on it.

  88. Debiprosad says:

    Yogesh, nice doc – crystal clear, especially for folks who are well versed in other Unix OSes but not so much on Solaris.

  89. Harish says:

    @Yogesh, could you please post the procedure for kernel patching on Sun Cluster nodes?

