General Procedure for Kernel Patching in Solaris
We have been getting multiple requests for the Solaris kernel patching procedure from many of our Gurkul followers. Here we go with all the steps for all of them.
I am presenting the simple patching procedure for the case when our disks are under Solaris Volume Manager (SVM) control. No zones and no clusters are considered in this example. The basic procedure for zones/clusters also remains the same, but some additional outputs need to be taken in those cases.
1.) Download the bundle patch (the Recommended Solaris patch cluster) from www.support.oracle.com (it needs your Oracle support login credentials).
This patch cluster comes as a zip file; copy it to your server and unzip it where you have sufficient space.
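For example (assuming the bundle was copied to /var/tmp; the zip file name is illustrative, so adjust to the actual file you downloaded):
# cd /var/tmp
# unzip 10_Recommended.zip
# ls 10_Recommended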
2.) Take the necessary backups of the server (create a directory under /var/tmp and keep the outputs on the server as well).
Take backups of uname -a –> (will give your current kernel patch level), df -k, metastat -p, metadb -i, netstat -nr, ifconfig -a, prtconf, prtdiag -v, metastat -ac (Sol 10), metastat -t, eeprom, echo | format, etc.
[Better make one script for yourself and run that before proceeding with any activity on any server; I used to follow this as my best practice :). A minimal example is sketched below.]
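A minimal sketch of such a pre-activity script, assuming /var/tmp/precheck.<date> as the output directory (adjust commands and paths to your environment; on some releases prtdiag lives under /usr/platform/`uname -i`/sbin):
#!/bin/sh
# precheck.sh - capture system state before maintenance (illustrative sketch)
DIR=/var/tmp/precheck.`date +%Y%m%d`
mkdir -p $DIR
for CMD in "uname -a" "df -k" "metastat -p" "metadb -i" "netstat -nr" \
           "ifconfig -a" "prtconf" "prtdiag -v" "metastat -ac" "metastat -t" "eeprom"
do
  echo "### $CMD" >> $DIR/precheck.out
  $CMD >> $DIR/precheck.out 2>&1
done
echo | format > $DIR/format.out 2>&1
cp -p /etc/vfstab /etc/system $DIR/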
Make a habit of running Sun Explorer before proceeding with any activity in Solaris.
3.) Identify the rootdisk and rootmirror disk with metastat -p or df -k or metastat -ac (-ac only works in Solaris 10).
4.) Don't forget to take backups of /etc/vfstab and /etc/system.
Note: Take the outputs before detaching the mirror, else you would have to take the backups separately for your detached disk.
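For example (destination names are illustrative; use the backup directory you created in step 2):
cp -p /etc/vfstab /var/tmp/vfstab.`date +%Y%m%d`
cp -p /etc/system /var/tmp/system.`date +%Y%m%d`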
5.) Break the mirror, i.e. isolate one disk (usually the rootmirror, as best practice).
First, delete the state database replicas for your rootmirror disk.
metadb -i
metadb -d /dev/rdsk/<slicename>
Proceed with the mirror detach for your root-mirror disk.
metastat -p
metadetach <mirror> <submirror on the second disk>
metaclear <submirror>
Repeat the same for all submirrors of the rootmirror disk.
Check the outputs by metastat -ac, metastat -p, metastat -t and metadb -i
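A worked example for the root (/) mirror only; the metadevice names d0/d20 and the disk c0t1d0 are illustrative, so substitute your own from metastat -p and metadb -i:
metadb -i
metadb -d c0t1d0s7          (the replicas sitting on the rootmirror disk)
metastat -p
metadetach d0 d20           (d0 = root mirror, d20 = submirror on the rootmirror disk)
metaclear d20
Repeat metadetach/metaclear for the swap, /var, etc. submirrors, then verify again with metastat -p and metadb -i.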
6.) Mount your detached disk (rootmirror) on mountpoint /mnt and edit /mnt/etc/vfstab & /mnt/etc/system.
mount /dev/dsk/<rootmirror disk slice 0> /mnt
vi /mnt/etc/vfstab
change all md (SVM) entries to the corresponding native devices, e.g.:
/dev/md/dsk/d0 —> /dev/dsk/<rootmirror disk with corresponding slice>
&
/dev/md/rdsk/d0 —> /dev/rdsk/<rootmirror disk with corresponding slice>
e.g.: This is only for one slice; repeat the same for all slices of your rootmirror disk.
yogesh-test#grep -i /dev/md/dsk/d0 /mnt/etc/vfstab
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no logging
yogesh-test#vi /mnt/etc/vfstab
/dev/dsk/c0t1d0s0 /dev/rdsk/c0t1d0s0 / ufs 1 no logging
grep -i md /mnt/etc/system (to check the SVM-related entries in the system file, which we need to comment out in order to bring the mirror disk out of SVM).
yogesh-test#grep -i md /mnt/etc/system
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk ——————-> (comment it out with star)
* End MDD root info (do not edit)
After the changes, the system file should look as shown below for the rootmirror disk.
yogesh-test#grep -i md /mnt/etc/system
* Begin MDD root info (do not edit)
* rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
7.) Un-mount your root-mirror disk.
umount /mnt
NOTE: 1.) Please keep in mind that all changes are applicable to the rootmirror disk. Do not make any changes on the other disk (i.e. the root-disk).
2.) It is recommended to run fsck on your rootmirror disk (slice 0, the root FS slice) so that it won't give any errors while booting (see the example after these notes).
3.) Before proceeding with the patching, make sure that the /var FS has enough free space, as all logs will be written there; otherwise you will get errors while patching.
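For example (the device name is illustrative; use the root slice of your detached rootmirror disk):
fsck -y /dev/rdsk/c0t1d0s0
df -k /var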
8.) Now your rootdisk is under SVM control and your rootmirror disk is a native raw Solaris disk. By doing this we make sure that if anything goes against our action plan, we will be able to recover the server from the other disk.
9.) We have to test booting the server from both disks to make sure that the server will come up from either disk without any issues.
# init 0
At OK prompt
# devalias
# boot <rootmirror disk alias>    (if no alias exists, create one using nvalias <aliasname> <diskpath>, as we have already taken the backup of echo | format; an nvalias example is sketched after this step)
Once the server has booted from the rootmirror disk, do the post-checks like df -k, ifconfig, etc.
Now boot the server from rootdisk as well.
#init 0
#devalias
#boot <rootdisk>
Once booted from rootdisk, do the post check like df -k, ifconfig etc..
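If no OBP alias exists for the disk, a sketch of creating one (the alias name and device path are illustrative; take the real device path from your echo | format / ls -l /dev/dsk output):
ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@1,0:a
ok boot rootmirror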
10.) We are sure at this point of time that our server is coming up from both the disks.
Here we go with the patching now: boot your server in single user mode from your root-disk.
# init 0
# devalias
# boot rootdisk -s
# mount the slice/filesystem where the unzipped bundle patch is placed
# go through the README file as well
# run the installpatchset script with the passcode to start the bundle patching
########### EXAMPLE: Bundle Patch Installation ##############
# Run the installpatchset script.
# cd 10_Recommended
# ./installpatchset --s10patchset          (a passcode is required for script execution, which you can find in the Cluster Patch README file)
Setup …..
Recommended OS Patchset Solaris 10 SPARC (2011.06.22)
The patch set will complete installation in this session. No intermediate reboots are required.
Application of patches started : 2011.06.23 17:48:26
Applying 120900-04 ( 1 of 344) … skipped
Applying 121133-02 ( 2 of 344) … skipped
Applying 119254-81 ( 3 of 344) … skipped
.
.
.
Applying 146861-01 (342 of 344) … skipped
Applying 147182-01 (343 of 344) … success
Applying 147227-01 (344 of 344) … success
######################################################################################################
11.) It will take some hours (depending upon your previous kernel patch level). Once the patching is completed, go for the reconfiguration reboot (reboot -- -r, or init 0 and boot -r).
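For example, any of these reconfiguration-reboot variants will do (the touch /reconfigure form is the one also discussed in the comments below):
# reboot -- -r
or
# touch /reconfigure ; init 6
or
# init 0
ok boot -r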
12.) Once the server comes up, do the post-checks and compare with the outputs you took earlier to be sure about the success.
Check the latest kernel patch level with “uname -a” and you are done with the kernel patching :)
Note: If your server runs a database & applications:
1.) Before proceeding with the patching, make sure your applications & databases are down.
2.) After successful completion, ask your database and application teams to test/check their applications.
3.) Do not re-mirror the rootmirror disk immediately, wait for 5 to 7 days to notice any issues with the OS or application. If all is good then remirror the disks for redundancy.
4.) Shutdown —> Applications first –> DB second –> then proceed with OS activity.
Startup —> OS first –> DB second –> Application at last.
5.) There will be some differences in the procedure if zones are present (you need to take a backup of the .xml files, etc.) or if the server is in a cluster (you have to take some serious precautions while patching cluster nodes, whether VCS or Sun Cluster).
Special Note: 1.) Always change the value of auto-boot? to false using the eeprom command whenever you are proceeding with any upgrade, to avoid your system booting automatically from the other disk/media. After completion of the activity, change its value back to true.
yogesh-test# eeprom | grep -i auto*
auto-boot?=true
yogesh-test# eeprom auto-boot?=false
yogesh-test# eeprom | grep -i auto*
auto-boot?=false
2.) I will come up with the rollback plan (if any activity fails and you have your one disk preserved) when I post on SVM (Solaris Volume Manager), because in the rollback you need to boot from the rootmirror disk, create all the SVM mirrors back on the rootmirror disk from scratch, and then re-mirror the rootdisk back with the rootmirror disk.
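A very rough sketch of that rollback idea for the root slice only (not the full procedure promised above; the metadevice names d0/d10/d20 and disks c0t0d0/c0t1d0 are illustrative):
ok boot rootmirror                 (boot from the preserved, un-patched disk)
# metadb -a -f -c3 c0t1d0s7        (recreate state database replicas)
# metainit -f d20 1 1 c0t1d0s0
# metainit d0 -m d20
# metaroot d0                      (fixes /etc/vfstab and /etc/system)
# init 6
# metainit -f d10 1 1 c0t0d0s0
# metattach d0 d10                 (re-mirror the old rootdisk back in)
Repeat the metainit/metattach part for the remaining slices.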
Hope this will help all Unix champs who have been waiting for this for many days. :)
@Yogesh, nothing better than this. Wow…crystal clear. You rock man..Please keep posted with these wonderful sharing of yours with us. Hatsoff to your efforts.
@Shailender, thank you very much for your words..Yes we will keep posted upto our best level…:)
@Yogesh, Awesome piece of patching document from this wonderful guy. I really salute your efforts for this free knowledge sharing. You are truely Rocking Man.One day you will be among the shining stars of Solaris experts.
@Rajnesh, your words are highly motivating us. Thanks again for your interest in our posts.
@yogesh – very good and useful stuff .
@Ram, Thank you very much!..:-)
You might want to look into liveupgrade. It has a built in feature for splitting the mirrors of an SVM mirrored root (see detach, attach, preserve options on lucreate), and it allows you to do all of this patching without taking a long outage. The only outage is for the reboot after the patching is complete. The general outline of the procedure is: 1) lucreate 2) luupgrade (to add the patches), or the patch bundle/cluster now has a built in option to patch a BE 3) luactivate (to set the patched BE as the one to boot) 4) init 6
Once you move to ZFS root instead of UFS root with SVM, it gets even easier because lucreate will automatically use ZFS snapshots and eliminates the need for splitting mirrors.
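A rough sketch of that LU flow, using the patchset's built-in option to patch an alternate BE (the BE name is illustrative; check the patchset README for the exact installpatchset options on your release):
# lucreate -n patchedBE
# cd 10_Recommended
# ./installpatchset --apply-prereq --s10patchset
# ./installpatchset -B patchedBE --s10patchset
# luactivate patchedBE
# init 6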
@Brian – agree with you. This post is for those sysadmins who are still supporting old Solaris environments. Thanks for your comment; please feel free to comment on other posts if you find anything useful.
Great one man. Thanks for this.
Ok. Live Upgrade should work back as far as Solaris 8.
For anyone using this procedure, at step 11, you should not use the reboot command. It bypasses some of the normal shutdown procedure. Sun/Oracle recommends using init or shutdown. A reconfigure reboot is triggered by either: touch /reconfigure ; init 6 OR “init 0” followed by “boot -r”. Since you are in single user mode, most of the time, the use of “reboot” won’t cause you any problems, but it’s still not the recommended way.
@Brian – we found "reboot -- -r" riskless and fast to use in single user mode, when the volume of servers we work on in a weekend is in the 100s. The other two options (touch /reconfigure and init 6, or boot -r) we normally use when we have maintenance that requires a reboot from multiuser mode. Anyway, we appreciate your point about safety.
@Brian, I appreciate your interest in our posts. We have posted this patching procedure for beginners so that they will find kernel patching as simple as possible. Live Upgrade is the method we will bring in a post in the near future, as I am planning to post (OS patching / release upgrade / OS upgrade) via LU. But before moving to that, we would like to present all the major activities in a much simpler way, so that if a beginner goes through our posts he/she will find all the concepts there, which makes a step-by-step approach easy for them.
@Kishor, you are most welcome.
@Brian, in step 11 your point is valid if we are executing only the "reboot" command, which will bypass all rc scripts (i.e. the recommended shutdown procedure). But here we are executing a reconfig reboot (i.e. reboot -- -r). Also, I agree with Ram that at the same time it is riskless. Moreover, if you read the README file for any patch that needs a reconfig reboot, you will see all the options (boot -r, touch /reconfigure, reboot -- -r), which means all are safe to use. I also follow the same command while doing patching/upgrades via LU. Thanks for your suggestion; kindly keep us posted with all your suggestions.
Good stuff, Yogesh. We can take Explorer output also before doing it. Thank you.
@Bas, yes I forgot to mention that. Its best practice to run explorer before doing any activity. Thank you..
Excellent procedure, congrats. Just a suggestion: instead of taking a config snapshot with an ad-hoc script, use Explorer, this would capture it all.
@Daniel, thanks for your words. Yes, explorer is must to run before proceeding with any activity. I will amend this in special notes.
Instead of modifying the root slices in vfstab & /etc/system you can use the metaroot command with the -R switch
i.e. metaroot -R /mnt /dev/dsk/c0t1d0s0
This will fix the /etc/system & vfstab entries in one easy step. I agree with the comment about liveupgrade.
Thanks for the info. Kevin.
@kevin – that’s nice tip. Thanks.
Hi,
Keep it up, nice break up for patching procedure for those who are at basic level or want to learn the things from a basic level.
The procedure given above is still being followed by many small scale setups and as I said this information gives good insights to patching topic of Solaris.
I would highly recommend & prefer using Live Upgrade for patching on any Solaris version (8, 9 & 10) along with patching tool called PCA.
Benefits using Live Upgrade –
1. As rightly said by Brian, build-in feature for splitting the mirrors of an SVM mirrored root (see detach, attach, preserve options on lucreate)
e.g –
# lucreate -c Sol10 -n Sol10p -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/md/dsk/d2:detach,attach -m /var:/dev/md/dsk/d13:ufs,mirror -m /var:/dev/md/dsk/d3:detach,attach
2. Less downtime (not more than 15-20 mins)
3. Better back out option. In case something cribs after patching revert to alternate BE, again that doesn’t take much downtime – same as above
4. Appropriate option for those Solaris boxes who have zones installed on it.
Thanks,
Nilesh
@Nilesh, thanks for the comments. I totally agree with your point that LU is the method which requires a single reboot to activate the ABE (alternate boot environment).
Thanks Yogesh for the wonderful article. pls post detail procedure for Live Upgrade.
You are welcome @Vishvas, surely I will post the same post via LU. :-)
OS patching Solaris 10:
1. Post mirror break we can also run bootadm update-archive -v -R /mnt (mirror disk).
If the /var file system is full, we can run the following.
Make a backup copy:
# find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 -exec ls -l {} \;
# find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 | cpio -pdv /net/my4500/backout/`uname -n`
Free up space in /var:
# find /var/sadm/pkg \( -name undo.Z -o -name obsolete.Z \) -mtime +300 | xargs rm -f
@Sreekanth, thanks for your comments. Firstly: bootadm is primarily for the x86 architecture and will indeed update the boot archive on the server, but yes, we can use it on SPARC machines as well. Secondly, you can also delete these undo.Z & obsolete.Z files, but normally you would delete some old log files to clear up the space.
Excellent patching procedure!!!!!!!! It is very useful not only for beginners but for experts as well at times.
expecting the same for OS Live upgradation as well
@Raj, Thanks for your words. The expectations mentioned will be converted into reality soon :)
@Yogesh , It’s really very useful stuff. Thank you very much for this article.
@Ammiraju, thank you very much!!!…If I am correct then you are Ammiraju Attauri. :)
Yes. You are always correct and smart.
@Ammiraju, Nice to see you on our Gurkulindia and thanks for your interest in our posts.
@Yogesh gud stuff to do patching
@Sunil, thanks mate!!..
Hi Yogesh,
thanks for the excellent post, it helped me to understand a lot
a small doubt: in case we are using VxVM on our server, which files need to be backed up?
in this post you showed backup of SVM files,
and also
How to break VxVM mirror?
and in case of VxVM what would be the 6, 8, 9 steps?
@Santosh, we are preparing a post for the same and will publish it very soon. That will give you a step-by-step approach for VxVM.
Yogesh, I have a small doubt..
While kernel patching we are taking the backup of the mirror disk and the root disk, so what is the purpose of steps 6 & 7?
And also, why test-boot with both the disks separately, i.e. step 9?
Sorry if this is a small doubt, I am new to this course so I am just learning all the things, that's it.
@santhosh – consider the situation where the second disk detached from the root mirror fails, either by human error or by disk block errors. An extra full backup will save you from such unexpected hardware issues during patching/upgrades. About the second disk booting: sometimes the second disk will show symptoms of boot block errors after detaching from the root mirror. We want to make sure that we don't have that situation.
@Santosh, the backout plan in any activity is much more important than the activity itself, because any downtime after the failure of the activity can be avoided if the backout plan and the testing prior to the activity have been performed perfectly. That's the reason the second disk is detached and kept safe: to avoid any downtime in case of failure. Also, Ram has explained the purpose of the second disk.
Thank you so much Yogesh & Ramdev..you people ROCK..
thanks for helping me out to understand better..
and because of you people i learn a lot, and still learning..
Hi Yogesh, excellent post.
I am planning to migrate my old solaris 8 servers to solaris 10. Could you please suggest me which procedure is best to follow Live upgradation or New Installation ?
@Santosh, you are most welcome. :-)
@Sandeep, thanks for your words. I would suggest going with Live Upgrade. You would have one disk having Sol 8 and you can install Sol 10 on the second disk using the LU utility. Test your apps/DB as per your environment, and if all goes in your favour, delete the older BE having Sol 8 and mirror your system having Sol 10. Hope this helps.
Hi Yogesh, thanks for your suggestion. I have a small doubt… do we need to install any patches before starting the Live Upgrade on the Solaris 8 server?
@Sandeep, no need to install any patches, because the Sol 10 image will do the installation and install all the required patches/packages itself. But to be on the safer side, install the latest LU package, as I have stated in the post. Sometimes we face issues because of an older LU package version.
Hi yogesh. Very good article. I need to know the back out procedure exactly. is it correct that we will boot with the second disk if patch fails?
@Dev, thanks for your comment. Yes, you need to boot from the second disk which we detached, create the mirror/submirrors freshly on this disk, and then mirror it back with the root disk (i.e. take the disk back under SVM).
Yogesh, thanks for your reply. I understand that we need to boot from the disk we have detached. So you are saying that we need to create the mirrors again freshly (meaning clear all metadbs (replicas), create a new mirror for all slices again and attach it to the root disk; this root disk is the main disk, am I correct)?
So without creating fresh mirrors, can't we directly attach the mirrors to the root disk (attaching from the disk we booted from after the patch failed)? I am getting confused a lot here :(
One more doubt: after successful installation of the kernel patch, will there be any log saved showing that the patch installation was successful? (I mean, like /var/adm/messages, will there be any file saved?)
@Dev, when you detach the root mirror disk it will be out of SVM, i.e. a native raw device (the root disk will be under SVM and the detached mirror disk will be in raw (/dev/dsk/c#t#d#s#) format). So there is no question of deleting meta state database replicas from the detached mirror disk. Hope you get the point this time. Secondly, yes, you will be able to find the logs under the /var/sadm directory.
Thank you Yogesh, got the point. If possible please post the backout procedure. Waiting for your reply.
@Dev, sure will prepare it and post the same soon.
Thank you very much :) waiting for your post soon. I will keep in touch….
Thank you Yogesh. But if we have a non-global zone configured, with the root disk on SVM, what needs to be done in order to do successful patching?
@Rony, there would be two options. Before proceeding, take a copy of the /etc/zones directory, which contains the files for the zones and holds the complete information (FS, IP, resources, etc.) for the zones. The first option: detach the mirror as already stated above, and before proceeding with the patching, shut down the zone or halt it (preferred would be init 0; the simplest way is to log in to the zone and type init 0). Start patching normally; every time the recommended reboot is required, the zone will boot up by reading its .xml file. PS: here all of the zones' filesystems will mount and the booting process will take more time. But this is one of the options which can be used.
@Rony, the second option, which I personally prefer, is: edit the /etc/ file, remove all the zones-related filesystem entries and save the file (make sure you have the backup of the /etc/zones directory). Now hash (#) out all the removed FS entries in /etc/vfstab (don't hash the /zones/hosts/ & /zones/filesystem/ filesystems, as these are required for the booting of the local zones; without these FS your local zone will remain in installed mode, so these two filesystems must be present while the system comes up). After that the steps remain the same as stated above. Once patching is complete, revert the file back and either reboot the system or manually mount the filesystems (don't forget to un-hash the zones-related filesystems in /etc/vfstab). Hope this helps. This is the method which I usually prefer while doing patching on servers having many zones.
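A few commands that may help with the zone handling described above (the zone name is illustrative):
# cp -rp /etc/zones /var/tmp/zones.backup.`date +%Y%m%d`
# zoneadm list -cv                  (record the state of each zone before starting)
# zlogin myzone init 0              (shut the zone down cleanly from the global zone)
# zoneadm list -cv                  (the zone should now show installed rather than running)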
Thanks Yogesh for replying. This type of patching is scheduled for me next week, so I was getting prepared for it. Also, can you post the patching procedure for a root disk mirror under VxVM? What are all the steps to follow on Solaris 9?
Great work Yogesh, very good article … waiting for your rollback doc ..
@Rony/Ramesh, thanks, we will be preparing the rollback and patching under VXVM and will post them for you champs!!…thanks for your boosting comments..:-)
Thx Yogesh, another way to make kernel patching easier.
Regards.
Superb Yogesh…Really appreciate if you post more to us.
@Krishna, thanks. We are trying our level best to get some time out of our Normal working hours to provide you as much information as we can. And I think Ram is doing that exceptionally well.
Hi Yogesh…Can I have patching procedure in VxVm.
@Krishna, sure I will try to provide you same very soon.
hi yogesh,
above stuff is good. and i would like to know the backout plan procedure.
will you please provide it.
thanq yogesh
@Sekhar/Prajwala, thanks for you interest at gurkul. I will make sure that you would have backout plan with all details at our gurkul very soon.
its very interesting………………
and very very usefull …………………..
thanks dude
i have followed this strategy. Reboot the system then boot from disk2 to check system boots up & then just detach disk2 thru metadetach. After that boot from disk1 & go ahead & install the patch. Is there any problem on this?
@Frank – you are booting from disk2 while it is still part of SVM, and many times you won't catch any issues except the hard errors, because the vfstab is still referring to the mirror devices.
The whole purpose of the "disk2 boot test" is to make sure it has no boot/filesystem errors after detaching it from SVM.
Do you have a procedure for VCS cluster node patching on Solaris? I have a Solaris environment with VCS and need to do Solaris patching.
it will be great help if you can provide the steps for the same.
Hi Abishek, unfortunately we don't have a ready procedure available for this task, but we can give guidelines to perform it. I am assuming you are using SVM for the root disk mirror.
1. Offline Patching:
>> Stop VCS, disable the VCS startup scripts, split the mirror and test booting from the second disk.
>> Patch the kernel either from multiuser mode or maintenance mode as per the patch's README instructions.
>> Bring up both servers, start VCS manually and test the resource groups.
>> If all looks good, enable the startup scripts (and optionally do one more sanity reboot to check that the cluster starts without trouble).
>> Finally, reattach the mirrors.
2. Online patching (a little risky):
>> Move the service groups to one node and stop VCS on the second node.
>> Prepare the second node for patching: disable the VCS startup script, split the root mirror, and test that the second disk boots properly.
>> Patch the server and test with a reboot. Then start up the cluster and check that you are able to switch over the service groups to the patched machine, one by one, without trouble.
>> Once all groups have moved to the second node, enable the VCS startup script, then stop VCS on the first node and patch the first node using the same steps as above.
>> Once patching is done and the server is rebooted, balance the service groups as per the initial setup.
>> If all looks good, then attach the mirrors.
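A few VCS commands that map to the guidelines above (the group and node names are illustrative; the startup script path varies with the VCS version, so verify it in your environment):
# hastatus -sum                               (check cluster and service group state)
# hagrp -switch mySG -to node2                (move a service group to the other node)
# hastop -local -evacuate                     (stop VCS on this node and evacuate its groups)
# mv /etc/rc3.d/S99vcs /etc/rc3.d/s99vcs      (one common way to disable the VCS startup script)
# hastart                                     (start VCS manually after patching)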
Hi Ramdev Sir / Gurkul Team, this doc is very useful but my query is different. Sir, please upload some info about "VERITAS NETBACKUP", at least the basics to crack the interview…
Many Thanks, Chetan
@Chetan, So far whatever we have posted here is what we are good at. For Netbackup stuff, I am waiting for some experts to join our team. You will soon see some stuff about that.
@Chetan, if you want I can assist you with the backups and restores in netbackup and some upgrades, installations (client side) and main config. files.
@Abishek, how many cluster nodes are you using in your environment? I think the steps provided by Ram are very clear (offline, with full downtime (recommended), and online by failing over the SGs to the other node and then patching one by one). I will try my level best to post the patching steps under VCS soon.
Well prepared writing Yogesh, thanks a lot and keep posting…!
Hi Gurkul Team, thanks for the helping hand, but I faced a different problem after KJP on a V490 server. After KJP I am unable to send mail from the server:
Oct 10 06:40:50 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
Oct 10 06:40:55 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
Oct 10 06:40:55 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
Oct 10 06:41:00 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
Oct 10 06:41:00 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.crit] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: cannot bind: Cannot assign requested address
Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 702911 mail.alert] daemon MTA-v6: problem creating SMTP socket
Oct 10 06:41:05 RETCRMAPP3 sendmail[1126]: [ID 801593 mail.alert] NOQUEUE: SYSERR(root): opendaemonsocket: daemon MTA-v6: server SMTP socket wedged: exiting
Oct 10 06:41:05 RETCRMAPP3 svc.startd[8]: [ID 748625 daemon.error] network/smtp:sendmail failed repeatedly: transitioned to maintenance (see ‘svcs -xv’ for details)
The “svc:/network/smtp:sendmail” service is in maintenance mode. I restarted it but the problem is still the same.
@Nithin, thanks a lot :).
@Chetan, the patching will change the parameter called DS in /etc/mail/sendmail.cf; revert it to the older one and restart sendmail. (grep -i DS /etc/mail/sendmail.cf) should return an empty value after patching. Replace this DS field with the older value. I think the OS keeps a copy of sendmail.cf at the same location too.
Hi, many thanks for the reply, but the issue is resolved now… I disabled the client service & restarted sendmail, and the problem is resolved.
Wow Nice article Yogesh..I am badly in need of vxvm root disk patching and roll-back stuff..Could you guide me ?
@Chetan, that's nice to hear.
@Shahul, I will try to post the same soon.
Hey guys, I have a host running Sol 10. It has several containers, but one of the containers is Sol 9. I used the traditional method of patching but I am now going to convert the system to ZFS and then use Live Upgrade to patch it. I'm wondering if the Sol 9 container is going to cause problems, or is there a way to get around the Sol 9 container?
Very useful to all..
Thanks
How is patching done in ZFS ?
Hi Yogesh,
Thx for this useful post. What are the steps I need to perform if I have to patch a server having 2-3 zones?
Do we need to detach and attach the zones after the patch is applied to the global zone, i.e. the base server?
Could you please share your views on it.
Hi Vaibhav, Yogesh already posted a procedure for that http://gurkulindia.com/main/2011/10/solaris-10-patching-solaris-10-on-servers-with-non-global-zones/
Yogesh………….nice doc…………crystal clear specially for folks who are well versed in other Unix OS but not on Solaris that much………..
@ Yogesh, Could you please post the procedure for kernel patching on SUN cluster nodes?