
Check Status of Hardware RAID connected to LSI and Adaptec Storage Controllers

Verification of hardware RAID status can be performed either from the BIOS (during boot) or from the OS while the server is running. The procedures below explain both methods.

Verification of RAID status for arrays connected to an LSI controller


From the BIOS, during booting:

To enter the setup/configuration utility for hardware RAID, press CTRL+C while the system performs its initial hardware checks after power-on; watch carefully for the on-screen message announcing the configuration utility.

Once inside the setup utility, choose the "RAID Properties" option; you will see something like the following:

LSI Logic MPT Setup Utility v6.02.00.00 (2005.07.08)

View Array -- LSI
Array 1 of 1

Identifier      LSILOGICLogical Volume 3000
Type            IM
Scan Order      2
Size(MB)        69618
Status          Degraded

Manage Array

Scan  Device Identifier              RAID  Hot  Drive    Pred  Size
ID                                   Disk  Spr  Status   Fail  (MB)
0     FUJITSU MAV2073RCSUN72G 0301   Yes   No   Primary  No    69618
1     FUJITSU MAV2073RCSUN72G 0301   Yes   No   Failed   No

Note that the "Status" of the array is "Degraded" and the disk with "ID" 1 has a "Drive Status" of "Failed".


Checking LSI RAID status from the Solaris Operating System (provided the RAID manager is already installed on the server)


Platforms running Solaris 10 Update 4 or later produce output similar to the following when configured under LSI hardware RAID management:

Normal Output:

# /usr/sbin/raidctl -l

Controller: 0
Volume:c1t0d0
Disk: 0.0.0
Disk: 0.1.0

In addition, details for a specific volume can be listed by running the command as follows:

# /usr/sbin/raidctl -l c1t0d0

Volume                  Size    Stripe  Status    Cache  RAID
        Sub                     Size                     Level
                Disk
----------------------------------------------------------------
c1t0d0                  68.3G   N/A     OPTIMAL   N/A    RAID1
                0.0.0   68.3G           GOOD
                0.1.0   68.3G           GOOD

Sample output for a volume with a failed disk (the top-level "/usr/sbin/raidctl -l" listing is unchanged; the failure shows up in the per-volume detail):

# /usr/sbin/raidctl -l c1t0d0

Volume                  Size    Stripe  Status    Cache  RAID
        Sub                     Size                     Level
                Disk
----------------------------------------------------------------
c1t0d0                  68.3G   N/A     DEGRADED  N/A    RAID1
                0.0.0   68.3G           GOOD
                0.1.0   68.3G           FAILED

Note that the “Status” of the “Volume” “c1t0d0” is “DEGRADED” and the “Sub Disk” “0.1.0” has a “Status” of “FAILED”.
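For routine monitoring, the same raidctl checks can be scripted. The following is a minimal sketch, not from the original post: it assumes /usr/sbin/raidctl, the "Volume:" listing format, and the volume status appearing in the fourth column of the per-volume output, as in the examples above.

#!/bin/sh
# check_lsi_raid.sh - minimal sketch: warn when any LSI volume is not OPTIMAL.
# Assumes Solaris 10U4 or later with /usr/sbin/raidctl, and the output layout
# shown above (volume status in the 4th whitespace-separated column).

RAIDCTL=/usr/sbin/raidctl

# "raidctl -l" prints lines such as "Volume:c1t0d0"; extract the volume names.
for vol in `$RAIDCTL -l | sed -n 's/^Volume:\(.*\)$/\1/p'`
do
    status=`$RAIDCTL -l $vol | grep "^$vol" | awk '{print $4}'`
    if [ "$status" != "OPTIMAL" ]; then
        echo "WARNING: volume $vol status is $status"
    fi
done

Run from cron, a script like this can surface a DEGRADED mirror before the second disk fails as well.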

Verification of RAID status for arrays connected to Adaptec / Sun STK based controllers

From BIOS, during booting:

To verify disk failures within a RAID array on an Adaptec / Sun STK based controller, view the Adaptec RAID BIOS at boot time.

During boot, you will see messages similar to the following:

Adaptec RAID BIOS V5.3-0 [Build 16732]
Press <Ctrl><A> for Adaptec RAID
Booting the Controller Kernel….Controller started
Controller #00: Sun STK RAID EXT at PCI Slot:02, Bus:04, Dev:00, Func:00
Waiting for Controller to Start….Controller started
Controller monitor V5.3-0 [16732], Controller kernel V5.3-0 [16732]
Battery Backup Unit Present
Controller POST operation successful
Controller Memory Size: 256 MB
Controller Serial Number: SOMESERIALNUM
Controller WWN: SOMEWWNNUMBER

One or more drives are either missing or not responding.
Please check if the drives are connected and powered on.
Press <Enter> to accept the current configuration.
Press <Ctrl><A> to enter Adaptec RAID Configuration Utility.
Press to Pause Configuration Messages.
(Default is not to accept if no valid key pressed in 30 second)
Timeout. BIOS took the default Configuration.

Location         Model                  Rev#  Speed  Size
----------------------------------------------------------
J3 : Dev 00      ATA SEAGATE ST32500N   3AZQ  3.0G   238.4 GB
-- : --          No device

From the output above, you can see that the second device ("Dev 01") is missing from the "Location" column and its "Model" is reported as "No device", even though the disk is physically present in the RAID.

Verifying Adaptec / Sun STK RAID status from the Solaris Operating System (provided the StorMan package is installed)

# /opt/StorMan/arcconf GETCONFIG 1
Controllers found: 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
Controller Model                        : Sun STK RAID
Physical Slot                           : 3
Temperature                             : 73 C/ 163 F
Installed memory                        : 256 MB
Defunct disk drive count                : 1
Logical devices/Failed/Degraded         : 1/0/1
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical device name                     : Array
RAID level                              : 1
Status of logical device                : Degraded
Size                                    : 238471 MB
Stripe-unit size                        : 256 KB
Protected by Hot-Spare                  : No
Bootable                                : Yes
Failed stripes                          : No
----------------------------------------------------------------------
Logical device segment information
----------------------------------------------------------------------
Segment 0                               : Present (0,0) SOMESERIALNUM
Segment 1                               : Inconsistent (0,1) SOMESERIALNUM
----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
Device #0
   State                                : Online
   Reported Channel,Device              : 0,0
   Reported Location                    : Connector 0, Device 0
   Model                                : SEAGATE ST32500N
   Size                                 : 238471 MB
Device #1
   State                                : Failed
   Reported Channel,Device              : 0,1
   Reported Location                    : Connector 0, Device 1
   Model                                : SEAGATE ST32500N
   Size                                 : 0 MB

Note: if the StorMan package is not already present, the utility can be downloaded and installed separately.
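The arcconf check can be scripted in the same way. Below is a minimal sketch, assuming the default /opt/StorMan/arcconf path and controller number 1 from the example above; the LD and AD keywords of GETCONFIG restrict the report to logical device and adapter information respectively.

#!/bin/sh
# check_adaptec_raid.sh - minimal sketch: warn when an Adaptec / Sun STK
# logical device is not Optimal. The path and controller number are
# assumptions taken from the example output above.

ARCCONF=/opt/StorMan/arcconf
CTRL=1

# Logical device status lines look like:
#   Status of logical device : Degraded
$ARCCONF GETCONFIG $CTRL LD | grep "Status of logical device" |
while read line
do
    case "$line" in
        *Optimal*) ;;                      # healthy, nothing to report
        *) echo "WARNING: $line" ;;
    esac
done

# A non-zero defunct drive count is a quick second indicator.
$ARCCONF GETCONFIG $CTRL AD | grep "Defunct disk drive count"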

Basic troubleshooting that an administrator can perform for a failed disk in a hardware RAID, before requesting an actual disk replacement:

1. Reseat the suspect drive to verify that it is properly seated.
2. Check any internal flex cables to the disk backplane for loose connections or damage.
3. Recheck the status in the LSI MPT BIOS utility (or the Adaptec RAID BIOS, as appropriate); a wrapper that runs the matching OS-level check is sketched below.
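On a mixed fleet, a small wrapper can pick the right tool automatically before you decide whether a replacement request is needed. This is only a sketch built from the two commands covered in this post; the paths and controller number are the defaults used here and may differ on your systems.

#!/bin/sh
# raid_status.sh - print RAID status using whichever management tool exists.
# Paths and controller number are the defaults used in this article.

if [ -x /usr/sbin/raidctl ]; then
    # LSI controller managed through Solaris raidctl
    /usr/sbin/raidctl -l
elif [ -x /opt/StorMan/arcconf ]; then
    # Adaptec / Sun STK controller managed through StorMan's arcconf
    /opt/StorMan/arcconf GETCONFIG 1 LD
else
    echo "No supported RAID management tool found" >&2
    exit 1
fi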
