Basic introduction and steps to set up NetApp Cluster-Mode
Introduction to Clustered Data ONTAP:
Data ONTAP 8 merges the capabilities of Data ONTAP 7G and Data ONTAP GX into a single code base with two distinct operating modes: 7-Mode, which delivers capabilities equivalent to the Data ONTAP 7.3.x releases, and Cluster-Mode, which supports multicontroller configurations with a global namespace and clustered file system. As a result, Data ONTAP 8 allows you to scale up or scale out storage capacity and performance in whatever way makes the most sense for your business.
With Cluster-Mode the basic building blocks are the standard FAS or V-Series HA pairs with which you are already familiar (active-active configuration in which each controller is responsible for half the disks under normal operation and takes over the other controller’s workload in the event of a failure). Each controller in an HA pair is referred to as a cluster “node”; multiple HA pairs are joined together in a cluster using a dedicated 10 Gigabit Ethernet (10GbE) cluster interconnect. This interconnect is redundant for reliability purposes and is used for both cluster communication and data movement.
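On a running cluster, the dedicated cluster interconnect ports described above can be inspected from the clustershell. A minimal check, assuming default port roles (the exact port names, such as e1a/e2a, vary by platform):

```shell
# List physical ports assigned the cluster role
network port show -role cluster

# List the cluster logical interfaces (LIFs) riding on those ports
network interface show -role cluster
```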
What does scale-out storage mean to you?
Scale-out storage is the most powerful and flexible way to respond to the inevitable data growth and data management challenges in today’s environments. Consider that all storage controllers have physical limits to their expandability—for example, number of CPUs, memory slots, and space for disk shelves—that dictate the maximum capacity and performance of which the controller is capable.
If more storage or performance capacity is needed, you might be able to upgrade or add CPUs and memory or install additional disk shelves, but ultimately the controller will be completely populated, with no further expansion possible. At this stage, the only option is to acquire one or more additional controllers.
Historically this has been achieved by simple “scale-up,” with two options: either replace the old controller with a complete technology refresh, or run the new controller side by side with the original. Both of these options have significant shortcomings and disadvantages.
With this basic introduction to NetApp Clustered ONTAP behind us, I would like to walk through the basic steps of setting up a Clustered ONTAP system.
Step: 1 Hardware setup
a. Connect controllers to disk shelves (FC connectivity)
b. Connect the NVRAM high-availability interconnect cable between the partners (10GbE or InfiniBand)
c. Connect controllers to the network so that each node has exactly two connections to the dedicated cluster network and at least one data connection. Also cable the well-known RLM port, used for troubleshooting purposes when needed.
Note: Cluster connections must be on a network dedicated to cluster traffic, whereas data and management connections are on a distinct network.
Step: 2 Power on
a. Power up network switches
b. Power up disk shelves
c. Power up storage controllers
Step: 3 Configure the firmware and flash a boot image
a. During the boot process, press any key to enter the firmware prompt
b. Two compact flash images are available: flash0a and flash0b. To flash (write) a new image to the primary flash, the management interface must first be configured.
Note: The auto option of ifconfig requires a DHCP or BOOTP server on the management network. If none is available, one must run ifconfig addr= mask= gw= manually.
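At the firmware prompt, a manual configuration might look like the sketch below; the interface name and all addresses are hypothetical examples, and the exact flag syntax can vary between firmware versions:

```shell
# Hypothetical management-network values -- substitute your own
ifconfig e0M addr=192.168.0.50 mask=255.255.255.0 gw=192.168.0.1
ping 192.168.0.1
```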
c. Once the network is configured, ping to verify connectivity, then flash the image: run flash tftp://<tftp_server>/<path_to_image> flash0a
Step:4 Installing ONTAP 8.1
a. Run option 7 to install the new software
b. Enter the URL of the ONTAP 8.1 .tgz image
c. Allow the system to boot when complete
Note: One can type boot_primary if the node stops at the firmware prompt
Step:5 Initialize a Node
a. Run option 4
b. Initialization zeroes the disks; the system uses three of them to create the first aggregate, with the vol0 root volume on it
c. This must be run on both nodes of each HA pair
Step:6 Cluster setup wizard
a. From the boot menu, boot normally and log in as “admin” with no password
b. The first node creates the cluster
c. The following information is required for the setup:
-Cluster network ports and MTU size
-Cluster base license key
-Cluster management port, IP address, netmask, and default gateway
-Node management port, IP address, netmask, and default gateway
-DNS domain name
-IP address of DNS server
d. Subsequent nodes join the cluster
Step:7 Normal boot sequence
a. Firmware loads the kernel from CF
b. Kernel mounts “/” root image from rootfs.img on CF
c. Init is loaded and startup scripts run
d. NVRAM kernel modules get loaded
e. Tmgwd is started
f. D-blade, N-blade, and other components are loaded
g. vol0 root volume is mounted from local D-blade
h. CLI and element manager are ready for use
Step:8 Create a cluster
cluster create -license -clustername -mgmt-port -mgmt-ip -mgmt-netmask -mgmt-gateway -ipaddr1 -ipaddr2 -netmask -mtu 9000
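Filled in with hypothetical values (the cluster name, management address on e0M, and link-local cluster-interconnect addresses below are all examples, and the license key is a placeholder), the command might look like:

```shell
cluster create -license <cluster_base_key> -clustername cl01 \
  -mgmt-port e0M -mgmt-ip 192.168.0.100 -mgmt-netmask 255.255.255.0 \
  -mgmt-gateway 192.168.0.1 \
  -ipaddr1 169.254.10.1 -ipaddr2 169.254.10.2 \
  -netmask 255.255.0.0 -mtu 9000
```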
Step:9 Join a cluster
Run this command on the node that is joining the cluster
cluster join -clusteripaddr -ipaddr1 -ipaddr2 -netmask -mtu 9000
Note: Licenses can be added in the cluster shell: system license add
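A hypothetical join from an additional node might look like the following; all addresses are example values, and -clusteripaddr is the cluster-interconnect address of a node already in the cluster:

```shell
# Run on the joining node; addresses are hypothetical examples
cluster join -clusteripaddr 169.254.10.1 \
  -ipaddr1 169.254.10.3 -ipaddr2 169.254.10.4 \
  -netmask 255.255.0.0 -mtu 9000

# Licenses can then be added from the cluster shell
system license add <license_key>
```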
Step:10 Set date and time
a. NTP is disabled by default, so the date, time, and time zone must be set manually
system date modify
b. Verify and monitor
system services ntp config show
system services ntp server show
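To move from manual timekeeping to NTP, the server can be defined and the service enabled; the sketch below assumes a clustered ONTAP 8.x shell, the server name is only an example, and exact option names may differ slightly between releases:

```shell
# Point the cluster at an NTP server (example server name)
system services ntp server create -server pool.ntp.org

# Enable NTP, then verify the configuration
system services ntp config modify -enabled true
system services ntp config show
system services ntp server show
```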