NetApp Self-paced lab 1:

Basic NetApp Configuration and Administration

Exercise 1: Getting to know your Equipment Kit

Task 1: First of all, we make sure we can connect to the lab and confirm that it is, in fact, running Windows Server 2012.


Once this is done, we SSH into the cluster through PuTTY; shown below is my first attempt. It froze and then kicked me out, presumably because of my Internet connection, so I came back half an hour later and resumed.


Task 2: In that time I had gone around to a friend's place with better Internet, and I had no issues getting into the console. Once in, I checked the cluster health as shown below.

3; Cluster is healthy.PNG

The next thing to do was to change the time zone. First we checked the current setting with the command “timezone”, and then changed it with “timezone US/Pacific”, as shown below.

4; Cluster TimeZonw Changed.PNG
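The two timezone commands above can be sketched as a short cluster-shell session (the prompt shown is an assumed example; yours will match your cluster name):

```
cluster1::> timezone                 # show the current time zone
cluster1::> timezone US/Pacific      # change it to match the host
cluster1::> timezone                 # confirm the change took effect
```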

As you can see in the comparison below, the cluster is set to the same time zone as the host.

5; TimeZone Comapason.PNG

Task 3: Once this was done, we looked up the licenses in the system with the command “license show”, as shown below.

6; Lincece Show.PNG

Task 4: Once the previous stages were complete, we can move on to adding a DNS host record to our local domain. This simply means creating a new “A record”, as you can see below. Here I added the name Cluster1 and its IP address, and created a PTR record as well. The same was done for my CentOS machine, as shown below.


Then we check that the DNS settings are correct.

8; Each is Pinable.PNG
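If you prefer the command line over the DNS Manager GUI, the same A and PTR records can be added with `dnscmd` on the Windows Server. The zone name and IP address below are placeholders, not the lab's actual values:

```
:: hypothetical zone/IP values -- substitute the ones from your lab sheet
dnscmd /RecordAdd lab.local cluster1 A 192.168.0.101
dnscmd /RecordAdd 0.168.192.in-addr.arpa 101 PTR cluster1.lab.local

:: verify the new record resolves
ping cluster1.lab.local
```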

Exercise 2: Working with Data ONTAP Administrative Interfaces

Task 1: The first thing in this task was to check which commands we are able to run; this is done by simply typing “?”, as shown below.

9; Directory.PNG

Then a simple “q” is how to get out of it, as shown below. Once we have quit out, typing “cluster ?” shows all of the available subcommands and parameters.

10; Getting out of the Cluster.PNG

Then I typed “man” to see the manual index for the lab we are doing; the output can be paged through by pressing the Enter key. After this we can type “man cluster”, which brings up the manual page for the cluster commands.

11; Manual.PNG

Once this was done, we can type “network ?” and it will give us all of the network commands, like ping, interface, port, etc. We can then run “network interface show” and it will list our logical interfaces (LIFs), showing each node, the logical interface status, IP address, and port.
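The exploration above boils down to two commands at the cluster shell (prompt is an assumed example):

```
cluster1::> network ?                 # list the network subcommands (ping, interface, port, ...)
cluster1::> network interface show    # list all LIFs: node, admin/oper status, address, current port
```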

Task 2: This was to go into the NetApp OnCommand System Manager located on the desktop of the Server 2012 machine. We just click on it, and once in System Manager we need to add our Cluster1 to the manager, as shown below.

12; Adding stuff.PNG

This also sets the credentials so we can get into it; once this is done, all we need to do is double-click it and sign in using those credentials. This takes us to the page displayed below.

13; Signed in.PNG

Once on here we can see, as displayed, Clusters, Vservers, and Nodes. This is where we will be doing the lab objectives, as shown below. From here we can go into Nodes > Storage > Configuration, where we can click Aggregates and then Disks.

Once this is done, we can check the time and date by going into System Tools and then Date/Time. We can edit the time by clicking the Edit button at the top left of the console.

14; Cheaking out the date and Time.PNG

After this, we clicked on the node level, expanded Cluster, and had a wee look around. We then went deeper and expanded Storage and then Configuration. Once in here, we were able to click on Aggregates to have a look at Cluster1’s aggregate, called Aggr0, as shown below.

16; Arggreates.PNG

Once we had done this, we clicked on Disks, directly under Aggregates, so we could look at all the disks under the Cluster1 node. In the Configuration section below Storage, we then looked at Ports/Adapters. In this pane we can see the FC and FCoE adapters.

Task 3: This is to configure SNMP for the Data ONTAP cluster. For this, we go back to the System Manager homepage and refresh the page so it flicks over to “no SNMP response”. Once this shows up, we log back into Cluster1 and navigate through Cluster > Cluster1 > Configuration > System Tools > SNMP. Once in here, we click the Edit button in the top left-hand corner and set the community name to “public”, as shown below.

15; Public.PNG
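The same community string can be set from the cluster shell; the commands below are a sketch, and the exact command path may vary slightly between ONTAP releases:

```
cluster1::> system snmp community add -type ro -community-name public
cluster1::> system snmp show          # confirm the community string is now listed
```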


Exercise 3: Hardware basics and Managing Physical Components

Task 1: Here we just need to stay in the same console as above, travel down to the Nodes pane as shown below, and then click on Aggregates.

14; b

Once into the aggregates, we look toward the bottom of the page for the Volumes tab. After this, we go into the Disk Layout tab directly beside it, where I viewed all the information as shown below.

16; Arggreates.PNG

After this, we go back to the Nodes pane and into Disks, just under Aggregates. Once we select a disk, we can see its properties, including whether it has zeroed, its serial number, etc.

After this, we can see Ports/Adapters under Configuration in the Nodes panel. Once in here we look at the ports and take note of the physical ports, checking both tabs: Ethernet Ports and FC/FCoE Adapters.

Task 2: We will use PuTTY to SSH into the cluster, log in, and check the health of the cluster using “cluster show”, as seen below. After this, we use the command “cluster identity show” to see the identity and serial number of the cluster, and finally “storage ?” to see what commands we can use.

17; Storage.PNG

After this, we run “disk show” to see all of the disks inside the nodes in this lab; pressing Enter scrolls down further to see everything. While in here, pressing “q” stops the output, and entering “..” takes us up a directory level, as shown below. Typing “network interface show” displays all the information about the cluster's LIFs: their nodes and what ports they are using.

18; Interface show.PNG
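The keystrokes described above can be sketched as a cluster-shell session (prompt is an assumed example; Enter pages the output, q quits it):

```
cluster1::> storage disk show          # one line per disk; press Enter to page, q to quit
cluster1::> ..                         # move back up one level in the command directory
cluster1::> network interface show     # LIFs with their home node and current port
```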

Exercise 4: Creating and managing Aggregates

Task 1: Back in the NetApp interface, we sign in again to our cluster and go into Aggregates under Storage in the Cluster pane. Once in here, we follow through the wizard, entering Aggr1 under the aggregate name. Then, selecting the disks on the next page, we select Cluster1-01, FCAL, and reduce the disk count from 24 down to 16. Save and close that, then move on to the next step, which is to create the aggregate as shown below.


Task 2: We go back into PuTTY after this and SSH into the cluster, where we type “storage aggregate create ?”. After this, I typed the following to create an aggregate: “storage aggregate create -aggregate aggr2 -nodes cluster1-02 -disktype FCAL -diskcount 16 -raidtype raid_dp”. It is the command equivalent of what we have just done through the GUI. As you can see below, I had issues, but it still created the aggregate well enough.

20; Create Arragate Issue;.PNG
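Cleaned up, the CLI equivalent of the GUI aggregate creation looks like this (node name, disk type, and count are the lab's values; the prompt is an assumed example):

```
cluster1::> storage aggregate create -aggregate aggr2 -nodes cluster1-02 -disktype FCAL -diskcount 16 -raidtype raid_dp
cluster1::> storage aggregate show    # verify aggr2 appears and is online
```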

Task 3: Once we have done this, we are going to create a test aggregate with the command “storage aggregate create -aggregate aggr_test -diskcount 5”. Once created, we display the new aggregate using the command “storage aggregate show”, and as you can see below, this is what we have just created.

21; Aggreage creation test.PNG

Then we take it offline with the command “storage aggregate offline -aggregate aggr_test” and delete the aggregate afterwards, as shown below. We take it offline first to avoid errors; there might still be something connected to it if it is still online when we delete it.

21; b Deletion of the aggraite.PNG
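The offline-then-delete sequence described above, as a cluster-shell sketch:

```
cluster1::> storage aggregate offline -aggregate aggr_test   # take it offline first
cluster1::> storage aggregate delete -aggregate aggr_test    # then remove it
cluster1::> storage aggregate show                           # aggr_test should no longer be listed
```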


From here we need to zero the disks, as shown below. While this is running, we go back into the NetApp interface to watch the process complete.

22; Zerospares.PNG
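The zeroing step in the screenshot corresponds roughly to the following command; spare disks must be zeroed before they can be reused in a new aggregate (second command may differ by ONTAP release):

```
cluster1::> storage disk zerospares              # zero all non-zeroed spare disks
cluster1::> storage aggregate show-spare-disks   # watch the spares flip to the zeroed state
```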

Exercise 5: Setting up a Storage Virtual Machine

Tasks 1, 2, 3 & 4: While we're still in the NetApp interface, we create a Vserver called SVML with the credentials provided by the lab. In it we create a LIF and enable the CIFS and iSCSI protocols, with the details given by the lab.

23; Creation of a thing.PNG

Below we can see what we have created and the different protocols we have made.

24; Creation of the Svml.PNG

Here we can see the network interfaces; this can also be seen when you PuTTY into the cluster and type “network interface show”.

25; Svml Interfaces.PNG

TASK 5: Creating the DNS host record for the SVML is just a standard addition of an “A record” through the forward lookup zones. We create the A record with the information given to us by the lab.


Below you can see that we can ping this new addition. We can also run “ipconfig /all”, which shows this addition too.

28; pinging the new addtions.PNG


Exercise 6: Creating and Managing Volumes in an SVML

Task 1: Editing the SVML's resource allocation. Here we delegate our aggregates Aggr1 and Aggr2 for the volumes of the Vserver we have just made.

Task 2: Creation of volumes. Here we create four: Vol1, Project X, Project Y, and Teams, as seen below.


Task 3: Here we are going to rearrange the volumes and put Project X and Project Y into Teams, as shown below. We first need to unmount the volumes and then mount them in the correct place.

31; Mounting.PNG
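The remount can also be done from the cluster shell; the volume name and junction paths below are assumed examples of the layout described above, not the lab's exact values:

```
cluster1::> volume unmount -vserver svml -volume projx                              # detach from old path
cluster1::> volume mount -vserver svml -volume projx -junction-path /teams/projx    # reattach under /teams
```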

Exercise 7: Sharing Volumes With Windows Clients

Task 1: While still in the NetApp console, we navigate to the SVML we created and create a share. As you can see below, we created the Vol1 share.

32; Shears.PNG

Task 2: Here we change the permissions on the Vol1 share, as you can see below. This has been changed from full control to read-only permissions, meaning not just any Tom, Dick or Harry can do whatever they want in it.

33; Learn administrator.PNG
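The equivalent permission change from the cluster shell might look like this; the Vserver, share name, and group value are assumed examples, and the exact command path may vary by ONTAP release:

```
cluster1::> vserver cifs share access-control modify -vserver svml -share vol1 -user-or-group Everyone -permission Read
cluster1::> vserver cifs share access-control show -vserver svml -share vol1    # confirm Read, not Full Control
```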

Task 3: Here we map a network drive to the Vol1 share we created. This is done by right-clicking on Computer and selecting Map Network Drive.

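The same mapping can be done from a Windows command prompt with `net use`; the UNC path below is an assumed example based on the share name, not the lab's exact path:

```
:: map drive Z: to the Vol1 share on the SVM
net use Z: \\svml\vol1

:: verify the drive is reachable
dir Z:\
```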

Task 4 & Task 5: Creating a CIFS share. Back in the NetApp console, we create a new share for Project Y, which will have read-only access for the contractors. The next thing to do is map a drive to the read-only shared volume and then test the permissions by trying to create a new folder. As you can see below, we don't have permission to do this and it won't let us.

34; Access denied.PNG


Exercise 8: Exporting Volumes to Unix and Linux Clients

Task 1: Here we check the protocols associated with Cluster1. NFS, CIFS and iSCSI are the protocols that should be set up from the previous exercise.

TASK 2: Go into Configuration in the Vserver tab, then into Network Interfaces, and make sure the CIFS LIF is there and that it has the NFS data protocol enabled, as seen below.

25; Svml Interfaces.PNG

From here, we go into PuTTY and ping the LIF we have just made.

35; Ping Linux.PNG

TASK 3: Here we go into the SVML in the NetApp interface and make sure we have all the correct UNIX users, like pcuser, as seen below.

36; Unix users.PNG

After this, we navigate to Name Mapping and add new name-mapping rules. We create two rules, as seen below: one for UNIX to Windows and another for Windows to UNIX.

37; Windows to unix, and Unix to windows.PNG

TASK 4: For this, we add a rule to the export policy, creating one that allows CIFS and NFS for the CentOS machine, as seen below.

38; Mounting shit.PNG

After this, we SSH into the CentOS machine and mount the share. Then we are able to access it.

39; Thins working.PNG
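On the CentOS side, the mount looks roughly like this; the SVM hostname and junction path are placeholders, not the lab's exact values:

```
# create a mount point and mount the NFS export from the SVM (run as root)
mkdir -p /mnt/vol1
mount -t nfs svml:/vol1 /mnt/vol1

# the share's contents should now be visible
ls /mnt/vol1
```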

TASK 5: Create an export policy. This one should be NFS-only, but I read it wrong and added CIFS as well. Since this lab is done and dusted I can't go back, but all it would take is editing the rule and unchecking the box.


TASK 6: Here we create a volume, then edit the volume to allow access. After this, we go into the namespace and change the export policy to the one we have just made.

42; SecurData.PNG

And now we mount the volume we have just made from the cluster.

43; Mtesting the mount.PNG

This is the end of Lab One!

