Storage Efficiency Feature NetApp Cluster Mode

What is Storage Efficiency


Storage efficiency features ensure that storage capacity is not wasted and that all disk capacity is used efficiently without impacting performance. This lowers operating cost.

The storage administrator manages this by enabling and monitoring the various storage efficiency features.

Types Of Storage Efficiency In NetApp Cluster Mode


There are four storage efficiency features available in NetApp cluster mode.


  • De-Duplication
  • Thin Provisioning
  • Compression
  • FlexClone

Storage Efficiency Feature De-Duplication


De-duplication reduces the amount of physical storage required for a volume by discarding duplicate blocks and updating the pointer to refer to a single block.

A common example where deduplication helps is mail attachments: the same attachment is stored by many users, which consumes a lot of space.

During the normal write process each data block is assigned a unique signature. When the deduplication process starts, Data ONTAP compares these signatures, removes the data blocks that share the same signature, and keeps only one data block for reference.

Data ONTAP then updates the pointers to that single data block. In this way many data blocks are reclaimed, and the efficiency of storing the same data increases.
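As an illustration, the commands below show one way deduplication is typically enabled and run on a volume from the clustershell. This is a minimal sketch, assuming an SVM named vs1 and a volume named vol1 (both placeholder names); adjust them for your environment.

volume efficiency on -vserver vs1 -volume vol1
volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true
volume efficiency show -vserver vs1 -volume vol1

The first command enables deduplication on the volume, the second scans the existing data, and the third shows the progress and savings.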

Netapp De-Duplication Storage Efficiency


Storage Efficiency Feature Thin-Provisioning


A thin-provisioned volume or LUN is one for which storage is not reserved in advance. Instead, storage is allocated dynamically as it is needed, which means actual space is allocated to the volume from its containing aggregate only when users start writing data to it.

Free space is released back to the storage system when data in the volume or LUN is deleted.

Thin provisioning supports over-provisioning. For example, on a 5 TB aggregate you can allocate 7 TB of volumes, since not every user consumes all of the storage assigned to them.
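As a rough illustration, a thin-provisioned volume is created by setting its space guarantee to none. This is a minimal sketch assuming an SVM vs1, an aggregate aggr1 and a volume vol_thin, all placeholder names.

volume create -vserver vs1 -volume vol_thin -aggregate aggr1 -size 2TB -space-guarantee none
volume show -vserver vs1 -volume vol_thin -fields space-guarantee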

Storage Efficiency Feature Compression


Compression reduces the amount of physical storage required for a volume by combining data blocks into compression groups; each compression group is then stored as a single block.

There are two types of compression:


Inline Compression – Compression happens in memory before the data is written to disk.
Post Compression – Compression happens after the data is written to the disk.
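The commands below sketch how compression is typically enabled on an existing volume. This is a hedged example; vs1 and vol1 are placeholder names and the available options can vary between ONTAP releases.

volume efficiency modify -vserver vs1 -volume vol1 -compression true -inline-compression true
volume efficiency show -vserver vs1 -volume vol1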

Storage Efficiency Feature FlexClone


FlexClone technology references Snapshot metadata to create writable, point-in-time copies of a volume. FlexClone copies share data blocks with their parent, so they do not consume any extra storage initially.

You can split a FlexClone to separate it from its parent volume. Once the split operation is done, the volume starts consuming its own space from the aggregate. Watch this video to know more about FlexClone.
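As an illustration, the commands below sketch creating a FlexClone from a parent volume and later splitting it. This is a minimal example assuming SVM vs1, parent volume vol1 and a clone named vol1_clone, all placeholders.

volume clone create -vserver vs1 -flexclone vol1_clone -parent-volume vol1
volume clone split start -vserver vs1 -flexclone vol1_clone
volume clone split show -vserver vs1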

Below is a YouTube video which explains storage efficiency in NetApp cluster mode.





Hardware Architecture NetApp Cluster Mode


In this post we will discuss the hardware architecture of NetApp cluster mode, which includes the components of NetApp cluster mode, how the NetApp hardware is connected, the types of cluster mode setups, and finally how requests flow in NetApp cluster mode.

If you are interested in SAN architecture, click here to read more about it.

hardware architecture of netapp cluster mode


Components Of NetApp Cluster Mode Architecture


There are three main components of the NetApp cluster mode architecture, which are described below.

Controllers Or Nodes NetApp Cluster Mode


The controller or node is the computing part of NetApp which processes all incoming and outgoing requests for the NetApp cluster. A NetApp node has slots with I/O modules that provide Ethernet, FC, FCoE and SAS ports. Ethernet ports connect to network switches for LAN connectivity. FC ports connect to SAN switches which support the FC protocol. FCoE ports can be used as both Ethernet and FC ports.

Finally, SAS ports connect the controller to the disk shelves. The node also contains other hardware such as NVRAM, CPU, PSUs and fans, each of which has a different purpose.

Disk Shelves NetApp Cluster Mode


Disk shelves in NetApp cluster mode consist of physical disks. NetApp supports various disk types and shelf models. Click the link for more details.

Disk shelves and controllers are connected to each other via SAS ports. In an HA pair the disk shelves are connected to both controllers.

Inter Cluster Switches NetApp Cluster Mode


Inter cluster switches, or cluster switches, are network switches which provide the connection between controllers for data flow. Each node in the cluster has a minimum of two cluster ports which connect to the cluster switches.

What Is Cluster Mode NetApp


NetApp cluster mode is basically a combination of multiple HA-pair nodes which are interconnected via inter cluster switches for data flow.

A cluster mode setup can be of three types. The first is a single node cluster, the second is a two node switchless cluster, and the last type is a two node NetApp cluster with cluster switches.

In NetApp cluster mode, a cluster with switches can have a maximum of 24 nodes, which means 12 HA pairs.
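As a small, hedged illustration, the commands below show how you would typically list the nodes in a cluster and the cluster LIFs that ride on the cluster ports. The output and available options vary by ONTAP release.

cluster show
network interface show -vserver Cluster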

Single Node Netapp Cluster


A single node NetApp cluster has only one node. A single node cluster does not provide any redundancy and always carries the risk of outage and data loss.

Single node netapp cluster

Two Node Switchless NetApp Cluster


A two node switchless cluster has two nodes in an HA pair. These nodes are connected to each other directly rather than via inter cluster switches. The direct connection between the two nodes for data flow is called the cluster interconnect.

Two node switch less netapp cluster

Two Node NetApp Cluster With Cluster Switch


In a two node NetApp cluster with switches, the nodes are connected to each other via inter cluster switches for data flow.


Two node netapp cluster with cluster switches


Four Node NetApp Cluster Mode With Cluster Switch


A four node NetApp cluster consists of four nodes, meaning two HA pairs. All nodes are connected to two cluster switches. If you want to add another HA pair to the cluster, first make the physical connections from the new nodes to both cluster switches.

Four node netapp cluster with cluster switches



How Data Flows In NetApp Cluster Mode


First, a read or write request arrives at any one of the NetApp nodes. Data ONTAP checks whether the data resides on the local node or on a remote node.

If the data is found on the local node, the request is processed and the response is sent back to the client through the same path. If the data is not present on the local node and exists on a remote node, the request is sent to the remote node via the cluster network.

On the remote node the data is processed, and the response is sent back to the client through the same path.
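To see whether a volume will be served locally or remotely, you can compare the node that owns the volume's aggregate with the node currently hosting the data LIF. The commands below are a hedged sketch; vs1 and vol1 are placeholder names.

volume show -vserver vs1 -volume vol1 -fields node
network interface show -vserver vs1 -fields home-node, curr-node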

Hope you gained some knowledge of the hardware architecture of NetApp cluster mode. For the full list of blogs on NetApp cluster mode, click here. To watch this blog as a video, visit our YouTube channel.


Port Channel Configuration Cisco MDS SAN Switch

In this blog we will discuss what a port channel is in a Cisco MDS switch. Along with that, we will also talk about the F port channel, the types of port channel, and the modes of port channel. Finally we will see the commands for creating an F port channel between a Cisco MDS core switch and an NPV switch.

Port Channel Configuration Cisco MDS SAN Switch

What Is Port Channel In Cisco MDS SAN Switch


Port channel is basically Cisco SAN switch terminology. A port channel is a logical aggregation of individual FC ports or FC interfaces which increases total bandwidth and provides load balancing and link redundancy.

A port channel can include interfaces from different modules in the same SAN switch, but it cannot include interfaces from different SAN switches. This is clearer in the diagram below.

Port Selection In Cisco MDS Port Channel

The above diagram has two switches, switch1 and switch2. There are three port channels between the switches: PortChannelA, PortChannelB and PortChannelC. PortChannelA aggregates ports from the same module on both switches, while PortChannelB has ports from different modules.

In the case of PortChannelC there are three ports from each switch. On switch1 two ports are from the same module and the third port is from a different module. On switch2 all three ports are from the same module.

The conclusion is that you can select various port combinations within the same switch, but ports cannot be selected from different switches.

What are the Types Of Port Channel


The type of port channel is defined by the type of connectivity between devices. In general there are two types of port channel: the E port channel and the F port channel.

An E port channel is a logical combination of multiple E ports which connects two SAN switches for ISL connectivity. If we enable trunking, the port channel becomes a TE port channel and the ISL link becomes an EISL.

An F port channel is a logical combination of multiple F ports which connects a switch (F ports) to an NPV switch (NP ports) or a Fabric Interconnect. If trunking is enabled, the F port channel becomes a TF port channel.

Modes Of Port Channel In Cisco MDS Switch


There are two port channel modes: one is ON and the other is ACTIVE. The two modes have different purposes. Below are some basic differences between them.

  • ON is the default mode; ACTIVE must be explicitly configured.
  • With ON you need to disable the ports while adding or modifying port channel members; with ACTIVE the port channel recovers automatically when members are added or modified.
  • ON mode is not recommended; ACTIVE is the recommended mode.
  • ON supports only E port channels; ACTIVE supports both E and F port channels.
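To check which mode a port channel is using and which member ports it contains, the show commands below are commonly used on MDS switches. This is a hedged sketch and the exact output depends on the NX-OS release.

Cswitch# show port-channel summary
Cswitch# show port-channel database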

How To Configure F Port Channel In Cisco MDS Switch


In this section we will discuss how to create an F port channel between a Cisco MDS switch and an NPV switch. Before we start, we have to check a few things which are very important for port channel configuration.

First we need to make sure that a physical connection is present between the switches. Some of the configuration of the individual ports and the port channel must be identical: make sure settings like speed, mode, rate mode, port VSAN, trunking mode and allowed VSANs are the same for the individual ports and the port channel. If these differ, we may get errors while creating the port channel.

Steps and Commands To Configure F Port Channel


Step 1. Enable F port trunking and channel protocol on the MDS core switch.


Cswitch(config)# feature fport-channel-trunk

Step 2 Enable NPIV on the MDS core switch:


Cswitch(config)# feature npiv

Step 3 Create the PortChannel and Add Ports on the MDS core switch


This command creates port channel 1.


Cswitch(config)# interface port-channel 1


This command sets the port channel type to F port channel.


Cswitch(config-if)# switchport mode F


The command below sets the channel mode to active. An F port channel only supports the ACTIVE port channel mode, so it is important to set the channel mode to ACTIVE.

Cswitch(config-if)# channel mode active

The command below disables trunking on the port channel.

Cswitch(config-if)# switchport trunk mode off

The command below sets the rate mode on the port channel.

Cswitch(config-if)# switchport rate-mode shared
Cswitch(config-if)# exit

So far we have created F port channel 1 on the core switch. In the steps below we will add interfaces to port channel 1. Note that the configuration of port channel 1 and of its member interfaces must be the same; a hedged way to check this is shown below.
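As a quick sanity check before adding members, you can review the parameters that must match between a port channel and its member interfaces. Treat the exact syntax below as an assumption to verify on your NX-OS release.

Cswitch# show port-channel compatibility-parameters
Cswitch# show interface port-channel 1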

Step 4. Configure the PortChannel member interfaces on the core switch


The two commands below select ports fc2/1, fc2/2 and fc2/3 as port channel members and then disable them.
cswitch(config)# interface fc2/1-3
cswitch(config-if)# shut

The four commands below set the port mode to F, disable trunking, set the speed to 4000, and set the rate mode to shared.

cswitch(config-if)# switchport mode F
cswitch(config-if)# switchport trunk mode off
cswitch(config-if)# switchport speed 4000
cswitch(config-if)# switchport rate-mode shared

The command below adds the interfaces to the port channel; in this case ports fc2/1, fc2/2 and fc2/3 are added to port channel 1. The next command (no shut) then brings the ports back up.

cswitch(config-if)# channel-group 1
cswitch(config-if)# no shut

So far we have completed the port channel configuration on the core switch. Now we have to repeat similar steps and commands to configure the F port channel on the NPV switch.

Step 5 Create the PortChannel on the NPV switch


eswitch(config)# interface port-channel 1
eswitch(config-if)# switchport mode NP
eswitch(config-if)# switchport rate-mode shared
eswitch(config-if)# exit

Now we have to configure the individual ports and add them to port channel 1 on the NPV switch.

Step 6 Configure the PortChannel member interfaces on the NPV switch:


eswitch(config)# interface fc2/1-3
eswitch(config-if)# shut
eswitch(config-if)# switchport mode NP
eswitch(config-if)# switchport speed 4000
eswitch(config-if)# switchport rate-mode shared
eswitch(config-if)# switchport trunk mode off
eswitch(config-if)# channel-group 1
eswitch(config-if)# no shut

The final step is to disable and re-enable all member interfaces on both the core switch and the NPV switch.

Step 7 Set the administrative state of all the PortChannel member interfaces in both NPIV core switch and the NPV switch to ON:


Disable and re-enable the interfaces on the core switch.

cswitch(config)# interface fc1/1-3
cswitch(config-if)# shut
cswitch(config-if)# no shut

Disable and re-enable the interfaces on the NPV switch.

eswitch(config)# interface fc2/1-3
eswitch(config-if)# shut
eswitch(config-if)# no shut
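After the interfaces come back up, it is worth verifying that the F port channel is up and that the NPV switch is logging in through the core switch. The show commands below are a hedged sketch; run each on the switch indicated by the prompt, and expect the output layout to differ between NX-OS versions.

Cswitch# show port-channel summary
Cswitch# show npiv status
eswitch# show npv status
eswitch# show npv flogi-table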

This completes the port channel configuration between a Cisco MDS SAN switch and an NPV switch. Hope you have liked it. Watch the YouTube video below for more details, and feel free to comment on this topic.

Volume Rehost In Cluster Mode NetApp

How To Re-host Volume From One SVM To Another SVM Cluster Mode NetApp


In this blog we will see what volume rehost is and how we can move or rehost a volume from one vserver to another vserver in NetApp cluster mode. In a later section we will also move a CIFS volume from one SVM to another SVM via the command line.

Volume rehost in netapp cluster mode

 

What is Volume Rehost in NetApp Cluster Mode


Volume rehost enables you to reassign NAS or SAN volumes from one storage virtual machine (SVM, formerly known as Vserver) to another SVM without requiring a SnapMirror copy.

Volume rehost is a disruptive operation for data access. Hence you should get a downtime window from the share owner and plan the rehost within that window.

Volumes Eligible For Rehost In NetApp Cluster Mode


The volume rehost procedure depends on the protocol type and the volume type, and the steps differ accordingly. Below is a brief summary of volume rehost for the different scenarios.

Rehosting CIFS volumes


You can rehost volumes that have CIFS shares. After rehosting a CIFS volume you must mount the volume and create a CIFS share in the destination SVM. Volume rehost carries over the NTFS ACLs, so there is no need to worry about permission mismatches. To learn how to create a CIFS share, click the link.

Rehosting NFS volumes


You can rehost volumes that serve data over the NFS protocol. After rehosting an NFS volume, you must mount the volume and create the export policy and rules in the destination SVM. Click the link for more details on creating export policies and rules.

Rehosting SAN volumes


You can rehost volumes that have mapped LUNs. After you re-create the initiator groups (igroups) in the destination SVM, the volume rehost operation can automatically remap the LUNs on that SVM.

Rehosting volumes in a SnapMirror relationship


You can rehost volumes that are in a SnapMirror relationship. Before rehosting you should delete the SnapMirror relationship. After the volume rehost you must create a peer relationship between the source and destination SVMs, then create a new SnapMirror relationship and schedule it for sync.
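As a hedged sketch, the commands below outline that flow for a mirrored volume. The SVM names, volume names and destination path (vs2, vs2_dr, vol1, vol1_dr) are placeholders, and the policy and schedule options depend on your setup.

snapmirror delete -destination-path vs2_dr:vol1_dr
vserver peer create -vserver vs2 -peer-vserver vs2_dr -applications snapmirror
snapmirror create -source-path vs2:vol1 -destination-path vs2_dr:vol1_dr -type XDP -schedule daily
snapmirror resync -destination-path vs2_dr:vol1_dr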


Pre-Checks For Volume Rehost In NetApp Cluster Mode


Before rehosting, make sure of the following (see the command sketch after this list):

  • The volume must be online; an offline volume cannot be rehosted.
  • Volume management operations, such as volume move or LUN move, must not be running.
  • Data access to the volume must be stopped.
  • The destination SVM configuration must match the source SVM.
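The commands below are one hedged way to verify those pre-checks from the clustershell; vs1 and vol1 are placeholder names.

volume show -vserver vs1 -volume vol1 -fields state
volume move show -vserver vs1 -volume vol1
lun move show -vserver vs1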

Post Steps Of Volume Rehost In NetApp Cluster Mode


After the rehost operation the volume loses some configuration, such as antivirus policies, volume efficiency policy, QoS, Snapshot policies, quota rules, export policies and rules, and user and group IDs. If any of these features were configured on the rehosted volume, you must re-create them in the destination SVM as well.

You also need to mount the volume and then create a new share. The good thing is that the NTFS permissions will be the same as they were in the source SVM. Lastly, you have to use the destination SVM's IP to access the shares.

Steps And Commands For Volume Rehost In NetApp Cluster Mode

  1. Record information about the CIFS shares to avoid losing it in case the volume rehost operation fails.
  2. Unmount the volume in the source SVM: volume unmount -vserver vs1 -volume vol1
  3. Switch to the advanced privilege level: set -privilege advanced
  4. Rehost the volume on the destination SVM: volume rehost -vserver vs1 -volume vol1 -destination-vserver vs2
  5. Mount the volume under the appropriate junction path in the destination SVM: volume mount -vserver vs2 -volume vol1 -junction-path /vol1
  6. Create a CIFS share for the rehosted volume: vserver cifs share create -share-name share1 -vserver vs2 -path /vol1
  7. Point the DNS host entry to the IP of the new SVM. (A hedged verification sketch follows this list.)
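After step 7, a quick verification that the volume now lives on the destination SVM and the share exists might look like the sketch below, under the same vs2, vol1 and share1 assumptions as the steps above.

set -privilege admin
volume show -vserver vs2 -volume vol1 -fields junction-path, state
vserver cifs share show -vserver vs2 -share-name share1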

Check out the YouTube video on volume rehost in cluster mode NetApp. The video has a live demo of volume rehost.





Detailed Explanation Of SAN Storage Architecture


This blog explains what SAN storage is and how SAN storage works, what the SAN storage components are, and how they connect to each other.

Details Explanation Of SAN Storage Architecture


What is SAN Storage ?


SAN stands for Storage Area Network. A SAN is a specialized high-speed network of storage devices and SAN switches connected to computer systems, called hosts or servers, via fiber-optic cables. The host uses the storage from the storage arrays as if it were local to the host. Because the data communication happens over fiber optics, data speeds are high in comparison to NAS storage.

Components Of SAN Storage 


SAN components can be divided into three layers: the host, fabric and storage layers. These three layers connect to each other to form a Storage Area Network, or SAN. Each layer has its own function, which we will discuss in the coming sections.

Below is a very simple architecture diagram of SAN storage which shows all the layers of a Storage Area Network.

                           basic architecture of SAN storage

Host Layer Of SAN Storage


The host layer of SAN storage consists of servers, or hosts. A host is basically hardware with its own resources like CPU, memory and disk storage, and an operating system running on it. Along with these, it also has dual HBA cards which connect to the fabric layer of the SAN network.

What is Fabric In SAN Storage


The fabric layer, or fabric, of SAN storage consists of one or more SAN switches. The SAN switch provides the connection points in the SAN architecture: all hosts connect to the SAN switch, and the storage arrays connect to the same SAN switch. Thus it acts as a mediator between host and storage.

A typical SAN network has two fabrics for redundancy. If one fabric goes down, the other fabric takes care of the data flow in the storage area network.

Storage Layer in Storage Area Network


The storage layer of a Storage Area Network contains the storage arrays, where data resides on physical disks. A storage array consists of physical disks which are grouped logically using a technology called RAID for data protection. RAID helps in data recovery whenever a disk failure occurs. There are various RAID types, and a specific RAID type is selected for creating a RAID group depending on the requirement. Common RAID types are RAID 5 and RAID 6.

Most storage arrays have dual storage processors (SPs), which are the front end of the storage array. Each SP has a front-end I/O module with FC ports. These FC ports connect to the SAN switches via FC cables.

Below is an architecture diagram of a typical SAN storage setup which shows all components of a Storage Area Network with redundant connectivity.

Image of Storage Area Network SAN Storage


How SAN Storage Works 


When a host in the host layer wants to access a storage device on the SAN, it sends out a block-based access request for the storage device. The SCSI commands are encapsulated into FC frames, and the host HBA transmits the FC request to the storage layer via the fabric.

From the fabric layer the SAN switch then forwards the request to the storage array. In the storage array an SP receives the request, processes it, and then sends an acknowledgement back to the host through the same path.

For a more detailed explanation of this topic, please watch the YouTube video.


What Is NPV And How NPV Works In SAN Switch

In this blog we will discuss what NPV is and how NPV works in a SAN switch. NPV sounds similar to NPIV, but they differ in their use. Click the link to check what NPIV is and how NPIV works in a SAN switch. For all tutorials under SAN switch, click here.

what is npv and how npv works in san switch

What Is NPV In SAN Switch


NPV stands for N_Port Virtualization. It is a SAN switch level feature. An NPV-enabled switch acts as a proxy switch and forwards all FC services requests to its uplink switch. NPV is a Cisco-specific term; other vendors call it by different names. In Brocade terms it is called an Access Gateway switch.


How NPV Works In SAN Switch


In order to understand how NPV works, we should first understand what happens when a new switch connects to a fabric. In the usual case, when a SAN switch is connected to a fabric it gets a domain ID and participates in fabric services, such as assigning FCIDs to initiators.

In the case of NPV, if the SAN switch is NPV enabled then it will not have any domain ID and does not participate in any FC services. Whatever device is connected to the NPV-enabled switch, such as a host, switch or UCS, gets its FC services from the uplink switch.
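For reference, NPV is enabled as a switch-wide feature on the edge switch, with NPIV enabled on the upstream core switch. The commands below are a hedged sketch (the coreswitch and npvswitch prompts are placeholders). Note that enabling NPV is disruptive: it typically erases the switch configuration and reboots the switch.

coreswitch(config)# feature npiv
npvswitch(config)# feature npv
npvswitch# show npv status
npvswitch# show npv flogi-table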

what is and how npv works in san switch


Use Of NPV SAN Switch Network


There are various ways to use NPV in a SAN switch; below are some examples. One rare use of NPV is when a fabric requires more than 239 switches, or there is a domain ID limitation. Each switch in a fabric requires a domain ID, so as the fabric grows the number of SAN switches grows and the number of domain IDs grows with it. If the number of switches reaches 239, the domain IDs are exhausted. In such a scenario, if more switches are required, NPV is the solution for adding more switches to the fabric.

Another practical use of NPV is mixing SAN switches from multiple vendors in the same fabric. If you have a SAN fabric with switches from multiple vendors, then NPV is necessary. For example, you can connect a Cisco core switch with a Brocade SAN switch. Note that both switches must be in interoperability mode.

You can also use NPV when migrating SAN switches from one vendor to another, as a pre-deployment stage of the migration. For more details you can visit the link below.

Below is a YouTube video with a detailed explanation of the NPV feature and its uses.



What Is And How NPIV Works In SAN Switch

What is NPIV In SAN Switch

NPIV is a SAN switch port level virtualization feature. NPIV stands for N-Port ID Virtualization. NPIV allows multiple initiators to log in to the same single physical switch port. NPIV is mostly used in SAN networks where ESX hosts, blade servers or UCS deployments are present.

What Is And How NPIV Works Basics Of SAN Switch Part 5

How NPIV Works In SAN Switch

In order to understand how NPIV works in a SAN switch, let us first understand how SAN connectivity works without NPIV. In the usual case one initiator, or host, is connected to a single port of the SAN switch. During the FLOGI process the switch assigns an FCID to the initiator, and the initiator is logged in to the switch port.

Below is a diagram showing a simple connection of two servers to a SAN switch, where the two servers use two ports of the SAN switch to log in.

Basic Connection Between Server and SAN Switch


In the case of NPIV, multiple initiators try to log in to the same single physical port of the SAN switch. If the SAN switch port is NPIV enabled, then the switch port accepts the request from each initiator and assigns an FCID to each of them. Once all initiators receive an FCID, all of them are logged in to the same physical port.

The diagram below explains the connectivity and use of NPIV in a SAN switch network. Here a compute system, similar to a blade server or ESX host, has a single physical HBA which connects to a single SAN switch port. In the back end the compute system has multiple nodes which share the same physical HBA, and each node has a virtual HBA with a virtual WWPN.

During the FLOGI process each virtual WWPN sends a request to the same SAN switch port, and if the SAN switch port is NPIV enabled it accepts all the requests and assigns an FCID to each node or initiator. In this way multiple initiators are logged in to the same SAN switch port.
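On a Cisco MDS switch this behaviour can be observed with the commands below, a hedged sketch in which the switch prompt and interface fc1/1 are placeholders. With NPIV enabled, the FLOGI database shows several WWPNs and FCIDs logged in against the same interface.

switch(config)# feature npiv
switch# show flogi database interface fc1/1
switch# show fcns database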

What Is And How NPIV Works In SAN Switch

Where We Use NPIV In SAN switch

There are various uses of NPIV in a SAN network, but below are the most common ones.
If your environment has an ESX, UCS, or blade server deployment, then your SAN switch ports are almost certainly NPIV enabled.

The second common use of NPIV is that it allows masking of LUNs to individual VMs. This means that if you have multiple VMs on an ESX host, you can zone between the VMs' WWPNs and the storage array, and then present individual LUNs to particular VMs.

The third common use is that it reduces messy cabling in the SAN network. If you have a large number of servers, a lot of cable connections are required between the servers and the switch, which makes it difficult for data center admins to perform physical maintenance. Since NPIV reduces the number of connections between server and switch, physical management of the data center becomes simpler.

Below is one of the videos which explains the purpose of NPIV and how it works.


Zoning In Cisco MDS SAN Switch In Command Line

Zoning is a process of grouping initiator and target port WWPNs, which is performed in SAN ...