
Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Lenovo DM-series Storage Technical Workshop



ERC 1.3



Lenovo Accredited Learning Provider






Course Agenda - DM Series Storage - Theory
• Lenovo DM Series Storage
• ONTAP Cluster, Management, Configuration
• ONTAP Networking
• ONTAP Architecture
• Data Access NAS / SAN
• Snapshots
• Storage Efficiency – Deduplication, Compression, Compaction
• Disaster Recovery & Continuous Availability – SnapMirror, SnapVault, MetroCluster
• Additional ONTAP features – SnapLock, Quality of Service, FabricPool
• Cluster Maintenance






Course Agenda – DM Series Storage - Practice
– Exercise 0: Create ONTAP Cluster
– Exercise 1: Checking the Lab environment
– Exercise 2: Exploring ONTAP Management interfaces – CLI & GUI
– Exercise 3: Managing Physical and Logical Network Resources
– Exercise 4: Create Storage Virtual Machine
– Exercise 5: SAN configuration - FC
– Exercise 6: SAN configuration - iSCSI
– Exercise 7: NAS configuration - NFS
– Exercise 8: NAS configuration - CIFS
– Exercise 9: Managing Snapshot Copies
– Exercise 10: Managing Storage Efficiency
– Exercise 11: Managing FlexClone Volumes






Lenovo DM Series Storage






DM Series Unified Offering Overview

Positioning chart (UNIFIED offering, performance increasing along the axis; blue = equivalent NetApp models A400, A300, FAS8200, A220, FAS2750, FAS2720):
• All Flash – maximum performance, latency-sensitive apps, mission-critical workloads: DM5000F, DM7000F, DM7100F (NVMe)
• Hybrid – cost effective, mixed workloads, secondary sites: DM3000H, DM5000H, DM7000H, DM7100H



DM Series models are DM3000H, DM5000H, DM5000F, DM7000H, DM7000F, DM7100H and DM7100F.






Introduction to DM3000H and DM5000H/F
• 2U, 12x 3.5" HDDs (DM3000H); 2U, 24x 2.5" HDDs (DM5000H/F)
• CPU cores: 24
• Memory: 64GB, of which 8GB is used as NVMEM
• Onboard NVMe M.2 Flash Cache™ (Hybrid models)
• Onboard I/O ports (per controller):
  – 2x 10GbE: cluster interconnect
  – 4x UTA2 (16Gb FC or 10GbE): host connectivity; also configurable as 1GbE
    or 4x 10GBASE-T: host connectivity (Hybrid); will auto-negotiate to 1GbE
  – 2x 12Gb mini-SAS HD: external storage attachment (DM120S and DM240S)
• Persistent write log: NVlog is written to flash if an unplanned power loss occurs






Front views: ThinkSystem DM3000H/5000H/F (2U) and ThinkSystem DM7000H/F (3U).






Introducing the DM7000H/F
• 3U; no internal HDDs
• CPU cores: 32
• Memory: 256GB, of which 16GB is used for NVMEM
• Onboard 2TB NVMe M.2 Flash Cache™ (Hybrid models) per controller
• Onboard I/O ports (per controller):
  – 2x 10GbE: cluster interconnect
  – 4x UTA2 (16Gb FC or 10GbE): host connectivity; can also be configured to support 1GbE
  – 4x mini-SAS HD: external storage attachment (DM240S shelf supported)
  – 2x 10GBase-T: host connectivity
  – 2x PCIe slots
• Persistent write log: NVlog is written to flash if an unplanned power loss occurs






DM7000H

Lenovo ThinkSystem DM7000H is a scalable, unified, hybrid storage system that is designed to provide high performance, simplicity, capacity, security, and high availability for large enterprises. Powered by the ONTAP software, ThinkSystem DM7000H delivers enterprise-class storage management capabilities with a wide choice of host connectivity options, flexible drive configurations, and enhanced data management features. The ThinkSystem DM7000H is a perfect fit for a wide range of enterprise workloads, including big data and analytics, artificial intelligence, engineering and design, hybrid clouds, and other storage I/O-intensive applications. ThinkSystem DM7000H models are 3U rack-mount controller enclosures that include two controllers, 256 GB RAM, and 16 GB battery-backed NVRAM (128 GB RAM and 8 GB NVRAM per controller). Universal 1/10 GbE NAS/iSCSI or 4/8/16 Gb Fibre Channel (FC) ports and 1/10 GbE RJ-45 ports provide base host connectivity, with an option for additional 1/10 GbE, 25 GbE, or 40 GbE NAS/iSCSI, or 8/16/32 Gb FC connections with the adapter cards. A single ThinkSystem DM7000H Storage Array scales up to 480 drives with the attachment of Lenovo ThinkSystem DM240S 2U24 SFF, DM120S 2U12, and DM600S 4U60 LFF Expansion Enclosures. It also offers flexible drive configurations with the choice of 2.5-inch (SFF) and 3.5-inch (LFF) form factors, 10K rpm SAS and 7.2K rpm NL SAS hard disk drives (HDDs), and SAS solid-state drives (SSDs).






Introducing the DM7100H/F
• 4U; no internal HDDs
• CPU cores: 40
• Memory: 256GB + 32GB used for NVMEM
• Onboard 4TB NVMe M.2 Flash Cache™ (Hybrid models) per controller
• Onboard I/O ports (per controller):
  – 2x QSFP28 (100GbE RoCE): NVMe storage attachment
  – 2x SFP28 (25GbE RoCE): HA
  – 4x mini-SAS HD: external storage attachment
  – Host access – Ethernet configuration: 4x SFP28 (25/100GbE); Unified configuration: 4x SFP+ (16/32Gb FC)
• 5x PCIe slots
• Expansion:
  – Supports 480 SSDs
  – DM240N, max 2 enclosures (48 NVMe drives)
  – DM240S






DM7100H

Lenovo ThinkSystem DM7100H is a scalable, unified, hybrid storage system that is designed to provide high performance, simplicity, capacity, security, and high availability for large enterprises. Powered by the ONTAP software, ThinkSystem DM7100H delivers enterprise-class storage management capabilities with a wide choice of host connectivity options, flexible drive configurations, and enhanced data management features. The ThinkSystem DM7100H is a perfect fit for a wide range of enterprise workloads, including big data and analytics, artificial intelligence, engineering and design, hybrid clouds, and other storage I/O-intensive applications. A single ThinkSystem DM7100H Storage Array consists of the 4U rack-mount controller enclosure and one or more expansion enclosures. The controller enclosure includes two controllers, 256 GB RAM (128 GB RAM per controller), and 32 GB battery-backed NVRAM (16 GB NVRAM per controller). 25 GbE SFP28 NAS/iSCSI or 4/8/16 Gb Fibre Channel (FC) ports on the controller's mezzanine cards provide base host connectivity, with adapter card options for additional 1/10 GbE, 25 GbE, or 40/100 GbE NAS/iSCSI, or 8/16/32 Gb FC connections. The attachment of the Lenovo ThinkSystem DM240S 2U24 SFF, DM120S 2U12 LFF, and DM600S 4U60 LFF Expansion Enclosures to the controller enclosure provides scalability up to 720 drives and flexible drive configurations with the choice of 2.5-inch (SFF) and 3.5-inch (LFF) form factors, 10K rpm SAS and 7.2K rpm NL SAS hard disk drives (HDDs), and SAS solid-state drives (SSDs).






ThinkSystem DM3000H/5000H/F rear view:
• 2x 12Gb SAS ports
• 2x 10GbE ports for cluster interconnect or data networking
• Status LEDs for controller and non-volatile memory status
• Serial console ports for configuration and management
• Four Unified Target Adapter 2 ports per controller, operating as 8Gb/16Gb FC or GbE/10GbE
• Private management port for remote management
• Dual hot-plug power supplies, 80 Plus Platinum rated, providing enclosure and temperature monitoring






Initial setup is done through RS-232 (serial console).






I/O per controller:
• Unified configuration: SAS, management, micro USB, 10GbE, console, USB, and UTA2 ports
• Ethernet configuration: SAS, management, micro USB, 10GbE, console, USB, and 10GBASE-T (RJ45) ports






• 0a, 0b – SAS ports 1 and 2
• e0a, e0b – onboard 10Gb Ethernet for the cluster interconnect
• e0c, e0d, e0e, e0f – module with 4x 10Gb Ethernet
• e0c/0c, … – module: Ethernet SFP+ 10Gb or 8Gb/16Gb FC
• 0(a,b,c…) – onboard; 1(a,b,c…) – PCI card in slot 1






ThinkSystem DM7000H/F rear view:
• 4x 12Gb SAS ports per controller for disk shelf connectivity
• 2x 10GbE ports per controller for use as cluster interconnect or data network ports
• 4x Unified Target Adapter 2 ports per controller, operating as 8Gb/16Gb FC, GbE/10GbE, FCVI, or cluster interconnect ports
• 10GBase-T ports per controller for high-speed host (NFS, SMB, iSCSI) connectivity or cluster interconnect ports
• Serial console ports for configuration and troubleshooting
• Status LEDs for controller and nonvolatile memory status
• Redundant components: hot-swappable controllers, cooling fans, power supplies, and optics
• Two high-performance PCIe Gen 3 slots per controller to accommodate additional I/O connectivity, including 40GbE and 32Gb FC






DM7100 Hardware Overview
• 4U enclosure
• NVMe Flash Cache on Hybrid only
• Available in two possible configurations:
  – Ethernet: 4x 25Gb Ethernet (SFP28) ports
  – Fibre Channel: 4x 16Gb FC (SFP+) ports






The rear of the ThinkSystem DM7100F 4U controller enclosure includes the following components:
• Two redundant hot-swap controllers, each with the following ports:
  – Two 25 GbE SFP28 ports for direct-attach HA pair interconnect.
  – Two 40/100 GbE QSFP28 onboard ports for connections to the NVMe expansion enclosures.
  – Two 40/100 GbE QSFP28 ports on the SmartIO adapter in Slot 3 for direct-attach or switched cluster interconnect.
  – Four 12 Gb SAS x4 ports (Mini-SAS HD SFF-8644) for connections to the SAS expansion enclosures.
  – A mezzanine slot for one of the following mezzanine cards (a mezzanine card is required): four 25 GbE SFP28 host ports (NAS or iSCSI), or four 4/8/16 Gb FC SFP+ host ports (FC only).
  – Four slots for the following optional adapter cards (ports per adapter card):
    Host ports: two 1/10 GbE RJ-45 (NAS or iSCSI); four 10 GbE SFP+ (NAS or iSCSI); two 25 GbE SFP28 (NAS or iSCSI); two 40/100 GbE QSFP28 (NAS or iSCSI); four 8/16/32 Gb FC SFP+ (FC or NVMe/FC).
    Expansion ports: two 100 GbE QSFP28 (NVMe/RoCE); four 12 Gb SAS x4 (Mini-SAS HD SFF-8644).
    MetroCluster ports: two 4/8/16 Gb FC SFP+ MetroCluster FC ports (planned for the future).
  – One RJ-45 10/100/1000 Mb Ethernet port for out-of-band management.
  – Two serial console ports (RJ-45 and Micro-USB) as another means to configure the system.
  – One USB Type A port (read-only) for software updates.
• Four redundant hot-swap 1600 W (100 - 240 V) AC power supplies (IEC 320-C14 power connector) with integrated cooling fans.
• Controller status LEDs.






DM Series Enclosures

Controller enclosures:
• DM3000H – form factor 2U12, max 144 drives
• DM5000H/F – form factor 2U24, max 144 drives
• DM7000H/F – form factor 3U0, max 480 drives
• DM7100H/F – form factor 4U0, max 720 drives

Expansion enclosures:
• DM120S – 2U, 12Gb SAS, max 12 drives, 3.5" form factor; drives supported: 4TB/8TB/10TB 7K HDD
• DM240S – 2U, 12Gb SAS, max 24 drives, 2.5" form factor; drives supported: 960GB/3.84TB/7.68TB/15.36TB SSD; 900GB/1.2TB/1.8TB 10K HDD
• DM600S – 4U, 12Gb SAS, max 60 drives, 3.5" form factor; drives supported: 4TB/8TB/10TB 7K HDD
• DM240N – 2U, 100Gb Ethernet, max 24 drives, 2.5" form factor; drives supported: 960GB/3.84TB/7.68TB/15.36TB NVMe SSD



DM3000 systems come in a 2U12 form factor, supporting 12 3.5" drives internally; the maximum configuration is up to 144 drives. DM5000 systems come in a 2U24 form factor, supporting 24 2.5" drives internally; the maximum configuration is up to 144 drives. DM7000 systems come in a 3U0 form factor without support for any internal drives; the maximum configuration is up to 480 drives in Hybrid and up to 384 drives in All Flash. DM7100 systems come in a 4U0 form factor without support for any internal drives; the maximum configuration is up to 720 drives in Hybrid and up to 480 SSD drives or 432 SSD + 48 NVMe drives in All Flash.






DM Storage Expansions - Overview






The rear of the ThinkSystem DM600S 4U LFF expansion enclosure includes the following components:
• Two redundant hot-swap I/O modules; each I/O module provides four 12 Gb SAS x4 expansion ports (Mini-SAS HD SFF-8644) for connections to the controller enclosures and for connecting the expansion enclosures to each other.
• Two redundant hot-swap 2325 W (200 - 240 V) AC power supplies (IEC 320-C20 power connector).
• Two hot-swap cooling fan modules; each module has two fans. Note: A failed cooling module should be replaced as soon as possible.
• I/O module status LEDs.






DM240N Overview
• 4x 100GbE ports
• A maximum of two DM240N shelves is supported per array
• 24 NVMe SSD drive bays
• Each shelf is orderable with 12, 18, or 24 NVMe SSDs
• Standard NVMe SSDs are self-encrypting drives (SED)
  – More information on NVMe SED SSDs is on the following slide
  – 1.9TB and 3.8TB NVMe non-SED SSDs are available for restricted countries
• Rear view: two NVMe Shelf Modules (NSM), PSUs, status LEDs, and 100GbE ports






The front of the ThinkSystem DM240N 2U SFF NVMe expansion enclosure includes the following components:
• 24 SFF hot-swap drive bays
• Enclosure status LEDs
• Enclosure ID LED

The rear of the ThinkSystem DM240N 2U SFF NVMe expansion enclosure includes the following components:
• Two redundant hot-swap NVMe I/O modules; each NVMe I/O module provides two 100 GbE QSFP28 expansion ports for connections to the controller enclosures
• Two redundant hot-swap 1600 W (100 - 240 V) AC power supplies (IEC 320-C14 power connector) with integrated cooling fans
• NVMe I/O module status LEDs






Cluster Interconnects
• Default cluster interconnects are the 10GbE ports e0a & e0b
• Two-Node Switchless Cluster (TNSC) configurations use the 10GbE ports (e0a & e0b) for cluster interconnect connectivity
  – DM3000H and DM5000H/F have physically dedicated ports
• A switch is required for most cluster deployments of more than 2 nodes (max 16 nodes)






DM Series Cabling
• 2-node DM cluster cabling: 10 GbE SFP+ DAC cable
• 4-or-more-node DM cluster cabling: Ethernet switch required
• If there is no external storage (expansions), best practice is to connect the SAS ports between the controllers






DM Series Expansion Cabling






This cabling is called “top-down & bottom-up.” It provides fault tolerance and high availability because there are two chains: blue and yellow.






Host - Unified Target Adapter 2 (UTA2) Ports
• Four onboard UTA2 ports on DM3000H/5000H Series systems in Unified configuration:
  – Field-configurable as FC or CNA (CNA is for Ethernet)
  – Supports 16Gbps, 8Gbps, and 4Gbps speeds with the 16Gbps FC SFP+
  – 10GbE and 1GbE are fixed speeds and do not auto-range
  – Supports FC, iSCSI, and NAS protocols
• UTA2 ASICs are dual-port, and both ports share the same personality: FC or Ethernet (CNA)
• Two steps are needed to set the port personality in the field:
  – Choose either the FC or the CNA personality per ASIC, using the ONTAP 9 command: system node hardware unified-connect modify
  – Install the appropriate SFP+ or Twinax cable when using the CNA personality
• When set to FC mode, both ports on an ASIC can be set to either target or initiator; a controller reboot is required to change the Fibre Channel port type
• When set to CNA mode, 1GbE and 10GbE SFPs can be used together on the same ASIC






CNA – Converged network adapter






LSST - Lenovo Storage Sizing Tool






Lenovo Storage Sizing Tool - LSST

Landing page: https://storagesizingtool.lenovo.com
Login is via the Lenovo Business Partner Portal.
There are several links:
• Lenovo Storage Sizing Tool (LSST) - for DM Storage
• Lenovo Storage Sizing Tool (LSST) - for DE Storage
For DM storage, there are two possibilities:
• Size and Recommend
• Design a system manually









Lenovo Storage Sizing Tool - LSST

A separate LSST course is available. Next: 17.06.2020 10:00 - 13:00 CET.






ONTAP Cluster






Unified Storage and Multiprotocol Support (All Flash DM and Hybrid DM)
• Supported protocols: FC, iSCSI, CIFS/SMB, FCoE, and NFS, including parallel NFS (pNFS)
• Unified management of SAN and NAS workloads
• Built-in storage efficiency
• Nondisruptive operations
• Scale-out and scale-up expansion
• Advanced application integration
• Secure, multitenant management with QoS
• Integrated data protection






The ONTAP multiprotocol unified architecture supports multiple data access protocols concurrently on the same storage system, over a range of controller and disk types. Supported protocols include FC, iSCSI, FCoE, SMB (Server Message Block, also referred to as CIFS, the Common Internet File System), and NFS (Network File System), including parallel NFS (pNFS). Data replication and storage efficiency features are supported across all protocols in ONTAP software.






HA Pairs

High-availability (HA) pairs provide hardware redundancy that supports the following features:
• Nondisruptive operations (NDO) and nondisruptive upgrade (NDU)
• Fault tolerance
• Takeover and giveback of partner storage
• Elimination of most hardware components and cables as single points of failure
• Improved data availability






HA pairs provide hardware redundancy that is required for nondisruptive operations (NDO) and fault tolerance. The hardware redundancy gives each node in the pair the software functionality to take over and return partner storage. The features also provide the fault tolerance required to perform NDO during hardware and software upgrades or maintenance. A storage system has various single points of failure, such as certain cables or hardware components. An HA pair greatly reduces the number of single points of failure. If a failure occurs, the partner can take over and continue serving data until the failure is fixed. The controller failover function provides continuous data availability and preserves data integrity for client applications and users. Nodes can be mixed: for example, a DM5000F and a DM7000H. Disks can also be shared, but not between Lenovo and NetApp systems.
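From the clustershell, HA status can be checked and enabled per pair. A minimal sketch (the DM3000 cluster and node names follow this guide's lab examples and are hypothetical):

# Show each node's partner and whether takeover is currently possible:
DM3000::> storage failover show
# Enable storage failover; enabling it on one node of an HA pair
# enables it for the partner as well:
DM3000::> storage failover modify -node DM3000_1 -enabled true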






Cluster Networks
• Cluster interconnect:
  – Connection of nodes
  – Private network
• Management network:
  – Cluster administration
  – Ethernet network that can be shared with data
• Data network:
  – One or more networks for data access from clients or hosts
  – Ethernet, FC, or converged network






Clusters require one or more networks, depending on the environment. In multinode clusters, nodes need to communicate with each other over a cluster interconnect. In a two-node cluster, the interconnect can be switchless. Clusters with more than two nodes require a private cluster interconnect that uses switches. The management network is for cluster administration. Redundant connections to the management ports on each node and management ports on each cluster switch should be provided to the management network. In smaller environments, the management and data networks might be on a shared Ethernet network. For clients and hosts to access data, a data network is also required. The data network includes one or more networks that are primarily used for data access by clients or hosts. Depending on the environment, there might be an Ethernet, FC, or converged network. Data networks can consist of one or more switches or even redundant networks. (e0M is the node management port.)






Two-Node Switchless Cluster

In a two-node switchless cluster, the cluster ports are connected directly between the nodes.






In a two-node cluster, the cluster interconnect can be switchless. The slide shows a DM system that has two controllers installed in the chassis. Each controller has a set of onboard Gigabit Ethernet ports that can be used to connect to the cluster interconnect. The slide shows a two-node switchless cluster with a redundant pair of the ports cabled using 10GbE ports.






Switched Clusters

Diagram: the cluster interconnect is formed by two cluster switches connected by Inter-Switch Links (ISLs). More networking details are discussed in the Network Management module.






If your workload requires more than two nodes, the cluster interconnect requires switches. The cluster interconnect requires two dedicated switches for redundancy and load balancing. Inter-Switch Links (ISLs) are required between the two switches. From each node, there should always be at least two cluster connections, one to each switch. The required connections vary, depending on the controller model. After the cluster interconnect is established, you can add more nodes, as your workload requires. For more information about the maximum number and models of controllers supported, see the ONTAP Storage Platform Mixing Rules in the ONTAP Library. For more information about the cluster interconnect and connections, see the ONTAP Network Management Guide. Use dedicated switches – Lenovo switches, for example.






Hardware Setup

Connect the following hardware:
• HA interconnect (internal), which is used for:
  – Failover
  – Disk firmware
  – Heartbeat
  – Version information
• Controllers to disk shelves (SAS); define disk shelf IDs
• Controllers to the cluster interconnect (10GbE)
• Controllers to networks (UTP, optical)
• Controllers and disk shelves to power






Connect controllers to disk shelves. Verify that the shelf IDs are set properly. You have to connect the NVRAM HA cable between partners. The connections can be through the chassis, 10-GbE, or InfiniBand, depending on your storage controllers. Connect controllers to networks. Connect any tape devices that you might have. (You can connect tape devices later.) Connect controllers and disk shelves to power.






Basic Steps to Set Up a Cluster
1. Connect controllers, disks, and cables.
2. Set up shelf IDs (if you have external shelves).
3. Set up and configure nodes (serial cable).
4. Set up the cluster.
5. Create data aggregates.
6. Create a data storage virtual machine (SVM).
7. Create data volumes and configure protocols.






To set up a cluster, you must first connect the controllers, disks, and cables. Power on the networking first, then the disk shelves, and finally the controllers. If the system is new and does not require a software upgrade (or downgrade), simply start the setup process. If the system requires an upgrade or downgrade, install the software first. After the software installation is complete, initialize the disks. Initialization takes some time. When the system boots completely, run a setup procedure to set up and configure the system or cluster. After the configuration is complete, you can create storage resources. Step 5, data aggregates: an aggregate is a storage pool of HDD/SSD/NVMe disks, configured within an array and protected with RAID parity or multiparity.
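As a rough CLI illustration of steps 5 through 7 (a minimal sketch; the aggregate, SVM, and volume names are hypothetical):

# Step 5: create a data aggregate from 10 disks on the first node:
DM3000::> storage aggregate create -aggregate aggr1_node1 -node DM3000_1 -diskcount 10
# Step 6: create a data SVM with its root volume on that aggregate:
DM3000::> vserver create -vserver svm_Team1 -rootvolume svm_Team1_root -aggregate aggr1_node1 -rootvolume-security-style unix
# Step 7: create a data volume and mount it in the SVM namespace:
DM3000::> volume create -vserver svm_Team1 -volume vol1 -aggregate aggr1_node1 -size 100g -junction-path /vol1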






Creating a Cluster
• Cluster creation methods:
  – Cluster setup wizard, using the CLI (console baud rate 115200)
  – Guided Cluster Setup, using ThinkSystem System Manager
• CLI cluster setup wizard:
  1. Create the cluster on the first node.
  2. Join the remaining nodes to the cluster.
  3. Configure the cluster time and AutoSupport.
• Guided Cluster Setup:
  1. Use the CLI to configure the node management interface.
  2. Use a web browser to connect to the node management IP address.






After installing the hardware, you can set up the cluster by using the cluster setup wizard (through the CLI). Before setting up a cluster, you should use a cluster setup worksheet and record the values that you need during the setup process. Worksheets are available on the ONTAP Support website. If you use the System Setup software, enter the information that you collected on the worksheet as the software prompts you.

Whichever method you select, you begin by using the CLI to enter the cluster setup wizard from a single node in the cluster. The cluster setup wizard prompts you to configure the node management interface. Next, the cluster setup wizard asks whether you want to complete the setup wizard by using the CLI. If you press Enter, the wizard continues to use the CLI to guide you through the configuration. When you are prompted, enter the information that you collected on the worksheet. After creating the cluster, you use the node setup wizard to join nodes to the cluster one at a time. The node setup wizard helps you to configure each node's node-management interface.

After using the CLI to add all nodes, you also need to manually configure a few items. Synchronizing the time ensures that every node in the cluster has the same time and prevents CIFS and Kerberos failures. You need to decide where to send event notifications: to an email address, a syslog server, or an SNMP traphost. ONTAP also recommends that you configure the AutoSupport support tool.

If you choose to use Guided Cluster Setup instead of the CLI, use a web browser to connect to the node management IP that you configured on the first node. When you are prompted, enter the information that you collected on the worksheet. The Guided Cluster Setup discovers all of the nodes in the cluster, and then configures the nodes simultaneously.
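A minimal sketch of the CLI path (wizard prompts abbreviated; the time zone value is a hypothetical example):

# On the first node's console, start the wizard and answer "create";
# on each remaining node, run the same wizard and answer "join":
::> cluster setup
# After all nodes have joined, synchronize time and enable AutoSupport:
DM3000::> cluster date modify -timezone Europe/Berlin
DM3000::> system node autosupport modify -node * -state enable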






Guided Cluster Setup
• Set up the node management interface:
  – Boot a node that is part of the cluster.
  – From the node management IP interface for the node, launch the cluster setup wizard.
• Continue the cluster setup from the following URL:
  – https://






From the cluster setup wizard in ONTAP 9.x software, you can continue using the CLI or resume the cluster setup from a web browser.






GUI Cluster Setup

Screenshot: select the option to autoconfigure storage.






Cluster Administrators
• Manage the entire cluster:
  – All cluster resources
  – SVM creation and management
  – Access control and roles
  – Resource delegation
• Use login credentials:
  – Username (default): admin
  – Password: the password that you created during cluster setup






After you use System Setup to create the cluster, a link is provided to launch ThinkSystem Storage Manager. Log in as the cluster administrator to manage the entire cluster. You manage all cluster resources, the creation and management of SVMs, access control and roles, and resource delegation. To log in to the cluster, use the default username “admin” and the password that you configured during cluster creation.






Managing Resources in a Cluster
• ThinkSystem Storage Manager:
  – Visual representation of the available resources
  – Wizard-based resource creation
  – Best-practice configurations
  – Limited advanced operations
• The CLI:
  – Manual or scripted commands
  – Manual resource creation that might require many steps
  – Ability to focus and switch quickly among specific objects






You can use many tools to create and manage cluster resources. Each tool has advantages and disadvantages. ThinkSystem Storage Manager is a web-based UI that provides a visual representation of the available resources. Resource creation is wizard-based and adheres to best practices. However, not all operations are available. Some advanced operations might need to be performed by using commands in the CLI. You can use the CLI to create and configure resources. Enter commands manually or through scripts. Instead of the wizards that System Manager uses, the CLI might require many manual commands to create and configure a resource. Although manual commands give the administrator more control, manual commands are also more prone to mistakes that can cause issues. One advantage of using the CLI is that the administrator can quickly switch focus without needing to move through System Manager pages to find different objects.






ThinkSystem Storage Manager (9.7)






Clustershell

The default CLI, or shell, in ONTAP is called the clustershell and has the following features:
• Inline help
• Online manual pages
• Command history
• Ability to reissue a command
• Keyboard shortcuts
• Queries and UNIX-style patterns
• Wildcards






The cluster has different CLIs or shells for different purposes. This course focuses on the clustershell, which starts automatically when you log in to the cluster. Clustershell features include inline help, an online manual, history and redo commands, and keyboard shortcuts. The clustershell also supports queries and UNIX-style patterns. Wildcards enable you to match multiple values in command-parameter arguments.
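A few of these features in action (a minimal sketch; the SVM and volume names reuse this guide's hypothetical lab names):

# Online manual page for a command:
DM3000::> man volume create
# Wildcards and UNIX-style patterns match multiple objects:
DM3000::> volume show -vserver svm_Team* -volume *root*
# Queries filter on field values:
DM3000::> volume show -state online -fields size
# Inline help: type ? at any point in a command to list valid options.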






Hardware Management Interface
• Service Processor (SP/BMC), available through Ethernet or serial console connections, even without ONTAP installed
  – Shares the same Ethernet interface with e0M
• The Service Processor is used to generate system cores
  – A non-maskable interrupt (NMI) reset button is not present
• Actively manages some hardware:
  – Advanced sensor management, FRU tracking, fans
  – Maintains the system event log (SEL) for hardware troubleshooting
• Access to the ONTAP console using the "system console" command






Lenovo ONTAP Choices

Fulfil a wide range of customer needs with two ONTAP offerings:

ONTAP Fundamentals
• Available on DM3000H & DM5000H
• Includes:
  – Unified Storage
  – Asynchronous replication
  – SnapRestore
  – Scale up to 84 drives
ONTAP Fundamentals is a great option for someone who wants a unified storage solution with data efficiency features, snapshots, and asynchronous replication.

ONTAP Premium
• Available on all DM Series
• Everything in Fundamentals plus:
  – Synchronous replication
  – SnapCenter
  – MetroCluster
  – Scale up and out
ONTAP Premium is ideal for customers who require systems with high availability, application-aware snapshots, and enhanced management capabilities.






Licensing
• DM Storage comes by default with all licenses uploaded and enabled in ONTAP.
• If the cluster is deleted (configuration wiped), all licenses are gone.
• Licenses are available via the Lenovo LKMS website: https://fod2.lenovo.com
  – After login, use the "Retrieve history" option.
  – In "Search type*", select "Search history via UID".
  – Then in "Search value", enter the serial number of each controller.
• You will have to repeat this step for each controller, as licenses are issued per controller.






Each controller has its own serial number (S/N), and every license is bound to that S/N, so a cluster needs a set of licenses matching the number of its nodes.
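Checking and re-installing licenses from the clustershell (a minimal sketch; the key shown is a placeholder, not a real license code):

# List the licenses installed for each controller serial number:
DM3000::> system license show
# Add a license key retrieved from the LKMS website:
DM3000::> system license add -license-code <28-character-key>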






Storage Virtual Machine - Types

Data SVM:
• Provides client access to user data
• Includes data volumes, LIFs, protocols, and access control
• Is used for multiple use cases:
  – Secure multitenancy
  – Separation of resources and workloads
  – Delegation of management

Admin SVM:
• Represents the cluster
• One per cluster
• Owns cluster-scoped resources

Node SVM:
• Represents an individual node
• One per node in the cluster
• Owns node-scoped resources



DM3000::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume         Aggregate
----------- ------- ---------- ---------- ----------- -------------- ----------
DM3000      admin   -          -          -           -              -
DM3000_1    node    -          -          -           -              -
DM3000_2    node    -          -          -           -              -
svm_Team1   data    default    running    running     svm_Team1_root data_001
svm_Team2   data    default    running    running     svm_Team2_root data_001
svm_Team3   data    default    running    running     svm_Team3_root data_002
6 entries were displayed.






A data SVM contains data volumes and LIFs that serve data to clients. Unless otherwise specified, the term “SVM” refers to a data SVM. In the CLI, SVMs are displayed as “Vservers.” SVMs might have one or more FlexVol volumes or scalable FlexGroup volumes.






SVM Benefits
• Unified storage (with FlexVol volumes):
  – NAS protocols: CIFS and NFS
  – SAN protocols: iSCSI and FC (including FCoE)
• Nondisruptive operations (NDO) and nondisruptive upgrade (NDU):
  – Resource migration
  – Resource availability during hardware and software upgrades
• Secure multitenancy:
  – Partitioning of a storage system
  – Isolation of data and management
  – No data flow among SVMs in the cluster
• Delegation of management:
  – User authentication and administrator authentication
  – Access assigned by the cluster administrator
• Scalability:
  – Addition and removal
  – Modification on demand to meet data-throughput and storage requirements






Logical Storage
• Storage virtual machine (SVM):
  – Is a container for data volumes, which store client data
  – Enables access to client data through a LIF
• Volume:
  – Is a logical data container for files or LUNs
  – An SVM can contain one or more FlexVol volumes or one scalable infinite volume
• LIF:
  – Represents the network address that is associated with a port
  – Provides access to client data






A storage virtual machine (SVM) contains data volumes and LIFs. The data volumes store client data which is accessed through a LIF. A volume is a logical data container that might contain files or LUNs. SVMs might have one or more FlexVol volumes or a single, scalable infinite volume. A LIF represents the IP address or WWPN that is associated with a port. Data LIFs are used to access client data.






SVM with FlexVol Volumes
• FlexVol volume:
  – In a NAS environment, represents the file system
  – In a SAN environment, contains LUNs
• Qtree:
  – Partitions FlexVol volumes into smaller segments
  – Enables management of quotas, security styles, and CIFS opportunistic lock (oplock) settings
• LUN:
  – Is a logical unit that represents a SCSI disk






An SVM can contain one or more FlexVol volumes (or simply volumes). In a NAS environment, volumes represent the file system where clients store data. In a SAN environment, LUNs are created in the volumes for a host to access. Qtrees partition a volume into smaller segments, much like directories. Qtrees can also be used to manage quotas, security styles, and CIFS opportunistic lock (oplock) settings. A LUN is a logical unit that represents a SCSI disk. In a SAN environment, the host operating system controls the reads and writes for the file system.
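A minimal sketch of creating these objects (all names and sizes are hypothetical):

# Partition an existing volume with a qtree and set its security style:
DM3000::> volume qtree create -vserver svm_Team1 -volume vol1 -qtree q1 -security-style ntfs
# Create a LUN inside the volume for a SAN host:
DM3000::> lun create -vserver svm_Team1 -path /vol/vol1/lun1 -size 10g -ostype linux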






Cluster Failover






If one node fails, the other controller automatically takes over its storage. Enabling storage failover (SFO) is done within pairs, regardless of how many nodes are in the cluster. For SFO, the HA pairs must be of the same model. The cluster itself can contain a mixture of models, but each HA pair must be homogeneous. The version of the Data ONTAP operating system must be the same on both nodes of the HA pair, except for the short period of time during which the pair is upgraded. Two HA interconnect cables are required to connect the NVRAM cards (except for FAS and V-Series 32x0 models with single-enclosure HA). Storage failover can be enabled on either node in the pair and can be initiated from any node in the cluster.
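A planned takeover and giveback from the clustershell (a minimal sketch; node names are hypothetical):

# Take over the partner's storage, e.g., before maintenance on DM3000_2:
DM3000::> storage failover takeover -ofnode DM3000_2
# Return the storage once the partner is healthy again:
DM3000::> storage failover giveback -ofnode DM3000_2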






ONTAP Networking






ONTAP Network Types
• Cluster interconnect:
  – Connection of nodes
  – Private network
• Management network:
  – Cluster administration network
  – Possibly a shared Ethernet network with data
• Data network:
  – One or more networks that are used for data access from clients or hosts
  – Ethernet, FC, or converged network






In multinode clusters, nodes need to communicate with each other over a cluster interconnect. In a two-node cluster, the interconnect can be switchless. When you add more than two nodes to a cluster, a private cluster interconnect that uses switches is required. The management network is used for cluster administration. Redundant connections to the management ports on each node and management ports on each cluster switch should be provided to the management network. In smaller environments, the management and data networks might be on a shared Ethernet network. For clients and hosts to access data, a data network is required. The data network can be composed of one or more networks. Depending on the environment, the network might be an Ethernet, FC, or converged network. Data networks can consist of one or more switches or redundant networks.






ONTAP Networking - Logical

Diagram: LIFs (for example, DM3000-mgmt and DM3000-data1) sit on top of optional virtual ports – VLANs (for example, a0a-50 and a0a-80) and interface groups (ifgroups, for example, a0a) – which are built on physical network ports (for example, e2a and e3a).






Nodes have physical ports that are available for cluster traffic, management traffic, and data traffic. The ports need to be configured appropriately for the environment. The example shows Ethernet ports. Physical ports also include FC ports and Unified Target Adapter (UTA) ports. Physical Ethernet ports can be used directly or combined by using interface groups (ifgroups). Also, physical Ethernet ports and ifgroups can be segmented by using virtual LANs (VLANs). VLANs and ifgroups are considered virtual ports but are treated like physical ports. Unless specified, the term network port includes physical ports, ifgroups, and VLANs.






DM7000 Controller - Physical Ports Example
• SAS ports
• 10GbE cluster ports: e0a, e0b
• 10GBase-T data ports: e0c, e0d
• Additional PCI cards: e.g., e1a, e1b or 2a, 2b
• e0M & SP management port
• UTA2 data ports: CNA – e0e, e0f, e0g, e0h; FC – 0e, 0f, 0g, 0h
• Console port






• 4x 10-GbE ports for cluster interconnects:
  – Supported: two cluster interconnect (e0a and e0c) and two data (e0b and e0d) ports
  – Recommended: four cluster interconnects (switched clusters only)
• 4x Unified Target Adapter 2 (UTA2) ports that can be configured as either 10-GbE or 16Gbps FC for data:
  – Only for data (not cluster interconnects)
  – Port pairs must be set the same: e0e/0e and e0f/0f, and e0g/0g and e0h/0h, are port pairs.
  – You can select from FC enhanced small form-factor pluggable (SFP+), 10-GbE SFP+, or Twinax Ethernet.
  – The command to set the port mode is ucadmin.
• Four GbE ports for data.
• One management port (default for the node-management network):
  – e0M runs at GbE.
  – The SP runs at 10/100.
• One private management port that is used as an alternate control path (ACP) for SAS shelves.
• One console port that can be configured for the Service Processor (SP):
  – To toggle from the serial console into the SP, press Ctrl+G.
  – To toggle back, press Ctrl+D.






Physical Port Identification
• Ethernet ports are named e<slot><port>:
  – e0a is the first port on the controller motherboard.
  – e2a is a port on a card in slot 2.
• FC ports are named <slot><port>:
  – 0a is the first port on the controller motherboard.
  – 1a is a port on a card in slot 1.
• UTA ports have both an Ethernet name and an FC name, e<slot><port>/<slot><port>:
  – e0a/0a is the first port on the controller motherboard.
  – e1a/1a is a port on a card in slot 1.
  – show commands return only the FC label names (even in Ethernet mode).






Port names consist of two or three characters that describe the port type and location. Remember port-naming conventions on the network interfaces. Ethernet ports: The first character describes the port type and is always e to represent Ethernet. The second character is a numeral that identifies the slot in which the port adapter is located. The numeral 0 (zero) indicates that the port is on the node motherboard. The third character indicates the port position on a multiport adapter. For example, the port name e0b indicates the second Ethernet port on the motherboard, and the port name e3a indicates the first Ethernet port on an adapter in slot 3. FC ports: The name consists of two characters (dropping the e) but otherwise follows the same naming convention as Ethernet ports. For example, the port name 0b indicates the second FC port on the motherboard, and the port name 3a indicates the first FC port on an adapter in slot 3. UTA ports: A UTA port is physically one port but can pass either Ethernet traffic or FC traffic. Therefore, UTA ports are labeled with both the Ethernet name and the FC name. For example, the port name e0b/0b indicates the second UTA port on the motherboard, and the port name e3a/3a indicates the first UTA port on an adapter in slot 3. Note: UTA adapter ports are listed by only the FC label name when you use the ucadmin command, even when the personality is configured as 10-GbE.






Modifying Network Port Attributes

Set the UTA2 port personality. First remove any LIFs and take the port offline:

DM3000::> system node hardware unified-connect modify -node DM3000_1 -adapter 0e -mode fc|cna
DM3000::> system node reboot -node DM3000_1

Insert the proper optical module before changing modes.






UTA ports are managed in a similar way and require a reboot to take effect. The adapter must also be offline before any changes can be made.
• When the adapter type is initiator, use the run local storage disable adapter command to take the adapter offline.
• When the adapter type is target, use the network fcp adapter modify command to take the adapter offline.

For more information about configuring FC ports, see the ONTAP SAN Administration Guide for your release, or attend the ONTAP University SAN Implementation course.
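For example, taking a target-mode adapter offline and verifying the personality after the reboot (a minimal sketch; node and adapter names are hypothetical):

# Take the target adapter offline before changing its personality:
DM3000::> network fcp adapter modify -node DM3000_1 -adapter 0e -status-admin down
# After the reboot, confirm the current (and any pending) mode and type:
DM3000::> system node hardware unified-connect show -node DM3000_1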






ifgroups
• Combination of one or more Ethernet interfaces
• Three ifgroup modes:
  – Single-mode (active-standby)
  – Static multimode (active-active)
  – Dynamic multimode, using Link Aggregation Control Protocol (LACP)
• Naming syntax: a<number><letter> (for example, a0a)






An ifgroup combines one or more Ethernet interfaces, which can be implemented in one of three ways. In single mode, one interface is active and the other interfaces are inactive until the active link goes down. The standby paths are used only during a link failover. In static multimode, all links are active. Therefore, static multimode provides link failover and load-balancing features. Static multimode complies with the IEEE 802.3ad (static) standard and works with any switch that supports the combination of Ethernet interfaces. However, static multimode does not have control packet exchange. Dynamic multimode is similar to static multimode but complies with the IEEE 802.3ad (dynamic) standard. When switches that support Link Aggregation Control Protocol (LACP) are used, the switch can detect a loss of link status and dynamically route data. ONTAP recommends that when you configure ifgroups, you use dynamic multimode with LACP and compliant switches. All modes support the same number of interfaces per ifgroup, but the interfaces in the group should always be the same speed and type. The naming syntax for interface groups is the letter “a”, followed by a number, followed by a letter; for example, a0a. Vendors might use terms such as link aggregation, port aggregation, trunking, bundling, bonding, teaming, or EtherChannel.
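Creating a dynamic multimode (LACP) ifgroup might look like this (a minimal sketch; node, ifgroup, and member port names are hypothetical):

# Create the LACP interface group with IP-based load balancing:
DM3000::> network port ifgrp create -node DM3000_1 -ifgrp a0a -distr-func ip -mode multimode_lacp
# Add member ports of the same speed and type:
DM3000::> network port ifgrp add-port -node DM3000_1 -ifgrp a0a -port e0e
DM3000::> network port ifgrp add-port -node DM3000_1 -ifgrp a0a -port e0f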






ifgroup Considerations
• Because of the limited capabilities of single mode, you should not use a single-mode ifgroup in ONTAP software.
• Use dynamic multimode (LACP) when you use ifgroups, to take advantage of performance and resiliency functionality:
  – An LACP-enabled switch is required.
  – All of the interfaces in the group are active, share a MAC address, and use load balancing for outbound (not inbound) traffic.
  – A single host does not achieve larger bandwidth than any of the constituent connections (two 10-GbE links do not equal one 20-GbE link).
  – Dynamic multimode might not have any advantages for iSCSI hosts.






You can configure ifgroups to add a layer of redundancy and functionality to an ONTAP software environment. You can also use ifgroups with a failover group to help protect against Layer 2 and Layer 3 Ethernet failures.

A single-mode ifgroup is an active-passive configuration (one port sits idle, waiting for the active port to fail) and cannot aggregate bandwidth. ONTAP advises against using the single-mode type of ifgroup. To achieve as much redundancy, you can use failover groups or one of the two multimode methods.

You might use a static multimode ifgroup if you want to use all the ports in the group to simultaneously service connections. Static multimode does differ from the type of aggregation that happens in a dynamic multimode ifgroup, in that no negotiation or automatic detection happens within the group in regard to the ports. A port sends data when the node detects a link, regardless of the state of the connecting port on the switch side.

You might use a dynamic multimode ifgroup to aggregate the bandwidth of more than one port. LACP monitors the ports on an ongoing basis to determine the aggregation capability of the ports and continuously provides the maximum level of aggregation capability achievable between a given pair of devices. All the interfaces in the group are active, share a MAC address, and load-balance outbound traffic. However, a single host does not necessarily achieve larger bandwidth, exceeding the capabilities of any constituent connections. For example, adding four 10-GbE ports to a dynamic multimode ifgroup does not result in one 40-GbE link for one host, because of the way that both the switch and the node manage the aggregation of the ports in the ifgroup. A recommended best practice is to use the dynamic multimode type of ifgroup so that you can take advantage of all the performance and resiliency functionality that the ifgroup algorithm offers.

You can use two methods to achieve path redundancy when using iSCSI in ONTAP software: by using ifgroups or by configuring hosts to use multipath I/O over multiple distinct physical links. Because multipath I/O is required, ifgroups might have little value.






Ports, ifgroups, and VLANs

Diagram: a LIF can be homed on a physical port, on a VLAN created on a physical port, on an ifgroup, or on a VLAN created on an ifgroup.

NOTE:
• VLANs and ifgroups cannot be created on cluster interconnect ports.
• The e0M/SP port does not support VLANs.






Most small to medium environments and FC environments use physical ports. Ethernet environments in which multiple physical networks are impossible often use VLANs to separate management traffic from data traffic. VLANs are also often used to separate differing workloads. For example, you might separate NAS traffic from iSCSI traffic for performance and security reasons. In Ethernet environments where many application servers or hosts share switches and ports, dynamic multimode ifgroups of four Ethernet ports per node are commonly used for load balancing. Environments that use ifgroups typically also use VLANs to segment the network. Segmentation is typical for service providers with multiple clients that require the bandwidth that ifgroups provide and the security that VLANs provide. And lastly, it is not uncommon for different types of ports to be used in mixed environments that have various workloads. For example, an environment might use ifgroups with VLANs that are dedicated to NAS protocols, a VLAN that is dedicated to management traffic, and physical ports for FC traffic. ifgroups and VLANs cannot be created on cluster interconnect ports.






VLANs

Diagram: VLAN port e0a-40 connects through Switch 1 and Switch 2 to a router; VLAN10 carries client traffic, VLAN20 Tenant B, VLAN30 Tenant A, and VLAN40 management traffic (management switch).






A port or ifgroup can be subdivided into multiple VLANs. Each VLAN has a unique tag that is communicated in the header of every packet. The switch must be configured to support VLANs and the tags that are in use. In ONTAP software, a VLAN ID is configured into the name. For example, VLAN e0a-70 is a VLAN with tag 70 configured on physical port e0a. VLANs that share a base port can belong to the same or different IPspaces, and the base port can be in a different IPspace than the VLANs that share the base port. IPspaces are covered later in this module.
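Creating the VLAN from the example above (a minimal sketch; the node name is hypothetical):

# The VLAN tag is encoded in the name: tag 70 on physical port e0a.
DM3000::> network port vlan create -node DM3000_1 -vlan-name e0a-70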






IPspaces

Diagram: the Default IPspace (SVM 1 and SVM 2, default routing table, 192.168.0.0, storage SP point of presence 192.168.0.5) coexists with a Company A IPspace (SVM_A-1 and SVM_A-2, Company A routing table, 10.0.0.0) and a Company B IPspace (SVM_B-1 and SVM_B-2, Company B routing table, 10.0.0.0). Both companies can use the same address (10.0.5.10) because each IPspace maintains its own routing table.






The IPspace feature enables clients from more than one disconnected network to access a storage system or cluster, even if the clients use the same IP address. An IPspace defines a distinct IP address space in which virtual storage systems can participate. IP addresses that are defined for an IPspace are applicable only within the IPspace. A distinct routing table is maintained for each IPspace; no cross-IPspace traffic routing occurs. Each IPspace has a unique assigned loopback interface, and the loopback traffic on each IPspace is isolated from the loopback traffic on other IPspaces.

Example: a storage service provider (SP) needs to connect customers of companies A and B to a storage system on the storage SP premises. The storage SP creates SVMs on the cluster, one per customer, and then provides one dedicated network path from one SVM to the A network and another dedicated network path from the other SVM to the B network. The deployment would work if both companies used nonprivate IP address ranges. However, because the companies use the same private addresses, the SVMs on the cluster at the storage SP location have conflicting IP addresses. To overcome the problem, two IPspaces are defined on the cluster, one per company. Because a distinct routing table is maintained for each IPspace, and no cross-IPspace traffic is routed, the data for each company is securely routed to the respective network, even if the two SVMs are configured in the 10.0.0.0 address space.
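The corresponding configuration (a minimal sketch; the IPspace and SVM names follow the example and are hypothetical):

# Create one IPspace per company:
DM3000::> network ipspace create -ipspace CompanyA
DM3000::> network ipspace create -ipspace CompanyB
# SVMs are then created inside the appropriate IPspace:
DM3000::> vserver create -vserver SVM_A-1 -ipspace CompanyA -rootvolume svm_a1_root -aggregate data_001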






Broadcast Domains Overview
• Broadcast domains enable you to group network ports that belong to the same Layer 2 network.
• An SVM can then use the ports in the group for data or management traffic.
• Broadcast domains can contain physical ports, ifgroups, and VLANs.

(Figure: a four-node cluster with ports grouped into a Default broadcast domain, a Company A broadcast domain, and a Company B broadcast domain.)






Broadcast domains are often used when a system administrator wants to reserve specific ports for use by a certain client or group of clients. A broadcast domain should include ports from many nodes in the cluster, to provide high availability for the connections to SVMs.

The figure shows the ports that are assigned to three broadcast domains in a four-node cluster:
• The Default broadcast domain, which was created automatically during cluster initialization, is configured to contain a port from each node in the cluster.
• The Company A broadcast domain was created manually and contains one port each from the nodes in the first HA pair.
• The Company B broadcast domain was created manually and contains one port each from the nodes in the second HA pair.
• The Cluster broadcast domain was created automatically during cluster initialization but is not shown in the figure.

The system administrator created the two broadcast domains specifically to support the customer IPspaces.
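As a sketch, a broadcast domain like Company A's can be created in its IPspace with one port from each node of the first HA pair (the port names here are assumptions):

DM3000::> network port broadcast-domain create -ipspace ipspace_companyA -broadcast-domain bd_companyA -mtu 1500 -ports DM3000_1:e0d,DM3000_2:e0d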






Failover Groups Versus Failover Policies
A failover group is a list of ports (physical or virtual):
 Defines the failover targets for a LIF
 Is automatically created when you create a broadcast domain
 Does not apply to iSCSI or FC SAN LIFs
A failover policy restricts the list of ports within a failover group, determining the ports to which a LIF can migrate.
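Creating a broadcast domain automatically creates a failover group of the same name; a custom failover group and policy can also be assigned to a NAS LIF, roughly as follows (SVM, LIF, and port names are placeholders):

DM3000::> network interface failover-groups create -vserver svm1 -failover-group fg_nas -targets DM3000_1:e0d,DM3000_2:e0d
DM3000::> network interface modify -vserver svm1 -lif nas_lif1 -failover-group fg_nas -failover-policy broadcast-domain-wide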






Subnets
• Subnets enable the allocation of specific blocks, or pools, of IP addresses for easier LIF creation.
• A subnet is created within a broadcast domain and contains a pool of IP addresses that belong to the same Layer 3 subnet.
• Subnets are recommended for easier LIF creation.

(Figure: the Default broadcast domain holds a subnet with the pool 192.168.0.1 to 192.168.0.100; the Company A and Company B broadcast domains hold subnets with pools such as 10.1.2.5 to 10.1.2.100, from which LIF addresses such as 10.1.2.5 and 10.1.2.20 are drawn.)






Subnets enable you to allocate specific blocks, or pools, of IP addresses for your ONTAP network configuration. The allocation enables you to create LIFs more easily when you use the network interface create command, by specifying a subnet name instead of specifying IP address and network mask values. IP addresses in a subnet are allocated to ports in the broadcast domain when LIFs are created. When LIFs are removed, the IP addresses are returned to the subnet pool and are available for future LIFs. You should use subnets because subnets simplify the management of IP addresses and the creation of LIFs. Also, if you specify a gateway when defining a subnet, a default route to that gateway is added automatically to the SVM when a LIF is created using that subnet. More: https://en.wikipedia.org/wiki/Subnetwork
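A minimal sketch of creating such a subnet, using the address range from the figure (the subnet name is a placeholder):

DM3000::> network subnet create -ipspace Default -broadcast-domain Default -subnet-name sn_default -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.1-192.168.0.100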






Subnets: Subnets and gateways
• When creating subnets, consider the following information:
 – If a gateway is specified, then when a LIF is created using the subnet, a default route is added automatically to the SVM.
 – If you do not use subnets, or if you do not specify a gateway when defining a subnet, then you must use the route create command to add a route to the SVM manually.
• If you add or change the gateway IP address, the following changes occur:
 – The modified gateway is applied to new SVMs when a LIF is created in an SVM that uses the subnet.
 – A default route to the gateway is created for the SVM (if the route does not already exist).
NOTE: When you change the gateway IP address, you might need to manually add a new route to the SVM.






IPspace Review

(Figure: an IPspace contains a broadcast domain and a storage virtual machine (SVM); the broadcast domain contains a subnet with the IP address pool 10.0.5.1 to 10.0.5.100 and a port; a LIF on the SVM, here 10.0.5.10, draws its address from the subnet and is homed on the port.)






ONTAP software has a set of features that work together to enable multitenancy. An IPspace is a logical container that is used to create administratively separate network domains. An IPspace defines a distinct IP address space that contains storage virtual machines (SVMs). The IPspace contains a broadcast domain, which enables you to group network ports that belong to the same Layer 2 network. The broadcast domain contains a subnet, which enables you to allocate a pool of IP addresses for your ONTAP network configuration. When you create a LIF on the SVM, the LIF represents a network access point to the node. You can manually assign the IP address for the LIF. If a subnet is specified, the IP address is automatically assigned from the pool of addresses in the subnet, much like how a Dynamic Host Configuration Protocol (DHCP) server assigns IP addresses.
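For example, a data LIF that draws its address from a subnet can be created roughly as follows (SVM, node, port, and subnet names are placeholders):

DM3000::> network interface create -vserver svm1 -lif nas_lif1 -role data -data-protocol nfs -home-node DM3000_1 -home-port e0d -subnet-name sn_default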






LIFs
• An IP address or worldwide port name (WWPN) is associated with a LIF:
 – If subnets are configured (recommended), then when the LIF is created, IP addresses are automatically assigned.
 – If subnets are not configured, then when the LIF is created, IP addresses must be manually assigned.
 – When an FC LIF is created, WWPNs are automatically assigned.
• One node-management LIF exists per node.
• One cluster-management LIF exists per cluster.
• Cluster LIFs depend on the cluster configuration.
• Multiple data LIFs are enabled per port (client-facing for NFS, CIFS, iSCSI, and FC access).
• For intercluster peering, intercluster LIFs must be created on each node.
 – Used for cluster-to-cluster connections






Data LIFs can have a many-to-one relationship with network ports: Many data IP addresses can be assigned to a single network port. If the port becomes overburdened, NAS data LIFs can be transparently migrated to different ports or nodes. Clients know the data LIF IP address but do not know which node or port hosts the LIF. If a NAS data LIF is migrated, the client might unknowingly be contacting a different node. The NFS mount point or CIFS share is unchanged.






Network Interfaces Review

(Figure: three layers of network interfaces. The logical layer holds LIFs such as Team1-mgmt and Team1-data1; the optional virtual layer holds VLANs such as a0a-50 and a0a-80 on ifgroup a0a; the physical layer holds the ifgroup a0a and network ports such as e2a and e3a.)






A LIF is associated with a physical port, an ifgroup, or a VLAN. Virtual storage systems (VLANs and SVMs) own the LIFs. Multiple LIFs that belong to multiple SVMs can reside on a single port.






ONTAP Architecture






ONTAP Storage Architecture

(Figure: the logical layer consists of FlexVol volumes, which contain files and LUNs; the physical layer consists of aggregates, which are built from RAID groups of disks.)






The ONTAP software storage architecture uses a dynamic virtualization engine in which data volumes are dynamically mapped to physical space. In ONTAP software, disks are grouped into RAID groups. An aggregate is a collection of physical disk space that contains one or more RAID groups. Each aggregate has a RAID configuration and a set of assigned disks. The disks, RAID groups, and aggregates make up the physical storage layer. Within each aggregate, you can create one or more FlexVol volumes. A FlexVol volume is an allocation of disk space that is a portion of the available space in the aggregate. A FlexVol volume can contain files or LUNs. The FlexVol volumes, files, and LUNs make up the logical storage layer.






Disks and Aggregates
• What happens when a disk is inserted into a system:
 – The disk is initially unassigned (unowned).
 – By default, disk ownership is assigned automatically.
 – Disk ownership can be changed.
• What happens after ownership is assigned:
 – The disk functions as a hot spare.
 – The disk can be assigned to an aggregate.






When a disk is inserted into a storage system disk shelf or when a new shelf is added, the controller takes ownership of the disk by default. In a high-availability (HA) pair, only one controller can own a particular disk, but ownership can be manually assigned to either controller. When an aggregate is created or disks are added to an aggregate, the spare disks are used.
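A sketch of checking for unowned disks and assigning ownership manually (the disk ID here is a placeholder):

DM3000::> storage disk show -container-type unassigned
DM3000::> storage disk assign -disk 1.0.11 -owner DM3000_1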






Create an Aggregate
Information to provide:
• Aggregate name
• Disk type
• Owning node
• Number of disks
• RAID type

DM3000::> aggr create -aggregate rtp01_fcal_001 -node DM3000_1 -disktype fcal -diskcount 8






For most disk types, RAID DP is the default. Beginning with ThinkSystem Storage Manager 9.1, RAID-TEC is the only available RAID type if the following are true:  The disk type of the aggregate disks is FSAS or mSATA  The disk size is equal to or larger than 10 TB
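For example, a RAID-TEC aggregate could be created by adding the -raidtype parameter to the command shown on the slide (the aggregate name and disk count here are illustrative):

DM3000::> storage aggregate create -aggregate rtp01_fsas_001 -node DM3000_1 -diskcount 7 -raidtype raid_tec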






RAID Types
RAID groups can be one of the following:
• RAID 4 group
 – Single parity for single-disk failure
 – Minimum of two disks (data disks plus a parity disk)
• RAID DP group
 – Double parity for double-disk failure
 – Minimum of three disks (data disks plus parity and dParity disks)
• RAID-TEC group
 – Triple parity for triple-disk failure
 – Minimum of four disks (data disks plus parity, dParity, and tParity disks)
 – Less time to rebuild a large disk that fails






Understanding how RAID protects your data and data availability can help you to administer your storage systems more effectively. For native storage, ONTAP software uses ONTAP RAID DP (double-parity) technology or RAID 4 protection to provide data integrity within a RAID group, even if one or two of the disks fail. Parity disks provide redundancy for the data that is stored on the data disks. If a disk fails, the RAID subsystem can use the parity disks to reconstruct the data in the failed disk. NOTE: The minimum number of disks per RAID group for each RAID type is a standard RAID specification. During aggregate creation, ONTAP imposes a seven-disk minimum for aggregates with RAID-TEC groups, a five-disk minimum for aggregates with RAID DP groups, and a four-disk minimum for aggregates with RAID 4 groups.






ONTAP RAID Technologies Description

(Figure: stripes of data blocks D0 through D5 protected by a row parity (RP) disk, a diagonal parity (DP) disk, and a triple parity (TP) disk.)

• RAID 4 (row parity)
 – Adds a row parity disk
 – Protects against single-disk failure and media errors
• RAID-DP (double parity) technology
 – Adds a diagonal parity disk to a RAID 4 group
 – Protects against two disk failures, a disk failure plus a media error, and double media errors
• RAID-TEC (triple erasure coding) technology
 – Adds a triple-parity disk to a RAID-DP group
 – Protects against three disk failures, two disk failures plus a media error, and triple media errors






RAID 4: In a RAID 4 group, parity is calculated separately for each row. In the example, the RAID 4 group contains seven disks, with each row containing six data blocks and one parity block.

RAID-DP Technology: In a RAID-DP group, a diagonal parity set is created in addition to the row parity. Therefore, an extra double-parity disk must be added. In the example, the RAID-DP group contains eight disks, with the double parity calculated diagonally by using seven parity blocks.
 The number in each block indicates the diagonal parity set to which the block belongs.
 Each row parity block contains even parity of the data blocks in the row, not including the diagonal parity block.
 Each diagonal parity block contains even parity of the data and row parity blocks in the same diagonal.

RAID-TEC Technology: In a RAID-TEC group, an anti-diagonal parity set is created in addition to both the row parity and diagonal parity sets. Therefore, an extra third-parity disk must be added. In the example, the RAID-TEC group contains nine disks, with the triple parity calculated anti-diagonally using seven parity blocks.
 Seven diagonals (parity blocks) exist, but ONTAP software stores six diagonals (p-1).
 The missed diagonal selection is arbitrary. Here, diagonal 6 is missing and is not stored or calculated.

Regarding diagonal numbers, the following guidelines apply:
 The set of diagonals collectively spans all of the data disks and the row parity disk.
 Each diagonal misses only one disk, and each diagonal misses a different disk. Each disk misses a different diagonal.
 The diagonal sequencing within a given disk starts with the diagonal number that corresponds with the given disk number: the first diagonal on disk 0 is diagonal 0, and the first diagonal on disk N is diagonal N. The diagonals on a disk wrap around when the end of the diagonal set is reached.






RAID Group Sizes
• RAID-TEC requirements and options:
 – Default for near-line class disks (SATA or NL-SAS) of 6 TB or larger HDDs
 – Required for 10 TB and larger HDDs
 – Optional for other disks (SSD or SAS)
• Default RAID group sizes:
 – 21 disks for SATA or NL-SAS disks
 – 24 disks for SAS disks
• Ability to upgrade and downgrade nondisruptively between RAID types

Disk Type | Group Type | Default | Maximum
SATA      | RAID4      | 7       | 7
SATA      | RAID-DP    | 14      | 20
SATA      | RAID-TEC   | 21      | 29
NL-SAS    | RAID4      | 7       | 7
NL-SAS    | RAID-DP    | 14      | 20
NL-SAS    | RAID-TEC   | 21      | 29
SAS       | RAID4      | 8       | 14
SAS       | RAID-DP    | 16      | 28
SAS       | RAID-TEC   | 24      | 29
SSD       | RAID4      | 8       | 14
SSD       | RAID-DP    | 23      | 28
SSD       | RAID-TEC   | 24      | 29






To create a RAID-TEC aggregate, you need a minimum of seven disks. RAID 4 can be configured only through the CLI.
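The nondisruptive RAID-type change mentioned on the slide is done with storage aggregate modify; for example, reusing the aggregate name from earlier in this module:

DM3000::> storage aggregate modify -aggregate rtp01_fcal_001 -raidtype raid_dp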






Adding Capacity to Aggregates
Provide the following information:
• Aggregate name
• Disk type
• Number of disks

You cannot shrink aggregates.

DM3000::> storage disk show -spare -owner rtp-nau-01
DM3000::> storage aggregate add-disks -aggregate rtp01_fcal_001 -diskcount 2






Advanced Drive Partitioning
 Root-Data HDD Partitioning (first 24 HDDs, all hybrid models)
 Root-Data-Data SSD Partitioning (first 48 SSDs, All Flash models)

(Figure: with root-data (RD) partitioning, each HDD or SSD carries a large P1 data partition and a small P2 root partition; with root-data-data (RD2) partitioning, each SSD carries two data partitions, P1 and P2, and a small P3 root partition.)



Root-Data Advanced Disk Partitioning

(Figure: a 24-drive shelf, drives 0 to 23, shared by nodes DM3000_1 and DM3000_2. The root partitions (R) form one root aggregate RAID group per node with parity (P), double-parity (DP), and spare (S) partitions; the data partitions (D) form one data aggregate RAID group per node.)

• SSDs are partitioned into one small root partition and one large data partition.
• Standard aggregate configuration per node is as follows:
 – A root aggregate RAID group of 8 data + 2 parity partitions, and 2 spare root partitions
 – A data aggregate RAID group of 9 data + 2 parity partitions, and 1 spare data partition
• Total usable capacity is 18 data partitions out of a total of 24: 75% efficiency.



The figure shows the default configuration for a single-shelf All Flash DM system in Data ONTAP 8.3.x software.






Root-Data-Data Advanced Disk Partitioning (ONTAP 9 and later software)

(Figure: a 24-drive shelf, drives 0 to 23, shared by nodes DM3000_1 and DM3000_2. Each SSD carries one small root partition (R) and two data partitions (D); each node's root aggregate RAID group and data aggregate RAID group use parity (P), double-parity (DP), and spare (S) partitions.)

• SSDs are partitioned into one small root partition and two data partitions, each of which is half the size of a root-data partition.
• The standard aggregate configuration per node is as follows:
 – A root aggregate RAID group of 8 data + 2 parity partitions and 2 spare root partitions (no change from root-data partitioning)
 – A data aggregate RAID group of 21 data + 2 parity partitions and 1 spare data partition
• The total usable capacity is 42 data partitions out of a total of 48: 87.5% efficiency, or 16.7% more usable capacity (0.875 / 0.75).



The figure shows the default configuration for a single-shelf All Flash DM system in ONTAP 9 software.






Root-Data-Data Advanced Disk Partitioning
Additional root-data-data partitioning information

(Figure: default two-shelf and half-shelf root-data-data layouts for DM3000_1 and DM3000_2, with root (R), data (D), parity (P), double-parity (DP), and spare (S) partitions arranged into RAID groups rg0 and rg1 per node.)

• Root-data-data partitioning is supported on only All Flash DM systems:
 – Default root aggregate provisioning method for All Flash DM
 – Unsupported on entry-level DM or All Flash DM MetroCluster software
• Data partition assignments with two shelves are similar to root-data partitioning:
 – Data partitions on an SSD are assigned to the same node.
 – Twice as many RAID groups are used.
• Half-shelf All Flash DM systems have 50% more usable capacity than with root-data partitioning. All Flash DM systems with 12 x 3.8-TB or 12 x 15.3-TB SSDs are available with only ONTAP 9 software.



The figures show the default configuration for two-shelf and half-shelf All Flash DM systems in ONTAP 9 software. For root-data partitioning and root-data-data partitioning, RAID uses the partitions in the same way as physical disks. If a partitioned disk is moved to another node or used in another aggregate, the partitioning persists. You can use the disk only in RAID groups that are composed of partitioned disks. If you add an unpartitioned drive to a RAID group that consists of partitioned drives, the unpartitioned drive is partitioned to match the partition size of the drives in the RAID group. The rest of the disk is unused.






ONTAP Virtual Storage Tier
• Flash Cache intelligent caching:
 – Has the highest performance for file services
 – Improves latency for random reads
 – Delivers predictable, high-speed data access
• Flash Pool intelligent caching:
 – Has the highest performance for OLTP
 – Is best for SATA enablement across multiple workloads
 – Caches random reads and writes
 – Automates the use of SSD technology

(Figure: a server accesses a controller with Flash Cache and a Flash Pool aggregate in the storage layer.)






At the storage level, there are two ways to implement Virtual Storage Tier (VST): • The controller-based Flash Cache feature accelerates random-read operations and generally provides the highest performance solution for file-services workloads. • The Flash Pool feature is implemented at the disk-shelf level, enabling SSDs and traditional HDDs to be combined in a single ONTAP aggregate. Flash Pool technology provides read caching and write caching and is well-suited for OLTP workloads, which typically have a higher percentage of write operations. Both VST technologies improve overall storage performance and efficiency and are simple to deploy and operate.






Flash Cache 2 Feature
• Flash Cache caching includes the following elements:
 – 1-TB, 2-TB, or 4-TB Peripheral Component Interconnect Express (PCIe) module
 – Preinstalled in all DM-Series hybrid models
 – Support for all protocols
 – An extension to the ONTAP WAFL file system buffer cache to save evicted buffers
• Deduplicated and compressed blocks are maintained in the cache.
• Cache is shared by all volumes on a node.






Flash Cache intelligent caching combines software and hardware within ONTAP storage controllers to increase system performance without increasing the disk count. The Flash Cache plug-and-play Peripheral Component Interconnect Express (PCIe) module requires no configuration to use the default settings, which are recommended for most workloads. The original Flash Cache module is available in 256-GB, 512-GB, or 1-TB capacities and accelerates performance on all supported ONTAP client protocols. The Flash Cache controller-based solution is available to all volumes that are hosted on the controller. A common use case for Flash Cache is to manage VMware boot storms. Flash Cache 2 is the second generation of Flash Cache performance accelerators. The new architecture of Flash Cache 2 accelerators enables even higher throughput. For more information, see TR-3832: Flash Cache Best Practice Guide.






Flash Pool Overview
 Flash Pool cache is a least-recently-used (LRU) read cache with write-through caching for random overwrites:
 – Frequently read data blocks are retained longer; a "heat" map tracks block usage.
 – Frequently updated blocks (random overwrites) are inserted directly into cache.
 – Repeat random-read workloads benefit most.
 – Write-back policies enable pre-populating the cache with new writes.
 – Random overwrite caching helps keep latency low when blocks are updated.
 Flash Pool is an aggregate-level cache; the cache is used only by volumes in the aggregate:
 – Cache is provisioned from whole SSDs or SSD storage pool allocation units.
 – Cache is part of the aggregate, not separate.
 – Cache is persistent through planned and unplanned takeover/shutdown.
 Flash Pool works with compressed, deduplicated, cloned, and encrypted blocks:
 – Supported with volume encrypted systems
 – Support for SnapLock






Add Allocation Units to an Aggregate
Allocation units cannot be removed without destroying the aggregate.

(Figure: storage pool ssd_pool_001 divides its SSDs into allocation units that can be added as SSD RAID groups to hybrid aggregates Aggr1 and Aggr2 alongside their existing HDD RAID groups.)

DM3000::> storage aggregate add-disks -aggregate agg1_HDD -storage-pool ssd_pool_001 -allocation-units 2 -raidtype raid4






SSD partitioning defines four allocation units, each with 25% of the SSD storage capacity, that can be allocated to one or more aggregates. You can add one, two, three, or four allocation units to an aggregate. The best practice is to add one allocation unit at a time and add more only if really needed. NOTE: Allocation units cannot be removed later. If you want to change the SSD Flash Pool capacity, you must destroy the whole aggregate, which means losing all data on that aggregate.






ONTAP Virtual Storage Tier: Feature comparison

Flash Cache
 What is the feature?
  – A controller-based PCIe card
  – A plug-and-play device
 What does the feature do?
  – Provides per-controller cache
  – Caches random reads
 Where does the feature fit?
  – With random-read workloads, such as file services
  – With workloads that contain multiple volumes that are in various aggregates on a controller

Flash Pool
 What is the feature?
  – Storage-level, RAID-protected cache (specific to aggregates)
 What does the feature do?
  – Caches random reads and overwrites
  – Provides cached data persistence through failovers
 Where does the feature fit?
  – With random-overwrite-heavy workloads, such as OLTP workloads
  – Where consistent performance is required

Flash Cache and Flash Pool can be used together, increasing cache capacity.






The Flash Cache and Flash Pool features bring flash technology to ONTAP software. The table compares the primary uses and benefits of both features.






ONTAP Storage Architecture

(Figure: the logical layer consists of FlexVol volumes, which contain files and LUNs; the physical layer consists of aggregates, which are built from RAID groups of disks.)






The ONTAP storage architecture uses a dynamic virtualization engine, in which data volumes are dynamically mapped to physical space. In ONTAP, disks are grouped into RAID groups. An aggregate is a collection of physical disk space that contains one or more RAID groups. Each aggregate has a RAID configuration and a set of assigned disks. The disks, RAID groups, and aggregates make up the physical storage layer. Within each aggregate, you can create one or more FlexVol volumes. A FlexVol volume is an allocation of disk space that is a portion of the available space in the aggregate. A FlexVol volume can contain files or LUNs. The FlexVol volumes, files, and LUNs make up the logical storage layer.






Volume Properties

Actions that can be taken on volumes:
 Create
 Edit
 Resize
 Delete
 Clone
 Move

Volume options:
 Storage efficiency
 Storage quality of service (QoS)

Tools to protect volumes:
 Snapshot copies
 Mirror copies
 Vaults






Automatic Resizing of Volumes
• Automatic resizing of volumes enables a FlexVol volume to automatically grow or shrink the maximum space capacity of the volume.
• You can specify a mode:
 – Off: Volume does not grow or shrink.
 – Grow: Volume automatically grows when space in the volume reaches a threshold.
 – Grow_shrink: Volume automatically grows or shrinks in response to the amount of used space.
• In addition, you can specify the following:
 – Maximum size to grow to (default is 120% of volume size)
 – Minimum size to shrink to (default is the volume size)
 – Grow and shrink thresholds






You can enable or disable automatic resizing of volumes. If you enable the capability, ONTAP automatically increases the capacity of the volume up to a predetermined maximum size. Space must be available in the containing aggregate to support the automatic growth of the volume. Therefore, if you enable automatic resizing, you must monitor the free space in the containing aggregate and add more when needed. The capability cannot be triggered to support Snapshot creation. If you attempt to create a Snapshot copy and the volume has insufficient space, the Snapshot creation fails, even when automatic resizing is enabled. For more information about using automatic resizing, see the SAN Administration Guide.
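A sketch of enabling grow_shrink autosizing from the CLI (the SVM, volume, and sizes are placeholders):

DM3000::> volume autosize -vserver svm1 -volume vol01 -mode grow_shrink -maximum-size 120g -minimum-size 100g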






FlexVol Volumes Versus FlexGroup Volumes: How they differ at a high level

FlexVol volumes
 – FlexVol volumes are owned by one node.
 – FlexVol volumes span one aggregate.
 – Reads and writes are isolated to one volume, node, or aggregate.
 – A FlexVol volume is limited to 100 TB (system-dependent).
 – FlexVol volumes are within one namespace, but with limits.

FlexGroup volumes
 – A FlexGroup volume is a pool of FlexVol volumes, managed as one volume.
 – FlexGroup volumes span multiple aggregates.
 – Reads and writes are balanced across all nodes and aggregates.
 – A FlexGroup volume can be 20 PB (200 FlexVol volumes).
 – FlexGroup volumes are within one namespace, almost without limits.






Although ONTAP FlexGroup volumes are positioned as a capacity feature, FlexGroup volumes are also a high-performance feature. With a FlexGroup volume, you can have massive capacity, predictably low latency, and high throughput for the same storage container. A FlexGroup volume adds concurrency to workloads and presents multiple volume affinities to a single storage container with no need for increased management.
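On recent ONTAP releases, a FlexGroup volume can be provisioned automatically across the cluster's aggregates; a minimal sketch with placeholder names and size:

DM3000::> volume create -vserver svm1 -volume fg01 -auto-provision-as flexgroup -size 400TB -junction-path /fg01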






Balanced Placement
Balanced LUN and volume placement based on application requirements, with application-aware (template-based) provisioning:
• Simplified provisioning
• Balanced use of cluster storage and CPU (node headroom) resources
• Balanced placement that depends on the following:
 – QoS
 – Headroom availability
• Balanced placement logic that needs the following inputs:
 – Storage level classes: Extreme, High, or Value (Capacity)
 – Protection level classes: sync or async
 – Size of the application or application components






Balanced placement simplifies provisioning by eliminating questions such as the following:
 Where is the capacity to match my application I/O requirements?
 Which node or nodes have CPU headroom to take on additional work?

Identifying remaining performance capacity: Performance capacity, or headroom, measures how much work you can place on a node or an aggregate before performance of workloads on the resource begins to be affected by latency. Knowing the available performance capacity on the cluster helps you provision and balance workloads.

Identifying high-traffic clients or files: You can use ONTAP Active Objects technology to identify clients or files that are responsible for a disproportionately large amount of cluster traffic. Once you have identified these "top" clients or files, you can rebalance cluster workloads or take other steps to resolve the issue.

Guaranteeing throughput with QoS: You can use storage quality of service (QoS) to guarantee that performance of critical workloads is not degraded by competing workloads. You can set a throughput ceiling on a competing workload to limit its impact on system resources, or set a throughput floor for a critical workload, ensuring that it meets minimum throughput targets, regardless of demand by competing workloads. You can even set a ceiling and floor for the same workload.






Volume Move
• Rules:
 – To any aggregate in the cluster
 – Only within the SVM
 – Nondisruptive to the client
• Use cases:
 – Capacity: Move a volume to an aggregate with more space.
 – Performance: Move a volume to an aggregate with different performance characteristics.
 – Servicing: Move volumes to newly added nodes or from nodes that are being retired.






FlexVol volumes can be moved from one aggregate or node to another within the same storage virtual machine (SVM). A volume move does not disrupt client access during the move. You can move volumes for capacity use, such as when more space is needed. You can move volumes to change performance characteristics, such as from a controller with HDDs to one that uses solid-state drives (SSDs). You can also move volumes during service periods.
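A sketch of moving a volume and monitoring progress (the names are placeholders):

DM3000::> volume move start -vserver svm1 -volume vol01 -destination-aggregate rtp02_fcal_001
DM3000::> volume move show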






Volume Rehost Within a Cluster

(Figure: volumes such as Audit, Docs, Tax, and Database are rehosted between SVMs, for example from SVM Test to SVM Finance, within the same ONTAP cluster.)

Steps to rehost a volume:
1. Identify the source volume and SVM.
2. Identify the destination SVM within the cluster.
3. Prevent access to the volume that is being rehosted.
4. Rehost the volume to the destination SVM by using the rehost command.
5. Configure access to the volume in the destination SVM.






The volume rehost command rehosts a volume from a source SVM to a destination SVM. The volume name must be unique among the other volumes on the destination SVM. If the volume contains a LUN, you can specify that the LUN needs to be unmapped. In addition, you can specify whether you want the LUN to be automatically remapped on the destination SVM. NOTE: Volume rehost is a disruptive operation and requires you to reconfigure access to the volume at the destination. Access to the volume must be prevented before a rehost to prevent data loss or inconsistency.
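Using the SVM and volume names from the figure, the rehost step itself (step 4) looks roughly like this:

DM3000::> volume rehost -vserver Test -volume Docs -destination-vserver Finance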






DATA ACCESS SAN / NAS






Unified Storage Review

(Figure: clients on the corporate LAN reach the same ONTAP DM system over NFS and SMB for NAS (file-level access) and over iSCSI and FCoE for SAN; FC hosts reach the same system for SAN (block-level access). With NAS, the file system lives on the storage system; with SAN, the file system lives on the host.)






NAS is a file-based storage system that uses NFS and SMB protocols to make data available over the network. CIFS is a dialect of SMB. A SAN is a block-based storage system that uses FC, FCoE, and iSCSI protocols to make data available over the network.






SAN

(Figure: hosts and a host cluster that formerly used locally attached hard disks instead access LUNs on centrally managed storage. A LUN is a logical representation of a hard disk, and the SAN provides centrally managed storage and centrally managed data protection.)






In an application server environment, locally attached hard disks, also called direct-attached storage (DAS), are separately managed resources. In an environment with more than one application server, each server's storage resource also needs to be managed separately.

A SAN provides access to a LUN, which represents a SCSI-attached hard disk. The host operating system partitions, formats, writes to, and reads from the LUN as if the LUN were any other locally attached disk. The advantages of using SAN storage include support for clustered hosts, where shared disks are required, and centrally managed resources. In the example, if the administrator did not use a SAN, the administrator would need to manage separate resources for each application server and host cluster. As well as enabling centrally managed resources, SAN uses ONTAP Snapshot copy technology to enable centrally managed data protection.






Connecting Initiator to Target
How can you connect an initiator to a target? Direct or indirect access.

(Figure: an initiator host, consisting of an application, file system, and SCSI driver, sees Disk 1 (C:) and Disk 2 (E:). Its Ethernet port connects through a switch to SVM target port e0a, and its FC port connects through a switch to target port 0a. On the target SVM, SAN services, HA, and WAFL present the LUN from a FlexVol volume.)






ONTAP supports the iSCSI, FC, FCoE, and NVMe protocols. In this course, we use the FC and iSCSI protocols. Data is communicated over ports and LIFs:
 In an Ethernet SAN, the data is communicated using Ethernet ports.
 In an FC SAN, the data is communicated over FC ports.
 For FCoE, the initiator has a converged network adapter (CNA), and the target has a unified target adapter (UTA).
 SAN data LIFs do not migrate or fail over the way that NAS LIFs do. However, the LIFs can be moved to another node or port in the SVM.

ONTAP best practices:
 Use at least one LIF per node, per SVM, per network.
 Use redundant connections to connect the initiator to the target.
 Use redundantly configured switched networks to ensure resiliency if a cable, port, or switch fails.






iSCSI/FC Architecture
Multipathing software is required.

(Figure: an initiator with two Ethernet or FC paths connects through two LIFs to a LUN in a FlexVol volume; the host sees Disk 1 (C:) and Disk 2 (E:).)

• Initiator group (igroup):
 – Initiator node name: IQN or WWPN
 – Protocol type: iSCSI or FC
 – OS type: for example, Windows
• Target: data SVM
• Each node requires a minimum of one LIF.






Initiator groups (igroups) are tables of FC protocol host WWPNs or iSCSI host node names. You can define igroups and map the igroups to LUNs to control which initiators have access to LUNs. In the example, the initiator uses the iSCSI protocol to communicate with the target.

Typically, you want all of the host's initiator ports or software initiators to have access to a LUN. In the example, there is a single host. The iSCSI Software Initiator iSCSI Qualified Name (IQN) is used to identify the host. Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator. An initiator cannot be a member of igroups of differing OS types. In the example, the initiator runs Windows.

When multiple paths are created between the storage controllers and the host, the LUN is seen once through each path. When a multipath driver is added to the host, the multipath driver can present the LUN as a single instance. The figure illustrates two paths. The multipath driver uses asymmetric logical unit access (ALUA) to identify the path to the node where the LUN is located as the active "direct path." The direct path is sometimes called the "optimized path." The active path to the node where the LUN is not located is called the "indirect path." The indirect path is sometimes called the "non-optimized path." Because indirect paths must transfer I/O over the cluster interconnect, which might increase latency, ALUA uses only the direct paths, unless no direct path is available. ALUA never uses both direct and indirect paths to a LUN.






iSCSI / FC Implementation Steps
1. Enable the iSCSI or FC functionality on the SVM (first time only).
2. Create or identify the necessary resources (for example, LIFs).
3. Map the LUN to the appropriate igroup.
4. Locate the LUN on the host computer and prepare the disk.






The figure shows the basic process for implementing the iSCSI protocol between an initiator and an ONTAP storage system. The process consists of several steps. First, you need to enable the iSCSI functionality and then enable the feature on the SVM. You must also identify the software initiator’s node name. Second, you need resources to map, so you create a volume, LUN, igroup, and data LIFs. Third, you determine which hosts have access to the resources and map the hosts to the LUN. Finally, the LUN is discovered on the host and prepared.
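A sketch of the resource and mapping steps from the CLI, reusing the Windows initiator name shown later in this module (the SVM, volume, LUN, and igroup names are placeholders):

DM3000::> lun create -vserver svm1 -path /vol/vol01/lun01 -size 50GB -ostype windows
DM3000::> lun igroup create -vserver svm1 -igroup ig_w2k12 -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:w2k12.learn.netapp.local
DM3000::> lun map -vserver svm1 -path /vol/vol01/lun01 -igroup ig_w2k12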






Windows iSCSI Implementation: Identify the iSCSI node name

iSCSI initiator name: iqn.1991-05.com.microsoft:w2k12.learn.netapp.local

A prompt to start the iSCSI service might appear the first time that you start the iSCSI initiator.






The iSCSI Software Initiator creates the iSCSI connection on the Windows host. The iSCSI Software Initiator is built in to Windows Server 2008 and Windows Server 2012. If the system has not used an iSCSI Software Initiator before, a dialog box appears, which requests that you turn on the service. Click Yes. The iSCSI Initiator Properties dialog box appears. You need to identify the iSCSI initiator name before you start the SVM create wizard.






LUN Creation Wizard: iSCSI LUN Creation

(Screenshots: launch the Create LUN wizard and click through the introductory pages.)






You view the steps to create a LUN for an iSCSI environment. The iSCSI protocol can also be enabled on an existing SVM by using ThinkSystem Storage Manager or the vserver iscsi create -vserver command. Verify that the operational status of the iSCSI service on the specified SVM is up and ready to serve data.






LUN Creation Wizard: iSCSI
Configure the iSCSI protocol: define the iSCSI initiator, then click Save to create the LUN and map it to the host.






The SVM creation wizard automatically creates a LIF on each node of the cluster. IP addresses can be assigned manually or automatically by selecting a subnet. To verify or modify the LIF configuration, select "Review or modify LIF configuration." To create an iSCSI LIF manually, using either ThinkSystem Storage Manager or the CLI, you must specify the -role parameter as data and the -data-protocol parameter as iscsi. CLI LIF creation example:

DM3000::> network interface create -vserver svm_black -lif black_iscsi_lif1 -role data -data-protocol iscsi -home-node rtp-nau-01 -home-port e0e -subnet-name snDefault

The SVM creation wizard also enables you to provision a LUN for iSCSI storage. Enter the size, the LUN OS type, and the IQN for the host initiator.

NOTE: You should create at least one LIF for each node and each network on all SVMs that serve data with the iSCSI protocol. ONTAP recommends having network redundancy, either through multiple networks or link aggregation.






LUN Creation Wizard: iSCSI
LUN summary

(Screenshot: the summary page shows the SVM, the volume name, and the initiator (host).)






The NAS File System

(Figure: a UNIX1 client mounts an NFS volume at /mnt/vol01, and a WIN1 client, with local disks Disk 1 (C:) and Disk 2 (E:), maps an SMB volume at \\svm\vol02; both volumes are served by a 2-node cluster.)






NAS is a distributed file system that enables users to access resources, such as volumes, on a remote storage system as if the resources were on a local computer system. NAS provides services through a client-server relationship. Storage systems that enable file systems and other resources to be available for remote access are called servers. The server is set up with a network address and provides file-based data storage to other computers, called clients, that use the server resources. The ONTAP software supports the NFS and SMB protocols. (SMB is also known as CIFS.)






NFS

(Figure: the UNIX1 client mounts volume vol01 from the storage system.)

 vol01 is exported to UNIX1 with read/write permission.
 UNIX1 mounts vol01 to /mnt/project with read/write permission.






NFS is a distributed file system that enables users to access resources, such as volumes, on remote storage systems as if the resources were on a local computer system. NFS provides services through a client-server relationship.  Storage systems that enable the file systems and other resources to be available for remote access are called servers.  The computers that use a server's resources are called clients.  The procedure of making file systems available is called exporting.  The act of a client accessing an exported file system is called mounting. When a client mounts a file system that a server exports, users on the client computer can view and interact with the mounted file systems on the server within the permissions granted.






NFS Implementation Steps
1. Enable the NFS functionality on the SVM (first time only).
2. Create or identify the necessary resources (for example, LIFs).
3. Export the available resources.
4. Configure NFS authentication (export policy).
5. Authorize the user.
6. Mount the exported resources.






The figure shows the basic process for implementing the NFS protocol between a UNIX client and an ONTAP storage system. The process consists of several steps. First, you need to enable the NFS functionality and then enable the feature on the SVM. Second, you need resources to export, so you create volumes, qtrees, and data LIFs. Third, you determine which clients have which type of access to the resources. You need a way to authenticate client access and authorize users with appropriate permissions, including read-only or read/write. Finally, when the client has been granted access to the exported resource, the client mounts the resource and grants access to the users.
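A sketch of the export and mount steps (the SVM, volume, client network, and mount point are placeholders):

DM3000::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys -superuser none
DM3000::> volume mount -vserver svm1 -volume vol01 -junction-path /vol01

On the UNIX client, the export is then mounted against a data LIF address of the SVM, for example: mount -t nfs 192.168.0.50:/vol01 /mnt/vol01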






SVM Create Wizard: NFS

(Screenshots: enter the SVM basic details and click Create; configure the NFS protocol, optionally selecting an IP address from a subnet; then create an SVM administrator.)



CIFS

(Figure: the WIN1 client, with local disks Disk 1 (C:) and Disk 2 (E:), maps the share \\Team1\vol01 to volume vol01.)






CIFS is an application-layer network file-sharing protocol that the Microsoft Windows operating system uses. CIFS enables users or applications to access, read, and write to files on remote computers as if the files were on a local computer. A user or application can send network requests to read and write to files on remote computers. Messages travel from the network interface card (NIC) of the user's computer, through the Ethernet switch, to the NIC of the remote computer. CIFS provides access to files and directories that are stored on the remote computer through shared resources. The network read and write process, which is also called network I/O, is controlled by the rules of network protocols, such as IPv4 and IPv6.






CIFS Implementation Steps
1. Enable the CIFS functionality on the SVM (first time only).
2. Create or identify the necessary resources (for example, LIFs).
3. Share the available resources.
4. Check the date, time, and DNS configuration.
5. Configure CIFS authentication.
6. Authorize the user.
7. Map the shared resources.






The figure shows the basic process for implementing the CIFS protocol between a Windows client and an ONTAP storage system. The process consists of several steps. First, you need to enable the CIFS functionality and then enable the feature on the SVM. Second, you need resources to share, so you create volumes, qtrees, and data LIFs. Third, you determine which clients have which type of access to the resources. You need a way to authenticate client access and authorize users with appropriate permissions, including read-only or read/write. Finally, when the client has been granted access to the shared resource, the client maps the resource and grants access to the users.
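A sketch of the server and share steps from the CLI (the CIFS server, domain, and share names are placeholders; the domain reuses the lab naming seen in the iSCSI example):

DM3000::> vserver cifs create -vserver svm1 -cifs-server TEAM1 -domain learn.netapp.local
DM3000::> vserver cifs share create -vserver svm1 -share-name vol01 -path /vol01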






SVM Create Wizard: CIFS

(Screenshots: enter the SVM basic details and click Create; configure the CIFS protocol; then enter the SVM administrator details.)



Snapshot Copies






Snapshot Copy Technology: Create Snapshot copy 1

(Figure: a file or LUN in a volume points to blocks A, B, and C on disk; Snapshot copy 1 holds a second set of pointers to the same blocks.)

1. Create Snapshot copy 1:
 – Pointers are copied.
 – No data is moved.






Understanding the technology that is used to create a Snapshot copy helps you to understand how space is used. Furthermore, understanding the technology also helps you to understand features such as FlexClone technology, deduplication, and compression. A Snapshot copy is a local, read-only, point-in-time image of data. Snapshot copy technology is a built-in feature of WAFL storage virtualization technology and provides easy access to old versions of files and LUNs. When ONTAP creates a Snapshot copy, ONTAP starts by creating pointers to physical locations. The system preserves the inode map at a point in time and then continues to change the inode map on the active file system. ONTAP then retains the old version of the inode map. No data is moved when the Snapshot copy is created. Snapshot technology is highly scalable. A Snapshot copy can be created in a few seconds, regardless of the size of the volume or the level of activity on the ONTAP storage system. After the copy is created, changes to data objects are reflected in updates to the current version of the objects, as if the copy did not exist. Meanwhile, the Snapshot copy of the data remains stable. A Snapshot copy incurs no performance overhead. Users can store as many as 255 Snapshot copies per volume. All the Snapshot copies are accessible as read-only and online versions of the data.






Snapshot Copy Technology: Continue writing data

(Figure: the volume now points to blocks A, B, and the new block C'; Snapshot copy 1 still points to A, B, and the original block C.)

1. Create Snapshot copy 1.
2. Continue writing data:
 – Data is written to a new location on the disk (C').
 – Pointers are updated.






When ONTAP writes changes to disk, the changed version of block C is written to a new location. In the example, C’ is the new location. ONTAP changes the pointers rather than moving data. The file system avoids the parity update changes that are required if new data is written to the original location. If the WAFL file system updated the same block, then the system would need to perform multiple parity reads to update both parity disks. The WAFL file system writes the changed block to a new location, again writing in complete stripes and without moving or changing the original data blocks.






Snapshot Copy Technology: Create Snapshot copy 2

(Figure: the volume points to blocks A, B, and C'; Snapshot copy 1 points to A, B, and C; Snapshot copy 2 points to A, B, and C'.)

1. Create Snapshot copy 1.
2. Continue writing data.
3. Create Snapshot copy 2:
 – Pointers are copied.
 – No data is moved.






When ONTAP creates another Snapshot copy, the new Snapshot copy points only to the unchanged blocks A and B and to block C’. Block C’ is the new location for the changed contents of block C. ONTAP does not move any data; the system keeps building on the original active file system. The method is simple and so is good for disk use. Only new and updated blocks use additional block space.






Snapshot Copy Design
• Understand that Snapshot copy design is highly dependent on the customer environment.
• Study the customer recovery time objective (RTO) and recovery point objective (RPO) requirements.
• Do not create more Snapshot copies than necessary (maximum 1023 per volume).
• Check and adjust the aggregate and volume Snapshot copy reserve defaults.
• To control storage consumption, configure Snapshot copy automatic deletion and volume automatic increase.






Snapshot copies are the first line of defense against accidental data loss or inconsistency. Before you implement a Snapshot copy solution, you should thoroughly understand the customer needs and environment. Each customer has unique requirements for the recovery time objective (RTO) and recovery point objective (RPO).

RTO: The RTO is the amount of time within which the service, data, or process must be made available again to avoid undesirable outcomes.

RPO: The RPO is the point to which data must be restored or recovered to be acceptable to the organization's data loss policy.

To provide efficient use of disk space, deploy only the required number of Snapshot copies on each volume. If you deploy more Snapshot copies than are required, the copies consume more disk space than necessary. You might need to adjust default settings for the Snapshot copy reserve for volumes and aggregates:
• The Snapshot copy reserve guarantees that you can create Snapshot copies until the reserved space is filled.
• When Snapshot copies fill the reserved space, Snapshot blocks compete for space with the active file system.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



The Snapshot Policy
Automatically manage Snapshot copy schedules and retention.

[Diagram: a job schedule, defined at the cluster level, is referenced by a Snapshot policy; the policy applies to an SVM and is inherited or overridden by each FlexVol volume.]



Lenovo Accredited Learning Provider



117



A Snapshot policy enables you to configure the frequency and maximum number of Snapshot copies that are created automatically:
 You can create Snapshot policies as necessary.
 You can apply one or more schedules to a Snapshot policy.
 A Snapshot policy can have zero schedules.

When you create an SVM, you can specify a Snapshot policy that becomes the default for all FlexVol volumes that are created for the SVM. When you create a FlexVol volume, you can specify which Snapshot policy you want to use, or you can enable the FlexVol volume to inherit the SVM Snapshot policy.

The default Snapshot policy might meet your needs. It is useful if users rarely lose files. For typical systems, only 5% to 10% of the data changes each week, so six daily and two weekly Snapshot copies consume 10% to 20% of disk space. Adjust the Snapshot copy reserve to provide the appropriate amount of disk space for Snapshot copies.

Each volume on an SVM can use a different Snapshot copy policy. For active volumes, create a Snapshot schedule that creates Snapshot copies every hour and keeps them for just a few hours, or turn off the Snapshot copy feature.

Snapshot copies are backed up to the vault destination according to their label. If an empty label ("") is specified, the existing label is deleted.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Typical Workflow
1. Create a job schedule or use the default.
2. Create a Snapshot policy, and then specify the job schedule.
3. Assign the Snapshot policy to a FlexVol volume, or inherit a Snapshot policy from the SVM.
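A minimal CLI sketch of this three-step workflow; the daily8pm schedule and sp_daily policy names are illustrative, and svm_Team1/Team1_vol002 reuse names from examples elsewhere in this guide:

DM3000::> job schedule cron create -name daily8pm -dayofweek all -hour 20 -minute 0
DM3000::> volume snapshot policy create -vserver svm_Team1 -policy sp_daily -enabled true -schedule1 daily8pm -count1 7
DM3000::> volume modify -vserver svm_Team1 -volume Team1_vol002 -snapshot-policy sp_daily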



Lenovo Accredited Learning Provider






118






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Snapshot Copy Technology
Restore from a Snapshot copy

To restore a file or LUN from Snapshot copy 1, use SnapRestore data recovery software. Snapshot copies that were created after Snapshot copy 1 are deleted. Unused blocks on disk are made available as free space.

[Diagram: before the restore, the active volume points to blocks A, B, and C', with Snapshot Copy 1 pointing to A, B, and C and Snapshot Copy 2 pointing to A, B, and C'. After the restore from Snapshot Copy 1, the active volume points to A, B, and C again.]



Lenovo Accredited Learning Provider



119



Suppose that after the Snapshot copy is created, the file or LUN becomes corrupted, which affects logical block C’. If the block is physically bad, RAID can manage the issue without recourse to the Snapshot copies. In the example, block C’ becomes corrupted because part of the file is accidentally deleted. You want to restore the file.

To easily restore data from a Snapshot copy, use the SnapRestore feature. SnapRestore technology does not copy files; it moves pointers from files in the good Snapshot copy to the active file system. The pointers from the good Snapshot copy are promoted to become the active file system pointers. When a Snapshot copy is restored, all Snapshot copies that were created after that Snapshot copy are destroyed.

The system tracks links to blocks on the WAFL system. When no more links to a block exist, the block is available for overwrite and is considered free space.

Because a SnapRestore operation affects only pointers, the operation is quick. No data is updated, nothing is moved, and the file system frees any blocks that were used after the selected Snapshot copy. SnapRestore operations generally require less than one second. To recover a single file, the SnapRestore feature might require a few seconds or a few minutes.
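A hedged sketch of the corresponding CLI operations; the Snapshot copy name snap1 and the file path are illustrative:

DM3000::> volume snapshot restore -vserver svm_Team1 -volume Team1_vol002 -snapshot snap1
DM3000::> volume snapshot restore-file -vserver svm_Team1 -volume Team1_vol002 -snapshot snap1 -path /dir1/file1

The first command reverts the entire volume to the Snapshot copy; the second restores a single file without affecting the rest of the volume.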






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Snapshot Visibility to Clients
Enable client access to a Snapshot directory.

DM3000::> vol modify -vserver svm_Team1 -volume Team1_vol002 -snapdir-access true
DM3000::> vserver cifs share properties add -vserver svm_Team1 -share-name Team1_vol2 -share-properties showsnapshot

Lenovo Accredited Learning Provider



120



CLI commands are available to control the visibility of Snapshot directories on a volume from NAS clients. NOTE: Show Hidden Files and Folders must be enabled on your Windows system. Access to .snapshot and ~snapshot is controlled at the volume level by setting the -snapdir-access option. You can also control access to ~snapshot from CIFS clients at the share level with the showsnapshot share property.
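To verify the settings, the following hedged examples query the relevant fields:

DM3000::> volume show -vserver svm_Team1 -volume Team1_vol002 -fields snapdir-access
DM3000::> vserver cifs share show -vserver svm_Team1 -share-name Team1_vol2 -fields share-properties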






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Introducing FlexClone
 Application development and test
– Instantaneous replica of a volume
– Any database environment
– Commercial & Embedded Software Development
– ERP, CRM, Decision Support applications
 Server and desktop virtualization
– Instantaneous replica of a file or LUN
– Create low-overhead “gold images”
– Provision thousands of virtual desktops

Lenovo Accredited Learning Provider



121



FlexClone software represents a dramatic breakthrough when you need multiple copies of data that can be modified for various uses in your organization. It is particularly well suited for:
• Software application development and test, where multiple developers need individual copies of production-level data to work with.
• Provisioning for virtual infrastructure, where many copies of similar but not identical data set images are needed.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Volume Level Cloning

[Diagram: a FlexClone® cloned volume is a transparent writable layer in front of a Snapshot™ copy of the volume, which in turn references the active volume being cloned.]

Lenovo Accredited Learning Provider



122



FlexClone replicas can be created for complete data volumes or at a more granular level, if needed. This graphic represents volume-level cloning. In terms of technology, you can think of a volume-level FlexClone as a transparent, writable layer that sits in front of a Snapshot copy. Like the Snapshot copy on which it is based, a FlexClone volume simply points to shared data in the source volume, so new storage capacity is needed only for the unique changes stored in each cloned copy.
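A minimal sketch of volume cloning from the CLI; the clone name Team1_vol002_clone and parent Snapshot copy snap1 are illustrative:

DM3000::> volume clone create -vserver svm_Team1 -flexclone Team1_vol002_clone -parent-volume Team1_vol002 -parent-snapshot snap1

If the clone later needs to become independent of its parent, a clone split copies the shared blocks in the background:

DM3000::> volume clone split start -vserver svm_Team1 -flexclone Team1_vol002_clone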






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



File and LUN Level Cloning

Cloned file: an independent, space-efficient data replica

[Diagram: FlexClone® clones a file or LUN directly from the file or LUN in the active volume.]

Lenovo Accredited Learning Provider



123



In some cases, you may need to create data replicas at a more granular level than complete volumes. FlexClone provides the option to create clones at the file or LUN level and in this case, there is no dependency upon a Snapshot copy of the parent volume. As in the volume cloning example, however, file and LUN clones are 100% space efficient and can be created instantly.
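A hedged example of file-level cloning from the CLI; the source and destination paths are illustrative:

DM3000::> volume file clone create -vserver svm_Team1 -volume Team1_vol002 -source-path /data/db01.lun -destination-path /data/db01_clone.lun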






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Storage Efficiency



Lenovo Accredited Learning Provider



124



Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Proven Efficiency
Drive storage cost reductions: lower CapEx and OpEx

• RAID-TEC™ and RAID DP® technologies
• Snapshot® technology
• Thin provisioning
• Cloning
• Inline deduplication
• Inline adaptive compression

[Diagram: storage cost plotted against data growth; applying these efficiency technologies shrinks the footprint from more than two racks to 1PB of storage in 2U of rack space.]







event log show



• Each event contains the following:
– Message name
– Severity level
– Description
– Corrective action, if applicable



Lenovo Accredited Learning Provider



163



The event management system (EMS) collects and displays information about events that occur in a cluster. You can manage the event destination, event route, mail history records, and SNMP trap history records. You can also configure event notification and logging.
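For example, the event log can be filtered by severity or by message name; these are hedged examples with illustrative filter values:

DM3000::> event log show -severity ERROR
DM3000::> event log show -message-name raid*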






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



System Logs
• Log messages can be sent to the following:
– The console
– The message log
• You can access the message log by using the following:
– The debug log command
– ThinkSystem Storage Manager
– A web browser: http://cluster-mgmt-ip/spi/svl-nau-01/etc/log/

Use the debug log command to browse the messages.log file.

Lenovo Accredited Learning Provider



164



The system log contains information and error messages that the storage system displays on the console and logs in message files.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Console on Boot

SP node2> system console
Type Ctrl-D to exit.
LOADER>
LOADER> boot_ontap
...
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
...



Lenovo Accredited Learning Provider



165



Typical boot sequence:
1. Loads the kernel into memory from the boot device
2. Mounts the “/” root image from rootfs.img on the boot device
3. Loads Init and runs start-up scripts
4. Loads NVRAM kernel modules
5. Creates the /var partition on NVRAM (restored from the boot device if a backup copy exists)
6. Starts management processes
7. Loads the data and network modules
8. Mounts the vol0 root volume
9. Is ready for use






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Boot Menu



Boot Menu will be available.

Please choose one of the following:
(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
(9) Configure Advanced Drive Partitioning
Selection (1-9)?



Lenovo Accredited Learning Provider



166



To access the boot menu, you must press Ctrl+C. Select one of the following options by entering the corresponding number:
1. Normal Boot: Continue to boot the node in normal mode.
2. Boot without /etc/rc: This option is obsolete; it does not affect the system.
3. Change password: Change the password of the node, which is also the “admin” account password.
4. Clean configuration and initialize all disks: Initialize the node disks and create a root volume for the node. NOTE: This menu option erases all data on the disks of the node and resets your node configuration to the factory default settings.
5. Maintenance mode boot: Perform aggregate and disk maintenance operations and obtain detailed aggregate and disk information. To exit Maintenance mode, use the halt command.
6. Update flash from backup config: Restore the configuration information from the node’s root volume to the boot device.
7. Install new software first: Install new software on the node. NOTE: This menu option is only for installing a newer version of ONTAP software on a node that has no root volume installed. Do not use this menu option to upgrade ONTAP.
8. Reboot Node: Reboot the node.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Additional Software for DM Series Storage - Overview



Lenovo Accredited Learning Provider



167



Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Management Software Portfolio

[Diagram: management tools for ONTAP storage arranged by complexity of configuration, from basic to complex:
• ThinkSystem Storage Manager – a simple, web-based tool that requires no storage expertise
• Unified Manager and Workflow Automation – management at scale, automated storage processes, and data protection]

Lenovo Accredited Learning Provider



168



There are many management tools from which to choose. Although ThinkSystem Storage Manager provides simplified device-level management and OnCommand Unified Manager can be used to monitor cluster resources at scale, both products monitor only ONTAP storage systems. OnCommand Insight enables storage resource management, including configuration and performance management and capacity planning, along with advanced reporting for heterogeneous environments.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



ThinkSystem Storage Manager Dashboard



Lenovo Accredited Learning Provider



169



The ThinkSystem Storage Manager dashboard shows at-a-glance system status for a storage system. The dashboard displays vital storage information, including efficiency and capacity use for various storage objects, such as aggregates and volumes.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



OnCommand Unified Manager



Lenovo Accredited Learning Provider



170



By using OnCommand Unified Manager, you can configure global threshold values for all of your aggregates and volumes to track any threshold breaches.

Events are notifications that are generated automatically when a predefined condition occurs or when an object crosses a threshold. The events enable you to act to prevent issues that can lead to poor performance and system unavailability. Events include an impact area, severity, and impact level. Events are categorized by the type of impact area, such as availability, capacity, configuration, or protection.

You can create alerts to notify you when a particular event is generated. You can create alerts for a single resource, a group of resources, or events of a particular severity type. You can specify the frequency with which you want to be notified.

You can integrate OnCommand Workflow Automation with Unified Manager to run workflows for your storage classes. You can also monitor storage virtual machines (SVMs) that have an infinite volume but do not have storage classes. When Unified Manager is integrated with OnCommand Workflow Automation, the reacquisition of Workflow Automation cached data is triggered.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



SnapCenter software
Unified, scalable software for data protection and clone management

[Diagram: SnapCenter® plug-ins installed on each host communicate with one or more SnapCenter Servers, which manage NetApp® storage systems; the legend distinguishes community-supported plug-ins and the management and data paths.]



Lenovo Accredited Learning Provider



171



SnapCenter is management software developed by NetApp that represents the next generation of application-centric data protection management for DM Series storage. It is designed for the Data Fabric and offers a modern architecture with simplicity and performance in mind. Standard licenses are included with the purchase of the DM Series storage system.

Application integration includes plug-ins for the following applications:
- Oracle database
- SAP HANA
- Microsoft SQL Server
- Microsoft Exchange Server (requires SnapCenter 4.0+)
- VMware VMs

A plug-in creator capability allows customers to create plug-ins for in-house applications or other applications of choice. Many plug-ins are already available and supported, including:
- DB2
- MongoDB (July 2017)
- MySQL






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Let’s break down the diagram:
• In the middle, we are introducing a new product, the SnapCenter Server:
– The SnapCenter Server is architected for centralized management, high availability, and load balancing.
– SnapCenter also provides a common GUI for ease of management across the IT infrastructure and provides role-based access control (RBAC) for delegated management while preserving central oversight.
• On the left, we are showing the SnapCenter plug-ins that are installed on each host by using the SnapCenter Server:
– These lightweight application, database, and OS plug-ins offer role-specific functions and workflows.
• SnapCenter Server and plug-ins talk to ONTAP-based storage. SnapCenter software is designed for multiplatform support, meaning that eventually it might work with more than ONTAP-supported storage systems.
• Along with multiplatform support, SnapCenter is also designed to support multiple hypervisors.






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Other Tools
• Config Advisor
– Validates the ONTAP cluster configuration after installation and performs operational health checks
• Host Utilities for Windows & Linux
– Multipath drivers
• AntiVirus
– Enables configuring antivirus scanning on ONTAP-enabled SVMs
• Virtual Storage Console
– A product suite that provides a VASA Provider and Storage Replication Adapter (SRA) for vCenter Server
• SnapDrive for Windows & Linux
• MIBs
• MetroCluster Tiebreaker

All of these tools are available at https://datacentersupport.lenovo.com

Lenovo Accredited Learning Provider






172






Lenovo DM Storage – Technical Workshop – Student Lesson Guide



Course Summary
In this course, you have learned about the following topics:
• Lenovo DM Series Overview
• ONTAP Cluster, Management, Network Configuration, Architecture
• Data Access NAS / SAN
• Snapshots
• Storage Efficiency – Deduplication, Compression, Compaction
• Disaster Recovery & Continuous Availability – SnapMirror, SnapVault, MetroCluster
• Other ONTAP features – SnapLock, Quality of Service, FabricPool
• Cluster Maintenance
• Additional Software



Lenovo Accredited Learning Provider






173


