SPEC SFS(R)2014_swbuild Result

NetApp, Inc.          :  NetApp FAS8200 with FlexGroup
SPEC SFS2014_swbuild  =  520 Builds (Overall Response Time = 1.04 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency       Builds        Builds
   (Builds)      (msec)        Ops/Sec       MB/Sec
   ------------  ------------  ------------  ------------
         52           0.4            26002           413
        104           0.6            52004           826
        156           0.6            78006          1239
        208           0.7           104007          1652
        260           0.8           130010          2065
        312           0.9           156013          2478
        364           1.0           182014          2891
        416           1.3           208017          3304
        468           1.9           234019          3717
        520           2.7           260020          4130
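The Overall Response Time (ORT) quoted in the summary line is a single-figure
summary of the latency curve above. As an informal consistency check only (the
authoritative definition of ORT is given in the SPEC SFS 2014 run rules), the
published 1.04 msec value can be reproduced by trapezoidal integration of the
average latency over the reported load points, as in the following Python
sketch:

    # Informal consistency check for the Overall Response Time (ORT).
    # Assumption: ORT is approximated here as the trapezoidal area under the
    # latency-vs-load curve divided by the span of the load points; the
    # normative definition is the one in the SPEC SFS 2014 run rules.

    builds  = [52, 104, 156, 208, 260, 312, 364, 416, 468, 520]
    latency = [0.4, 0.6, 0.6, 0.7, 0.8, 0.9, 1.0, 1.3, 1.9, 2.7]   # msec

    area = sum((latency[i] + latency[i + 1]) / 2 * (builds[i + 1] - builds[i])
               for i in range(len(builds) - 1))
    ort = area / (builds[-1] - builds[0])

    print(f"approximate ORT = {ort:.2f} msec")   # -> approximate ORT = 1.04 msec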
===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                  NetApp FAS8200 with FlexGroup                 |
+---------------------------------------------------------------+
Tested by                NetApp, Inc.
Hardware Available       September 2017
Software Available       September 2017
Date Tested              August 2017
License Number           33
Licensee Locations       Sunnyvale, CA USA

Powered by ONTAP and optimized for scale, the FAS8200 hybrid-flash storage
system enables you to respond quickly to changing needs across flash, disk,
and cloud with industry-leading data management. FAS8200 systems, with
integrated NVMe memory for flash acceleration, are engineered to deliver on
core IT requirements for high performance and scalability as well as uptime,
data protection, and cost efficiency. The FlexGroup feature of ONTAP 9 enables
massive scaling in a single namespace to over 20PB with over 400 billion
files, while evenly spreading performance across the cluster. This makes the
FAS8200 a great system for engineering and design applications as well as
DevOps, especially workloads such as chip development and software builds,
which are typically high file-count environments with heavy metadata traffic.

Solution Under Test Bill of Materials
=====================================

Item No  Qty   Type                    Vendor       Model/Name
-------  ----  ----------------------  -----------  ----------------------------------
      1     2  Storage Array           NetApp       FAS8200 Hybrid Flash System (HA
                                                    Pair, Active-Active Dual Controller)
         Description: A single NetApp FAS8200 system is a single chassis with 2
         controllers. Each set of 2 controllers comprises a High-Availability (HA)
         Pair. The words "controller" and "node" are used interchangeably in this
         document. Each FAS8200 HA Pair includes 256GB of ECC memory, 4 PCIe expansion
         slots and a set of onboard I/O ports:
           * 8 UTA2 ports configured as 10GbE, used for data (connections to load
             generators);
           * 4 10GbE ports, used for cluster interconnect;
           * 4 10GbE Base-T ports;
           * 8 12Gb SAS ports, used to connect to disk shelves.
         Includes the Premium Bundle, which provides All Protocols, SnapRestore,
         SnapMirror, SnapVault, FlexClone, SnapManager Suite, Single Mailbox Recovery
         (SMBR), and SnapCenter Foundation. Only the NFS protocol license is active in
         the test; NFS is also available in the BASE bundle.

      2    12  Disk Shelves            NetApp       Disk Shelf DS212C
         Description: 12 drive bays in each shelf; 6 shelves per HA Pair.

      3     4  12Gb SAS HBA            NetApp       X2069-R6 HBA SAS 3/6/12Gbps QSFP
                                                    PCIe
         Description: Used for connectivity to disk shelves; 2 cards per HA pair; each
         card has 4 12Gb SAS ports. In the tested configuration, the on-board SAS
         ports were unused (connected to unused and inactive drive shelves), due to
         the configuration being part of a shared-infrastructure lab. No benchmark
         data flowed to or through the unused drive shelves. The PCIe-slot SAS HBA
         cards were used to connect to the disk shelves mentioned above, which were an
         active part of the tested configuration.

      4     4  Network Interface Card  NetApp       X1117A-EN-R6 NIC 2-port bare cage
                                                    SFP+ 10GbE PCIe Card
         Description: Used for cluster interconnect; 2 cards per HA pair; each card
         has 2 ports.

      5    53  SFP Transceiver         Cisco Avago  Cisco 10Gbase-SR, part SFBR-7702SDZ
         Description: Used in both switches; 21 to clients, 16 to data ports on
         storage controllers, and 16 to cluster interconnect ports on storage
         controllers.

      6    21  SFP Transceiver         NetApp       10GbE SFP X6569-R6
         Description: Used in client NICs for data connection to the switch.

      7    24  SFP Transceiver         NetApp       10GbE SFP X6599A-R6
         Description: Used in controllers for onboard 10Gb data connections, 8 per HA
         pair; this part was also used in 4 onboard Cluster Interconnect ports per HA
         pair.

      8     8  SFP Transceiver         NetApp       SFP X6569-R6
         Description: Used in controllers on the 10GbE NIC card for cluster
         interconnect; 4 per HA pair.

      9     8  NVMe Flash Cache        NetApp       Flash Cache X3311A
         Description: 1024 GB NVMe PCIe M.2 Flash Cache modules, 4 per HA pair.

     10   144  Disk Drives             NetApp       4TB Disk X336A
         Description: 4TB 7200 RPM 12Gb SAS 3.5 inch HDD; 72 disks per HA pair
         (including 2 spares).

     11     1  Switch                  Cisco        Cisco Nexus 7018 Switch
         Description: Used for 10GbE data connections between clients and storage
         systems. A large switch was in use because testing was done in a large
         shared-infrastructure lab; only the ports used for this test are listed in
         this report.

     12    21  Client                  IBM          IBM 3650m3
         Description: IBM servers, each with 24GB main memory; 1 used as Prime Client,
         20 used to generate the workload.

     13     1  Switch                  Cisco        Cisco Nexus 5596
         Description: Used for 10GbE cluster interconnections.

     14  5760  Software Enablement/    NetApp       OS-ONTAP1-CAP1-PREM-2P
               License
         Description: ONTAP Enablement Fee, per 0.1TB, capacity-based license.

Configuration Diagrams
======================

1) sfs2014-20170908-00021.config1.jpg (see SPEC SFS2014 results webpage)

Component Software
==================

Item No  Component       Type              Name and Version          Description
-------  --------------  ----------------  ------------------------  -------------------
      1  Linux           OS                Red Hat Enterprise Linux  OS for the 21
                                           6.6 for x86_64            clients
      2  ONTAP           Storage OS        9.2                       Storage Operating
                                                                     System
      3  Data Switch     Operating System  6.2(16)                   Cisco switch NX-OS
                                                                     (kickstart and
                                                                     system software)
      4  Cluster Switch  Operating System  7.3(1)                    Cisco switch NX-OS
                                                                     (kickstart and
                                                                     system software)

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Storage                               |
+----------------------------------------------------------------------+

Parameter Name   Value   Description
---------------  ------  ----------------------------------------------------
MTU              9000    Jumbo Frames configured for Cluster Interconnect
                         ports

Hardware Configuration and Tuning Notes
---------------------------------------

The NetApp FAS8200 storage controller 10Gb Ethernet ports used for cluster
interconnect (8 per HA pair) are configured with an MTU of 9000 (Jumbo
Frames).

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Clients                               |
+----------------------------------------------------------------------+

Parameter Name              Value                Description
--------------------------  -------------------  ---------------------------------------
rsize,wsize                 65536                NFS mount options for data block size
protocol                    tcp                  NFS mount option for protocol
nfsvers                     3                    NFS mount option for NFS version
somaxconn                   65536                Maximum TCP backlog an application can
                                                 request
tcp_fin_timeout             5                    TCP time to wait for the final packet
                                                 before the socket is closed
tcp_slot_table_entries      128                  Number of simultaneous TCP Remote
                                                 Procedure Call (RPC) requests
tcp_max_slot_table_entries  128                  Maximum number of simultaneous TCP
                                                 Remote Procedure Call (RPC) requests
udp_slot_table_entries      128                  Number of simultaneous UDP Remote
                                                 Procedure Call (RPC) requests
tcp_rmem                    4096 87380 8388608   Receive buffer size: min, default, max
tcp_wmem                    4096 87380 8388608   Send buffer size: min, default, max
aio-max-nr                  1048576              Maximum allowed number of events in all
                                                 active async I/O contexts
netdev_max_backlog          300000               Maximum number of packets allowed to
                                                 queue

Software Configuration and Tuning Notes
---------------------------------------

The parameters above were tuned for communication between the clients and the
storage controllers over 10Gb Ethernet, to optimize data transfer and minimize
overhead.
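For reference, the client parameters listed above map onto standard Linux NFS
mount options and kernel/sunrpc tunables. The following Python sketch
assembles an illustrative mount command and the corresponding sysctl settings
from the table; the server name, export path, and mount point are hypothetical
placeholders, and the fully qualified sysctl key names are the usual Linux
locations for these parameters rather than values quoted from this report:

    # Illustrative sketch of the client-side tuning listed above.
    # "nfs-server", "/flexgroup" and "/mnt/fg" are hypothetical placeholders;
    # the actual export and mount-point names are not disclosed in this report.

    mount_opts = "rsize=65536,wsize=65536,proto=tcp,nfsvers=3"
    print(f"mount -t nfs -o {mount_opts} nfs-server:/flexgroup /mnt/fg")

    # Kernel and sunrpc tunables from the table, expressed in sysctl.conf form.
    # The fully qualified key names are assumed, not quoted from the report.
    sysctls = {
        "net.core.somaxconn":                65536,
        "net.ipv4.tcp_fin_timeout":          5,
        "sunrpc.tcp_slot_table_entries":     128,
        "sunrpc.tcp_max_slot_table_entries": 128,
        "sunrpc.udp_slot_table_entries":     128,
        "net.ipv4.tcp_rmem":                 "4096 87380 8388608",
        "net.ipv4.tcp_wmem":                 "4096 87380 8388608",
        "fs.aio-max-nr":                     1048576,
        "net.core.netdev_max_backlog":       300000,
    }
    for name, value in sysctls.items():
        print(f"{name} = {value}")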
Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                              Data         Stable
 No   Description                                 Protection   Storage   Qty
----  ------------------------------------------  -----------  --------  ----
  1   4TB 7200 RPM drives used for data; 2x 14+2  RAID-DP      Yes        128
      RAID-DP RAID groups per storage controller
      node in the cluster
  2   4TB 7200 RPM drives used by the ONTAP       RAID-DP      Yes         12
      operating system; 1x 1+2 RAID-DP RAID
      group per storage controller node in the
      cluster
  3   128GB mSATA flash cards, 2 per HA pair;     none         Yes          4
      used as boot media and also for de-staging
      NVMEM data from main memory to flash in
      the event of a failure condition

Number of Filesystems      1
Total Capacity             330.4 TiB
Filesystem Type            NetApp FlexGroup

Filesystem Creation Notes
-------------------------

Each element (RAID groups, aggregates) comprising the exported FlexGroup,
along with the FlexGroup itself, was created using default values.

Storage and Filesystem Notes
----------------------------

The storage configuration consisted of 2 FAS8200 HA pairs (4 FAS8200
controller nodes in total). The two controllers in each HA pair were connected
in an SFO (storage failover) configuration. Together, the 4 controllers,
configured as 2 HA pairs of 2 controllers (also referred to as nodes) each,
comprise the tested FAS8200 cluster. Each storage controller was connected to
its own and its partner's disks in a multi-path HA configuration.

Each storage controller was the primary owner of 36 disks, distributed across
6 drive shelves, and was configured with 3 RAID groups. One aggregate was
created on each RAID group. The first two aggregates on each node, built on
14+2 RAID-DP RAID groups, held data for the file system. The third aggregate
on each controller, built on a 1+2 RAID-DP RAID group and referred to as the
"root aggregate", held ONTAP operating system related files. Additionally,
each storage controller was allocated 1 spare disk, for a total of 4 spare
disks in the tested FAS8200 cluster. (The spare drives are not included in the
Storage and Filesystems table above because they held no data during the
benchmark execution.)

A storage virtual machine ("SVM") was created on the cluster, spanning all
storage controller nodes. Within the SVM, a single FlexGroup volume was
created using the two data aggregates on each controller. A FlexGroup volume
is a scale-out NAS container that provides high performance along with
automatic load distribution and scalability.
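The drive layout described above can be cross-checked against the Bill of
Materials and the Storage and Filesystems table. The following Python sketch
is a simple arithmetic check of the per-controller and cluster-wide drive
counts:

    # Arithmetic check of the drive layout described above.
    controllers       = 4        # 2 HA pairs x 2 nodes
    data_rg_per_node  = 2        # two data RAID groups per controller
    disks_per_data_rg = 14 + 2   # 14 data + 2 parity (RAID-DP)
    root_rg_disks     = 1 + 2    # 1+2 RAID-DP root aggregate
    spares_per_node   = 1

    data_disks  = controllers * data_rg_per_node * disks_per_data_rg   # 128
    root_disks  = controllers * root_rg_disks                          # 12
    spare_disks = controllers * spares_per_node                        # 4

    per_node = (data_rg_per_node * disks_per_data_rg
                + root_rg_disks + spares_per_node)
    assert per_node == 36                                 # 36 disks per controller
    assert data_disks + root_disks + spare_disks == 144   # 144 HDDs in the BOM
    print(data_disks, root_disks, spare_disks)            # -> 128 12 4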
Transport Configuration - Physical
==================================

Item                  Number of
 No   Transport Type  Ports Used  Notes
----  --------------  ----------  ----------------------------------------------
  1   10GbE           36          For the client-to-storage network, the FAS8200
                                  cluster used a total of 16 10GbE connections
                                  from storage, communicating with NFSv3 over
                                  TCP/IP to 20 clients. The benchmark was
                                  conducted in a large shared-infrastructure
                                  lab; only the ports shown and documented here
                                  were used on the Cisco Nexus 7018 switch for
                                  this benchmark test.
  2   10GbE           16          The Cluster Interconnect network is connected
                                  via 10GbE to a Cisco Nexus 5596 switch, with 8
                                  connections to each HA pair.

Transport Configuration Notes
-----------------------------

Each NetApp FAS8200 HA pair used 8 10Gb Ethernet ports for data transport
connectivity to clients (through the Cisco Nexus 7018 switch), Item 1 above.
Each of the 20 clients driving the workload used one 10Gb Ethernet port for
data transport. All ports on the Item 1 network had the default MTU of 1500.
The Cluster Interconnect network, Item 2 above, used an MTU of 9000 (Jumbo
Frames).

Switches - Physical
===================

                                           Total  Used
Item                                       Port   Port
 No   Switch Name        Switch Type       Count  Count  Notes
----  -----------------  ----------------  -----  -----  ------------------------
  1   Cisco Nexus 7018   10Gb Ethernet        48     37  21 client-side data
      Switch                                             connections; 16
                                                         storage-side data
                                                         connections. Only the
                                                         ports on the Cisco Nexus
                                                         7018 Ethernet modules
                                                         needed for the solution
                                                         under test are included
                                                         in the Total Port Count.
  2   Cisco Nexus 5596   10Gb Ethernet        96     16  For Cluster
      Switch                                             Interconnect; MTU=9000

Processing Elements - Physical
==============================

Item
 No   Qty   Type  Location            Description                 Processing Function
----  ----  ----  ------------------  --------------------------  ----------------------
  1      4  CPU   Storage Controller  1.70 GHz Intel Xeon D-1587  NFS, TCP/IP, RAID and
                                                                  Storage Controller
                                                                  functions
  2     21  CPU   Client              3.06 GHz Intel Xeon X5675   NFS Client, Red Hat
                                                                  Linux OS

Processing Element Notes
------------------------

Each NetApp FAS8200 storage controller contains 1 Intel Xeon D-1587 processor
with 16 cores at 1.70 GHz. Each client contains 1 Intel Xeon X5675 processor
with 6 cores at 3.06 GHz.

Memory - Physical
=================

                                 Size in   Number of
Description                      GiB       Instances   Nonvolatile   Total GiB
------------------------------   -------   ---------   -----------   ---------
Main Memory for each NetApp      256       2           V             512
FAS8200 HA Pair
NVMe Flash Cache memory; 4TB     1024      8           NV            8192
per HA Pair
Memory for each client; 20 of    24        21          V             504
these drove the workload

Grand Total Memory Gibibytes                                         9208

Memory Notes
------------

Each storage controller has main memory that is used for the operating system
and for caching filesystem data. The Flash Cache module is a read cache used
for caching filesystem data. See "Stable Storage" for more information.

Stable Storage
==============

The WAFL filesystem logs writes, and other filesystem data-modifying
transactions, to a portion of controller main memory which we call NVMEM. In a
storage-failover (HA pair) configuration, as in the system under test, such
transactions are also logged (mirrored) to the NVMEM portion of main memory on
the partner storage controller, so that in the event of a storage controller
failure any transactions on the failed controller can be completed by the
partner controller. Filesystem operations are not acknowledged until the
storage system has confirmed that the related data are stored in the NVMEM
portion of memory on both storage controllers (when both controllers are
active).

In case of power loss, an integrated battery ensures stable storage by
providing power to the necessary components of the system long enough for the
NVMEM data to be encrypted and securely flushed to an mSATA non-volatile flash
device. NVMEM data does not flow over the Cluster Interconnect network; data
for NVMEM mirroring flows over an internal interconnect.

Solution Under Test Configuration Notes
=======================================

The cluster network comprised four 10GbE ports per controller (8 per HA pair),
connected via a Cisco Nexus 5596 switch. This provides high availability for
the cluster network in case of port or link failure. Each storage controller
had four 10GbE ports connected to the Cisco Nexus 7018 data switch; each
client had one such 10GbE connection. The data and cluster networks were
separate networks. All ports and interfaces on the data network were
configured with default 1500-byte frames; all ports on the cluster
interconnect network were configured with an MTU of 9000. All clients accessed
the filesystem from all the available network interfaces.

Other Solution Notes
====================

All standard data protection features, including background RAID and media
error scrubbing, software-validated RAID checksums, and double disk failure
protection via double-parity RAID (RAID-DP), were enabled during the test.

Dataflow
========

Please reference the configuration diagram. 20 clients were used to generate
the workload; 1 client acted as Prime Client to control the other 20. Each
client had one 10GbE connection, through the Cisco Nexus 7018 switch. Each
storage HA pair had 8 10GbE connections to the data switch. The filesystem
consisted of one NetApp FlexGroup. The clients mounted the FlexGroup volume as
an NFSv3 filesystem. The ONTAP cluster provided access to the FlexGroup volume
on every 10GbE port connected to the data switch (16 ports in total), and each
client created mount points across those 16 ports symmetrically.
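The report states that each client created mount points across the 16 storage
data ports symmetrically. The following Python sketch shows one way such a
symmetric mount layout can be generated; the LIF names, export path, and
mount-point paths are hypothetical placeholders, since the actual addressing
scheme is not disclosed in this report:

    # Sketch of a symmetric mount layout: 20 load-generating clients, each
    # mounting the single FlexGroup volume through all 16 storage data ports.
    # "lifNN", "/flexgroup" and the /mnt paths are hypothetical names, not
    # taken from the tested configuration.

    clients   = [f"client{c:02d}" for c in range(1, 21)]
    data_lifs = [f"lif{p:02d}" for p in range(1, 17)]   # 16 x 10GbE data ports

    mounts = {
        client: [f"mount -t nfs -o nfsvers=3,proto=tcp,"
                 f"rsize=65536,wsize=65536 {lif}:/flexgroup /mnt/{lif}"
                 for lif in data_lifs]
        for client in clients
    }

    # Every client ends up with one mount point per data port: 20 x 16 = 320.
    assert all(len(cmds) == 16 for cmds in mounts.values())
    print(sum(len(cmds) for cmds in mounts.values()))   # -> 320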
Other Notes
===========

None

Other Report Notes
==================

NetApp is a registered trademark and "Data ONTAP", "FlexGroup", and "WAFL" are
trademarks of NetApp, Inc. in the United States and other countries. All other
trademarks belong to their respective owners and should be treated as such.

===============================================================================

Generated on Wed Mar 13 16:49:38 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation