===============================================================================
                         SPEC SFS(R)2014_vda Result

   IBM Corporation : IBM Spectrum Scale 4.2 with Elastic Storage Server GL6
   SPEC SFS2014_vda = 1600 Streams (Overall Response Time = 33.98 msec)

===============================================================================

Performance
===========

   Business       Average
    Metric        Latency       Streams       Streams
   (Streams)      (msec)        Ops/Sec       MB/Sec
   ------------   ------------  ------------  ------------
            160            8.6          1600           742
            320           10.1          3201          1475
            480           12.2          4801          2216
            640           14.6          6401          2950
            800           18.0          8002          3692
            960           22.1          9602          4432
           1120           31.0         11201          5170
           1280           43.2         12801          5917
           1440           66.9         14396          6643
           1600          172.5         15942          7357

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|     IBM Spectrum Scale 4.2 with Elastic Storage Server GL6     |
+---------------------------------------------------------------+

Tested by                IBM Corporation
Hardware Available       December 2014
Software Available       May 2016
Date Tested              April 2016
License Number           11
Licensee Locations       Almaden, CA USA

IBM Spectrum Scale helps solve the challenge of explosive growth of
unstructured data against a flat IT budget. Spectrum Scale provides unified
file and object software-defined storage for high-performance, large-scale
workloads on-premises or in the cloud. Built upon IBM's award-winning General
Parallel File System (GPFS), Spectrum Scale includes the protocols, services,
and performance required by many industries: technical computing, big data,
HDFS, and business-critical content repositories. IBM Spectrum Scale provides
world-class storage management with extreme scalability, flash-accelerated
performance, and automatic policy-based storage tiering from flash through
disk to tape, reducing storage costs by up to 90% while improving security
and management efficiency in cloud, big data, and analytics environments.

IBM Elastic Storage Server is an optimized disk storage solution bundled with
IBM hardware and innovative IBM Spectrum Scale RAID technology that can
perform fast background disk rebuilds in minutes with no impact to
application performance. The solution also ensures data integrity from the
application down to the storage with end-to-end checksums, and provides
end-to-end data availability, reliability, and integrity with data-efficient
advanced erasure coding.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type          Vendor    Model/Name  Description
---- ----- ------------- --------- ----------- -------------------------------
  1     8  Spectrum      IBM       X3650-M4    Spectrum Scale client nodes.
           Scale Client
  2     1  Elastic       IBM       5146-GL6    The ESS-GL6 contains two
           Storage                             8247-22L IBM Elastic Storage
           Server GL6                          Server nodes and 6 DCS3700E
                                               storage expansion drawers. The
                                               storage expansion drawers were
                                               populated with a total of 348
                                               2 TB 7200 RPM NLSAS drives.
                                               The ESS also included 3
                                               optional two-port (feature
                                               code #EL3D) FDR InfiniBand
                                               adapters per server node.
  3     2  InfiniBand    Mellanox  SX6036      36-port non-blocking managed
           Switch                              56 Gbps InfiniBand/VPI SDN
                                               switch.
  4     1  Ethernet      SMC       SMC8150L2   50-port 10/100/1000 Mbps
           Switch        Networks              Ethernet switch.
  5     8  InfiniBand    Mellanox  MCX456A-F   2-port PCI FDR InfiniBand
           Adapter                             adapter used in the client
                                               nodes.
Configuration Diagrams
======================

1) sfs2014-20160411-00012.config1.png (see SPEC SFS2014 results webpage)
2) sfs2014-20160411-00012.config2.png (see SPEC SFS2014 results webpage)

Component Software
==================

Item                                Name and
 No   Component        Type             Version         Description
---- ---------------- ---------------- --------------- -----------------------
  1  Client Nodes     Spectrum Scale   4.2.0.3         The Spectrum Scale File
                      File System                      System is a distributed
                                                       file system that runs
                                                       on both the Elastic
                                                       Storage Server nodes
                                                       and the client nodes to
                                                       form a cluster. The
                                                       cluster allows for the
                                                       creation and management
                                                       of single-namespace
                                                       file systems.
  2  Client Nodes     Operating        Red Hat         The operating system on
                      System           Enterprise      the client nodes was
                                       Linux 7.2       64-bit Red Hat
                                       for x86_64      Enterprise Linux
                                                       version 7.2.
  3  Elastic Storage  Storage Server   4.0             ESS version 4.0
     Server                                            provides all of the
                                                       necessary software to
                                                       be compatible with
                                                       Spectrum Scale version
                                                       4.2.0.3 running on the
                                                       client nodes.

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                      Spectrum Scale Client Nodes                      |
+----------------------------------------------------------------------+

Parameter Name    Value            Description
---------------   ---------------  ----------------------------------------
verbsPorts        mlx5_0/1/1       InfiniBand device names, port numbers,
                  mlx5_1/1/2       and fabric numbers.
verbsRdma         enable           Enables InfiniBand RDMA transfers between
                                   Spectrum Scale client nodes and Elastic
                                   Storage Server nodes.
verbsRdmaSend     1                Enables the use of InfiniBand RDMA for
                                   most Spectrum Scale daemon-to-daemon
                                   communication.
Hyper-Threading   disabled         Disables the use of two threads per core
                                   in the CPU. The setting was changed in
                                   the BIOS menus of the client nodes.

Hardware Configuration and Tuning Notes
---------------------------------------

The first three configuration parameters were set using the "mmchconfig"
command on one of the nodes in the cluster. The verbs settings in the table
above allow for efficient use of the InfiniBand infrastructure; they
determine when data are transferred over IP and when they are transferred
using the verbs (RDMA) protocol. The InfiniBand traffic went through two
switches, item 3 in the Bill of Materials. The last parameter disabled
Hyper-Threading on the client nodes.

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                      Spectrum Scale Client Nodes                      |
+----------------------------------------------------------------------+

Parameter Name    Value            Description
---------------   ---------------  ----------------------------------------
maxFilesToCache   11M              Specifies the number of inodes to cache
                                   for recently used files that have been
                                   closed.
maxMBpS           10000            Specifies an estimate of how many
                                   megabytes of data can be transferred per
                                   second into or out of a single node.
maxStatCache      0                Specifies the number of inodes to keep in
                                   the stat cache.
pagepool          32G              Specifies the size of the file data and
                                   metadata cache on each node.
pagepoolMaxPhys   90               Percentage of physical memory that can be
MemPct                             assigned to the page pool.
workerThreads     1024             Controls the maximum number of concurrent
                                   file operations at any one instant, as
                                   well as the degree of concurrency for
                                   flushing dirty data and metadata in the
                                   background and for prefetching data and
                                   metadata.
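The client-node parameters above were applied with the Spectrum Scale
"mmchconfig" command (see the tuning notes). The following is a minimal
sketch only, not a record of the exact commands used; it assumes a
hypothetical user-defined node class named "clients" containing the eight
client nodes.

   # Sketch only -- the node class "clients" is hypothetical.

   # InfiniBand / RDMA settings (the "first three" hardware tuning
   # parameters). Hyper-Threading is a BIOS setting, not an mmchconfig one.
   mmchconfig verbsPorts="mlx5_0/1/1 mlx5_1/1/2" -N clients
   mmchconfig verbsRdma=enable,verbsRdmaSend=1 -N clients

   # Cache and concurrency settings from the software tuning table.
   mmchconfig pagepool=32G,pagepoolMaxPhysMemPct=90,workerThreads=1024 -N clients
   mmchconfig maxFilesToCache=11M,maxStatCache=0,maxMBpS=10000 -N clients

   # The ESS-side parameter nsdRAIDTracks=1M (next table) would be applied
   # the same way, targeting the ESS server nodes instead of the clients.

   # Most of these values take effect after the Spectrum Scale daemon is
   # restarted on the affected nodes.
   mmshutdown -N clients && mmstartup -N clients

Targeting a node class keeps the client-side overrides separate from the
mostly default settings used on the ESS nodes.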
+----------------------------------------------------------------------+
|                         Elastic Storage Server                        |
+----------------------------------------------------------------------+

Parameter Name    Value            Description
---------------   ---------------  ----------------------------------------
nsdRAIDTracks     1M               Specifies the number of tracks in the
                                   Spectrum Scale Native RAID buffer pool.

Software Configuration and Tuning Notes
---------------------------------------

The configuration parameters were set using the "mmchconfig" command on one
of the nodes in the cluster. Both the client nodes and the ESS used mostly
default tuning parameters. The parameters listed in the tables above reflect
values that might be used in a typical streaming environment.

Service SLA Notes
-----------------

There were no opaque services in use.

Storage and Filesystems
=======================

Item                                                        Stable
 No   Description                      Data Protection      Storage    Qty
---- --------------------------------- --------------------- -------- -----
  1  348 2 TB NLSAS drives in the ESS. Spectrum Scale Native  Yes         1
                                       RAID declustered
                                       arrays
  2  300 GB 10K RPM mirrored HDD pair  RAID-1                 No          8
     in Spectrum Scale client nodes
     used to store the OS.
  3  600 GB 10K RPM SAS mirrored HDD   RAID-1                 No          2
     pair in ESS nodes used to store
     the OS.

Number of Filesystems    1
Total Capacity           128 TiB
Filesystem Type          Spectrum Scale File System

Filesystem Creation Notes
-------------------------

A single Spectrum Scale file system was created with an 8 MiB block size for
data, a 1 MiB block size for metadata, a 4 KiB inode size, and a 128 MiB log
size. The file system was spread across all of the Network Shared Disks
(NSDs) defined by the ESS. Each client node and ESS node mounted the file
system. A policy was applied to the file system that places data and metadata
on separate pools as defined by the NSD configuration (see the sketch
following the Transport Configuration Notes below). The client nodes each had
an ext4 file system that hosted the operating system.

Storage and Filesystem Notes
----------------------------

The ESS was configured with two declustered arrays, each containing 174 2 TB
NLSAS drives. The arrays were configured to tolerate the failure of any 2
drives or any single DCS3700E enclosure. Two NSDs were created using two
64 TiB 8+2P vdisks, one per declustered array, and were designated to hold
Spectrum Scale file system data. Two additional NSDs were created using two
500 GiB 3-way replicated vdisks, one per declustered array, and were
designated to hold Spectrum Scale file system metadata. Each vdisk was
created within a declustered array, and the blocks of the vdisk were spread
across all the available physical disks in the array.

The cluster used a two-tier architecture. The client nodes perform the
file-level operations; the data requests are transmitted to the ESS nodes,
which perform the block-level operations. In Spectrum Scale terminology the
load generators are NSD clients and the ESS nodes are NSD servers. The NSDs
were the storage devices specified when creating the Spectrum Scale file
system.

Transport Configuration - Physical
==================================

Item                    Number of
 No   Transport Type    Ports Used  Notes
---- ------------------ ----------- ------------------------------------------
  1  1 GbE cluster          10      Each node connects to a 1 GbE
     network                        administration network with MTU=1500.
  2  FDR InfiniBand         28      Client nodes have 2 FDR links each, and
     cluster network                each ESS node has 6 FDR links to a shared
                                    FDR InfiniBand cluster network.

Transport Configuration Notes
-----------------------------

The 1 GbE network was used for administrative purposes. All benchmark traffic
flowed through the Mellanox SX6036 InfiniBand switches. Each client node had
two active InfiniBand ports. Both the 1 GbE port and the first InfiniBand
port were used by Spectrum Scale for inter-node communication. Each client
node InfiniBand port was on a separate FDR fabric for RDMA connections
between nodes.
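The report does not list the commands used to create the NSDs, the file
system, or the placement policy. The sketch below is an illustration only of
the steps summarized in the Filesystem Creation Notes and Storage and
Filesystem Notes above, using standard Spectrum Scale commands; the device
name fs0, the mount point, the file paths, the pool name "data", and the NSD
names are all hypothetical, and on an ESS the vdisk-backed NSD stanzas would
in practice be generated by the ESS vdisk tooling rather than written by
hand.

   # Sketch only -- fs0, /tmp paths, NSD names, and the "data" pool name are
   # hypothetical. The stanzas are shown merely to illustrate how usage= and
   # pool= assignments separate data from metadata across the two
   # declustered arrays.
   #
   # /tmp/nsd.stanza:
   #   %nsd: nsd=meta_da1 usage=metadataOnly failureGroup=1 pool=system
   #   %nsd: nsd=meta_da2 usage=metadataOnly failureGroup=2 pool=system
   #   %nsd: nsd=data_da1 usage=dataOnly     failureGroup=1 pool=data
   #   %nsd: nsd=data_da2 usage=dataOnly     failureGroup=2 pool=data

   # Create the file system: 8 MiB data blocks, 1 MiB metadata blocks,
   # 4 KiB inodes, and a 128 MiB log, spread across all four NSDs.
   mmcrfs fs0 -F /tmp/nsd.stanza -B 8M --metadata-block-size 1M \
          -i 4096 -L 128M -T /gpfs/fs0

   # Placement policy: all file data goes to the "data" pool, so metadata
   # stays on the metadataOnly NSDs in the system pool.
   echo "RULE 'default' SET POOL 'data'" > /tmp/placement.pol
   mmchpolicy fs0 /tmp/placement.pol

   # Mount the file system on every node in the cluster.
   mmmount fs0 -a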
Switches - Physical
===================

                                            Total  Used
Item                                        Port   Port
 No   Switch Name          Switch Type      Count  Count  Notes
---- -------------------- ----------------- ------ ------ ---------------------
  1  SMC 8150L2           10/100/1000 Mbps    50     10   The default
                          Ethernet                        configuration was
                                                          used on the switch.
  2  Mellanox SX6036 #1   FDR InfiniBand      36     14   The default
                                                          configuration was
                                                          used on the switch.
  3  Mellanox SX6036 #2   FDR InfiniBand      36     14   The default
                                                          configuration was
                                                          used on the switch.

Processing Elements - Physical
==============================

Item
 No   Qty  Type  Location         Description                Processing Function
---- ----- ----- ---------------- -------------------------- --------------------
  1    16  CPU   Spectrum Scale   Intel(R) Xeon(R) CPU       Spectrum Scale
                 client nodes     E5-2650 v2 @ 2.60 GHz,     client, load
                                  8-core                     generator, device
                                                             drivers
  2     4  CPU   Elastic Storage  IBM POWER8(R) 10-core,     Spectrum Scale NSD
                 Server           3.42 GHz                   server, Spectrum
                                                             Scale Native RAID,
                                                             device drivers

Processing Element Notes
------------------------

Each of the Spectrum Scale client nodes had 2 physical processors. Each
processor had 8 cores with one thread per core. Each ESS node had 2 physical
processors. Each processor had 10 cores with SMT2 enabled by default.

Memory - Physical
=================

                              Size in   Number of
Description                     GiB     Instances   Nonvolatile   Total GiB
--------------------------   --------- ----------- ------------- -----------
Spectrum Scale client node        128           8   V                  1024
system memory
ESS node system memory            232           2   V                   464
ESS node integrated NVRAM           4           2   NV                    8
module

Grand Total Memory Gibibytes                                           1496

Memory Notes
------------

In the client nodes Spectrum Scale reserves a portion of the physical memory
for file data and metadata caching. A portion of the memory is also reserved
for buffers used for node-to-node communication. In the ESS nodes Spectrum
Scale reserves a portion of the physical memory for caching block data. The
integrated NVRAM module is used to store fast-write data and some block-level
log data.

Stable Storage
==============

The ESS nodes each have an NVRAM card that is used to temporarily store some
of the modified data before it is written to the backend disks. Modified data
designated as "fast writes" are stored initially in the NVRAM, while standard
modified data go directly to the backend disks. The data in the NVRAM is
mirrored between the two ESS nodes; in the case of a single node failure, the
write data and any destaging to backend disk are handled by the surviving
node. In the case of a general power outage, a capacitor on the PCI card
holds enough charge to keep the card powered long enough for the NVRAM data
to be destaged to a stable flash medium on the card. All of the modified
writes in the benchmark were handled by the ESS.

Solution Under Test Configuration Notes
=======================================

The solution under test was a Spectrum Scale cluster optimized for streaming
environments. The NSD client nodes were also the load generators for the
benchmark, and the benchmark was executed from one of the client nodes. All
of the Spectrum Scale nodes were connected to a 1 GbE switch and two FDR
InfiniBand switches.
The Elastic Storage Server consisted of the NSD server nodes and 348 NLSAS
drives in 6 disk expansion drawers attached to the nodes via 6 Gbps SAS
connections. Each ESS node had a SAS connection to each DCS3700E disk storage
enclosure. Each ESS node also included a PCI-attached NVRAM card. The data in
each NVRAM card was mirrored between the ESS nodes, which communicated with
each other over the InfiniBand network.

Other Solution Notes
====================

Data protection and integrity features of the ESS were enabled during the
benchmark execution. These features include disk scrubbing, NSD checksums and
version numbers, double disk failure tolerance, and single storage enclosure
fault tolerance.

Dataflow
========

The 8 Spectrum Scale client nodes were the load generators for the benchmark.
Each load generator had access to the single-namespace Spectrum Scale file
system. The benchmark accessed a single mount point on each load generator,
and each of those mount points corresponded to a single shared base directory
in the file system. The NSD clients processed the file operations, and the
data requests to and from disk were serviced by the Elastic Storage Server.

Other Notes
===========

IBM, IBM Spectrum Scale, IBM Elastic Storage, Power, and POWER8 are
trademarks of International Business Machines Corp., registered in many
jurisdictions worldwide. Intel and Xeon are trademarks of Intel Corporation
in the U.S. and/or other countries. Mellanox is a registered trademark of
Mellanox Ltd.

Other Report Notes
==================

None

===============================================================================

Generated on Wed Mar 13 16:53:23 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation