SPEC SFS(R)2014_eda Result

SPEC SFS(R) Subcommittee : Reference submission
SPEC SFS2014_eda = 100 Job_Sets (Overall Response Time = 0.71 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Job_Sets     Job_Sets
  (Job_Sets)     (msec)       Ops/Sec       MB/Sec
  ------------ ------------ ------------ ------------
       10           0.4          4500           80
       20           0.4          9000          160
       30           0.5         13500          241
       40           0.5         18000          321
       50           0.6         22501          402
       60           0.7         27001          483
       70           0.8         31501          563
       80           0.9         36001          644
       90           1.0         40502          725
      100           1.4         45002          805

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                      Reference submission                     |
+---------------------------------------------------------------+
Tested by              SPEC SFS(R) Subcommittee
Hardware Available     12/2017
Software Available     12/2017
Date Tested            12/2017
License Number         55
Licensee Locations     Hopkinton, Massachusetts

The SPEC SFS(R) 2014 Reference Solution consists of a Dell PowerEdge R630 -
rack-mountable - Xeon E5-2640 v4 2.4 GHz - 96 GB - 600 server, based on the
8-core Intel Xeon E5-2640 v4 2.4 GHz processor, connected to a 24-node VMware
cluster using the NFSv3 protocol over an Ethernet network. The PowerEdge R630
server provides I/O from 8 file systems and 8 volumes.

The PowerEdge R630 accelerates business and increases speed-to-market by
providing scalable, high-performance storage for mission-critical and highly
transactional applications. Based on the powerful Intel E5-2600 processor
family, the PowerEdge R630 uses an all-SSD flash storage architecture for
block and file, and supports the native NAS and iSCSI protocols. Each
PowerEdge R630 server uses a single-socket storage processor with full
12 Gb SAS back-end connectivity, and includes 12 Dell 1.6 TB SAS 12Gb/s
solid-state drives.
The storage server runs Linux SLES 12 SP1 (#2 SMP) and uses the Linux NFSv3
server. A second PowerEdge R630 is used in Active-Passive mode as a failover
server. The PowerEdge R630 is configured with 12 Dell enterprise-class 2.5",
1.6 TB solid-state drives with Serial Attached SCSI 12Gb/s drive technology,
and with 4 10GbE Ethernet network ports.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type        Vendor  Model/Name    Description
---- ---- ----------- ------- ------------- -----------------------------------
  1    2  Storage     Dell    PowerEdge     PowerEdge R630 - rack-mountable -
          Cluster             R630          single Xeon E5-2640 v4 2.4 GHz -
          Node                              96 GB - 4x10GbE
  2    4  Load        Dell    Dual Socket   PowerEdge R430 - dual Xeon
          Generator           PowerEdge     E5-2603 v3 1.7 GHz - 24 GB -
          Servers             R430 Server   2x10GbE
  3    2  Ethernet    Dell    PowerConnect  PowerConnect 8024 24-port 10Gb
          Switch              8024          Ethernet switch (10GBASE-T)

Configuration Diagrams
======================
1) sfs2014-20171219-00025.config1.jpg (see SPEC SFS2014 results webpage)

Component Software
==================

Item               Name and
 No   Component    Version         Description
---- ------------ --------------- -----------------------------------
  1  PowerEdge    Linux SLES12SP1 The PowerEdge R630 nodes were
     R630         #2 SMP          running the SUSE Linux OS SLES 12
  2  VMware ESXi  5.1 (VM         The 4 PowerEdge R430 servers were
     Hypervisor   version 9)      running the VMware ESXi 5.1
                                  Hypervisor, configured with 6 VMs
                                  each
  3  Load         Linux CentOS    Each VMware ESXi 5.1 Hypervisor was
     Generators   7.2 64-bit      configured to run 6 VMs running the
                                  Linux OS, 24 VMs in total

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                    Load Generator Virtual Machine                    |
+----------------------------------------------------------------------+
Parameter Name    Value    Description
---------------  -------  ----------------------------------------
MTU              9000     Maximum Transfer Unit

Hardware Configuration and Tuning Notes
---------------------------------------
The ports' MTU on the load generators, network switch and storage servers
was set to jumbo frames (MTU=9000).

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                 n/a                                  |
+----------------------------------------------------------------------+
Parameter Name    Value    Description
---------------  -------  ----------------------------------------
n/a              n/a      n/a

Software Configuration and Tuning Notes
---------------------------------------
No software tunings were used - default NFS mount options were used.

Service SLA Notes
-----------------
No opaque services were in use.

Storage and Filesystems
=======================

Item                                                        Stable
 No   Description                           Data Protection Storage   Qty
---- ------------------------------------- ---------------- ------- -----
  1  PowerEdge R630 server: 1.6 TB SAS SSD RAID5 3+1        Yes        12
     drives
  2  Virtual machine: 18 GB SAS drives     None             Yes        24

Number of Filesystems    8
Total Capacity           8 TB
Filesystem Type          NFSv3

Filesystem Creation Notes
-------------------------
The file systems were created on the PowerEdge R630 using all default
parameters.

Storage and Filesystem Notes
----------------------------
The VMs' storage was configured on the ESXi server and shared from a single
600 GB SAS 15K RPM HDD.

Transport Configuration - Physical
==================================

Item                  Number of
 No   Transport Type  Ports Used Notes
---- ---------------- ---------- ----------------------------------------------
  1  10 Gbit on        4         4 ports were connected and used for the test;
     Storage Node                4 were standby
  2  10 Gbit on Load   8         2 ports were connected on each ESXi server
     Generators                  and split into 6 VMs using an internal
                                 private network

Transport Configuration Notes
-----------------------------
All load generator VM clients were connected to an internal software switch
inside each ESXi server. This internal switch was connected to the 10 GbE
switch.
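The jumbo-frame and default NFS mount settings noted above can be reproduced
with standard Linux tools. A minimal sketch, assuming an illustrative
interface name (eth0), server name and export path that are not taken from
this report:

```shell
# Set jumbo frames (MTU 9000) on the 10 GbE interface; "eth0" is an
# assumed interface name - match it to the actual NIC on each host.
ip link set dev eth0 mtu 9000

# Verify the new MTU took effect.
ip link show dev eth0 | grep -o 'mtu [0-9]*'

# Mount one of the 8 NFSv3 exports with default options, as the report
# states; only the NFS version is pinned. Server and export names are
# illustrative.
mkdir -p /mnt/fs1
mount -t nfs -o vers=3 r630-server:/export/fs1 /mnt/fs1
```

Both commands require root privileges, and the MTU change must be applied
consistently on every hop (clients, switch ports, storage nodes) to avoid
fragmentation or blackholed jumbo frames.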
Switches - Physical
===================

                                           Total  Used
Item                                       Port   Port
 No   Switch Name        Switch Type       Count  Count  Notes
---- ------------------- ----------------- ------ ------ ----------------------
  1  PowerConnect 8024   10 GbE Ethernet    48     24    The VMs were connected
                         ESXi Servers to                 to the 10 Gbit switch
                         Storage nodes                   using a private
                         interconnect                    network on the ESXi

Processing Elements - Physical
==============================

Item
 No   Qty  Type      Location  Description                Processing Function
---- ---- --------- --------- -------------------------- -------------------
  1    1  Xeon      CPU       Intel Xeon Processor       NFSv3 Server
          E5-2640             E5-2640 v4 with 8 cores
          v4
  2    8  Xeon      CPU       Intel Xeon Processor       Load Generators
          E5-2600             E5-2600 v4 with 6 cores
          v4

Processing Element Notes
------------------------
The 4 ESXi servers (on PowerEdge R430) used dual-socket E5-2600 v4
processors, and the load generator VMs were configured with 2 cores each,
without hyperthreading.

Memory - Physical
=================

                           Size in   Number of
Description                  GiB     Instances  Nonvolatile  Total GiB
------------------------- --------- ---------- ------------ ------------
PowerEdge R630 main           96         1          V             96
memory
PowerEdge R630 NVRAM         160         1          NV           160
module with Vault-to-SSD
Load generator VM memory       4        24          V             96

Grand Total Memory Gibibytes                                     352

Memory Notes
------------
Each PowerEdge R630 storage controller has main memory that is used for the
operating system and for caching filesystem data. It uses a 160 GiB
partition of one SSD device to provide stable storage for writes that have
not yet been written to disk.

Stable Storage
==============
Each PowerEdge R630 storage node is equipped with an NVRAM journal that
stores writes destined for the local SSD disks. The NVRAM mirrors its data
to a partition of an SSD flash device in the event of power loss.
Solution Under Test Configuration Notes
=======================================
The system under test consisted of 2 PowerEdge R630 storage nodes, 1U each,
configured as Active-Standby and connected through 4 10 GbE ports of a
4-port NIC. Each storage node was configured with 4 10GbE network interfaces
connected to a 10GbE switch. There were 24 load generating clients, each
connected to the same PowerConnect 8024 Ethernet switch as the PowerEdge
R630 storage nodes.

Other Solution Notes
====================
None

Dataflow
========
Each load generating client mounted all 8 file systems using NFSv3. Because
there is a single active storage node, all clients mounted all 8 file
systems from that node. The order of the clients as used by the benchmark
was round-robin distributed, so that as the load scaled up, each additional
process used the next file system. This ensured an even distribution of
load over the network and among the 8 file systems configured on the
storage node.

Other Notes
===========
None

Other Report Notes
==================
None

===============================================================================

Generated on Wed Mar 13 16:56:56 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation
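The round-robin distribution described in the Dataflow section can be
illustrated with a short sketch. This is a simplified model, not the
benchmark's actual code, and the mount-point names are assumptions:

```python
# Sketch of the round-robin dataflow described above: successive load
# generating processes are assigned to the 8 file systems in turn, so
# load stays evenly spread as job sets scale up. Illustration only;
# the real ordering is internal to SPEC SFS 2014.

NUM_FILESYSTEMS = 8

def filesystem_for_process(proc_index: int) -> str:
    """Map a benchmark process index to one of the 8 NFSv3 mounts."""
    return f"/mnt/fs{proc_index % NUM_FILESYSTEMS + 1}"

# With 16 processes, each file system receives exactly 2 processes.
assignments = [filesystem_for_process(i) for i in range(16)]
counts = {fs: assignments.count(fs) for fs in set(assignments)}
print(counts)  # every one of the 8 mounts appears exactly twice
```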