SPEC SFS®2014_vdi Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

SPEC SFS(R) Subcommittee SPEC SFS2014_vdi = 100 Desktops
Reference submission Overall Response Time = 0.71 msec


Performance

Business Metric   Average Latency   Desktops    Desktops
(Desktops)        (msec)            Ops/Sec     MB/Sec
 10               0.539              2000        29
 20               0.546              4000        58
 30               0.581              6000        87
 40               0.627              8000       116
 50               0.667             10000       146
 60               0.713             12000       175
 70               0.785             14000       204
 80               0.807             16000       234
 90               0.885             18000       263
100               0.999             20000       292
Performance Graph
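
The published Overall Response Time of 0.71 msec can be cross-checked against the table above. The short Python sketch below assumes that the Overall Response Time is the load-weighted (trapezoidal) average of latency over the measured load points; the official SPEC SFS2014 reporting tool may define it slightly differently, but this assumption reproduces the published value.

    # Cross-check of the Overall Response Time from the performance table.
    # Assumption: ORT is the trapezoidal (load-weighted) mean latency over
    # the measured load range; this is an illustrative sketch, not the
    # official SPEC SFS2014 reporting code.
    desktops = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
    latency_msec = [0.539, 0.546, 0.581, 0.627, 0.667,
                    0.713, 0.785, 0.807, 0.885, 0.999]

    # Area under the latency-vs-load curve (trapezoidal rule) ...
    area = sum((latency_msec[i] + latency_msec[i + 1]) / 2
               * (desktops[i + 1] - desktops[i])
               for i in range(len(desktops) - 1))
    # ... divided by the measured load range gives the weighted average.
    ort = area / (desktops[-1] - desktops[0])
    print(f"Overall Response Time ~ {ort:.2f} msec")   # prints ~0.71 msec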


Product and Test Information

Reference submission
Tested by             SPEC SFS(R) Subcommittee
Hardware Available    12/2017
Software Available    12/2017
Date Tested           12/2017
License Number        55
Licensee Locations    Hopkinton, Massachusetts

The SPEC SFS(R) 2014 Reference Solution consists of a Dell PowerEdge R630 - rack-mountable - Xeon E5-2640V4 2.4 GHz - 96 GB - 600 server, based on an Intel Xeon E5-2640V4 2.4 GHz 8-core processor, connected to a 24-node VMware cluster using the NFSv3 protocol over an Ethernet network.

The PowerEdge R630 server provides IO/s from 8 file systems and 8 volumes. The PowerEdge R630 accelerates business and increases speed-to-market by providing scalable, high-performance storage for mission-critical and highly transactional applications. Based on the Intel E5-2600 processor family, the PowerEdge R630 uses an all-SSD flash storage architecture for block and file, and supports the native NAS and iSCSI protocols. Each PowerEdge R630 server uses a single-socket storage processor, full 12 Gb SAS back-end connectivity, and includes 12 SAS SSDs (Dell solid state drive, 1.6 TB, SAS 12Gb/s). The storage server runs Linux SLES12SP1 #2 SMP and uses the Linux NFSv3 server. A second PowerEdge R630 is used in Active-Passive mode as a failover server. Each PowerEdge R630 is configured with 12 Dell enterprise-class 2.5", 1.6TB solid state drives using Serial Attached SCSI 12Gb/s drive technology and 4 10GbE Ethernet network ports.

Solution Under Test Bill of Materials

Item No  Qty  Type                    Vendor  Model/Name                         Description
1        2    Storage Cluster Node    Dell    PowerEdge R630                     PowerEdge R630 - rack-mountable - single Xeon E5-2640V4 2.4 GHz - 96GB - 4x10GbE
2        4    Load Generator Servers  Dell    Dual Socket PowerEdge R430 Server  PowerEdge R430 - Dual Xeon E5-2603V3 1.7GHz - 24GB - 2x10GbE
3        2    Ethernet Switch         Dell    PowerConnect 8024                  PowerConnect 8024 24 Port 10Gb Ethernet Switch (10GBASE-T)

Configuration Diagrams

  1. Diagram

Component Software

Item No  Component          Type         Name and Version    Description
1        PowerEdge R630     Linux        SLES12SP1 #2 SMP    The PowerEdge R630 servers were running SUSE Linux (SLES 12)
2        VMware Hypervisor  ESXi Server  5.1 (VM version 9)  The 4 PowerEdge R430 servers ran the VMware ESXi 5.1 Hypervisor, each configured with 6 VMs
3        Load Generators    Linux        CentOS 7.2 64-bit   Each ESXi 5.1 Hypervisor ran 6 Linux load generator VMs, for 24 VMs in total

Hardware Configuration and Tuning - Physical

Load Generator Virtual Machine
Parameter Name  Value  Description
MTU             9000   Maximum Transmission Unit

Hardware Configuration and Tuning Notes

The port MTU on the load generators, network switch, and storage servers was set to jumbo frames (MTU=9000).
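
As an illustration of how this setting can be verified on the Linux hosts, the sketch below reads the interface MTU from sysfs; the interface names are placeholders, and setting the value itself would typically be done per port with a command such as "ip link set dev <iface> mtu 9000".

    # Minimal sketch: verify that jumbo frames (MTU=9000) are in effect on a
    # Linux host. Interface names below are placeholders, not the ports used
    # in the tested configuration.
    from pathlib import Path

    def read_mtu(iface: str) -> int:
        # Standard Linux sysfs location for the per-interface MTU.
        return int(Path(f"/sys/class/net/{iface}/mtu").read_text())

    for iface in ("eth0", "eth1"):   # placeholder interface names
        try:
            mtu = read_mtu(iface)
            print(f"{iface}: MTU={mtu} ({'OK' if mtu == 9000 else 'not jumbo'})")
        except FileNotFoundError:
            print(f"{iface}: interface not present")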

Software Configuration and Tuning - Physical

n/a
Parameter Name  Value  Description
n/a             n/a    n/a

Software Configuration and Tuning Notes

No software tuning was applied; default NFS mount options were used.

Service SLA Notes

No opaque services were in use.

Storage and Filesystems

Item No  Description                                  Data Protection  Stable Storage  Qty
1        PowerEdge R630 server: 1.6TB SAS SSD Drives  RAID5 3+1        Yes             12
2        Virtual Machine: 18GB SAS Drives             None             Yes             24
Number of Filesystems  8
Total Capacity         8 TB
Filesystem Type        NFSv3

Filesystem Creation Notes

The file systems were created on the PowerEdge R630 using all default parameters.

Storage and Filesystem Notes

The VMs' storage was configured on the ESXi server and shared from a single 600GB SAS 15K RPM HDD.

Transport Configuration - Physical

Item No  Transport Type              Number of Ports Used  Notes
1        10 Gbit on Storage Node     4                     4 ports were connected and used for the test; 4 were on standby
2        10 Gbit on Load Generators  8                     2 ports were connected on each ESXi server and shared by its 6 VMs over an internal private network

Transport Configuration Notes

All load generator VM clients were connected to an internal software switch inside each ESXi server. This internal switch was connected to the 10 GbE switch.

Switches - Physical

Item No  Switch Name        Switch Type                                                   Total Port Count  Used Port Count  Notes
1        PowerConnect 8024  10 GbE Ethernet, ESXi servers to storage nodes interconnect   48                24               The VMs were connected to the 10 Gbit switch using a private network on the ESXi hosts

Processing Elements - Physical

Item No  Qty  Type             Location  Description                                   Processing Function
1        1    Xeon E5-2640 v4  CPU       Intel Xeon Processor E5-2640 v4 with 8 cores  NFSv3 Server
2        8    Xeon E5-2600 v4  CPU       Intel Xeon Processor E5-2600 v4 with 6 cores  Load Generators

Processing Element Notes

The 4 ESXi servers (PowerEdge R430) used dual-socket E5-2600 v4 processors, and the load generator VMs were configured with 2 cores each, without hyperthreading.

Memory - Physical

Description                                    Size in GiB  Number of Instances  Nonvolatile  Total GiB
PowerEdge R630 main memory                     96           1                    V            96
PowerEdge R630 NVRAM module with Vault-to-SSD  160          1                    NV           160
Load generator VM memory                       4            24                   V            96
Grand Total Memory Gibibytes                                                                  352

Memory Notes

Each PowerEdge R630 storage controller has main memory that is used for the operating system and for caching filesystem data. It uses a 160GiB partition of one SSD device to provide stable storage for writes that have not yet been written to disk.

Stable Storage

Each PowerEdge R630 storage node is equipped with an NVRAM journal that stores writes destined for the local SSD disks. In the event of power loss, the NVRAM mirrors its data to a partition of an SSD flash device.

Solution Under Test Configuration Notes

The system under test consisted of 2 PowerEdge R630 storage nodes, 1U each, configured as Active-Standby and connected through the 4 10 GbE ports of a 4-port NIC. Each storage node was configured with 4 10GbE network interfaces connected to a 10GbE switch. There were 24 load-generating clients, each connected to the same PowerConnect 8024 Ethernet switch as the PowerEdge R630 storage nodes.

Other Solution Notes

None

Dataflow

Each load-generating client mounted all 8 file systems using NFSv3. Because there is a single active storage node, all clients mounted all 8 file systems from that node. The benchmark assigned clients to file systems in round-robin order, so that as the load scaled up, each additional process used the next file system. This ensured an even distribution of load over the network and among the 8 file systems configured on the storage node.
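
The sketch below illustrates this distribution, assuming 24 client VMs and 8 NFSv3 file systems exported by the active storage node; the host and export names are hypothetical, and the benchmark's own configuration controls the real assignment.

    # Illustrative sketch of the client/file-system layout described above.
    # Names ("storage1", "/fs1".."/fs8", "client01".."client24") are
    # hypothetical placeholders.
    clients = [f"client{i:02d}" for i in range(1, 25)]
    filesystems = [f"/fs{i}" for i in range(1, 9)]
    storage_node = "storage1"

    # Every client mounts all 8 file systems from the single active node
    # (default NFSv3 mount options, per the tuning notes above).
    mounts = {c: [f"{storage_node}:{fs}" for fs in filesystems] for c in clients}

    # As the load scales up, each additional benchmark process targets the
    # next file system in round-robin order, spreading work evenly.
    def filesystem_for_process(proc_index: int) -> str:
        return filesystems[proc_index % len(filesystems)]

    for p in range(12):   # first 12 processes, as an example
        print(f"process {p:2d} -> {filesystem_for_process(p)}")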

Other Notes

None

Other Report Notes

None


Generated on Wed Mar 13 16:56:56 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation