SPECstorage™ Solution 2020_ai_image Result

Copyright © 2016-2020 Standard Performance Evaluation Corporation

SPEC Storage(TM) Subcommittee
Reference Submission
SPECstorage Solution 2020_ai_image = 12 AI_Jobs
Overall Response Time = 6.77 msec


Performance

Business Metric (AI_Jobs) | Average Latency (msec) | AI_Jobs Ops/Sec | AI_Jobs MB/Sec
 1 |  4.307 |  435 |   94
 2 |  3.044 |  870 |  188
 3 |  3.330 | 1305 |  282
 4 |  3.084 | 1740 |  377
 5 |  3.550 | 2175 |  471
 6 |  3.570 | 2610 |  565
 7 |  4.074 | 3045 |  660
 8 |  3.974 | 3480 |  754
 9 | 10.521 | 3915 |  848
10 | 11.209 | 4345 |  942
11 | 17.987 | 4784 | 1036
12 | 15.792 | 5219 | 1130
Performance Graph
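The SPEC run rules define the overall response time as the area under the latency-versus-throughput curve divided by the peak throughput. The official reporting tool computes this from the full run data, so the trapezoidal sketch below (the `approx_ort` helper is illustrative, not part of the benchmark harness) only approximates the published 6.77 msec figure from the table above:

```python
# Approximate SPEC overall response time (ORT): area under the
# latency-vs-ops curve divided by the maximum achieved ops rate.
# Data points are taken from the performance table above.
ops = [435, 870, 1305, 1740, 2175, 2610, 3045, 3480, 3915, 4345, 4784, 5219]
lat = [4.307, 3.044, 3.330, 3.084, 3.550, 3.570, 4.074, 3.974,
       10.521, 11.209, 17.987, 15.792]

def approx_ort(ops, lat):
    # Flat extension from zero load to the first point is an
    # assumption; the official tool derives this from the run data.
    area = ops[0] * lat[0]
    # Trapezoidal area between successive load points.
    for i in range(1, len(ops)):
        area += (ops[i] - ops[i - 1]) * (lat[i] + lat[i - 1]) / 2
    return area / ops[-1]

print(round(approx_ort(ops, lat), 2))  # in the neighborhood of the reported 6.77 msec
```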


Product and Test Information

Reference Submission
Tested by: SPEC Storage(TM) Subcommittee
Hardware Available: 12/2020
Software Available: 12/2020
Date Tested: 12/2020
License Number: 0
Licensee Locations: Newton, Massachusetts

The SPEC Storage(TM) Solution 2020 Reference Solution consists of a FreeNAS server (Xeon, 256 GB RAM) based on a 20-core Xeon processor, connected to an 8-node VMware cluster using the NFSv3 protocol over a 10 GbE Ethernet network.

The FreeNAS server provides I/O from 2 file systems and 2 volumes, and runs in the iozone.org lab. It uses a dual-socket storage processor with full 12 Gb/s SAS back-end connectivity and includes 40 10,000 RPM 450 GB SAS 12 Gb/s disk drives (8 TiB total capacity). The storage server runs FreeOS 11.3-RELEASE-p14 with an NFSv3 server and two 10 GbE Ethernet networks.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | NFSv3 FreeNAS server | FreeNAS | FreeNAS | 1: 200 GB SSD device for logging; 2: 2x200 GB SSD devices (ARC cache); 3: 2x SAS controllers; 4: 40x450 GB SAS 12 Gb/s, 10k RPM HDD
2 | 1 | Load Generator Server 1 | Generic | Dual Socket AMD Server | Generic server - dual quad-core Opteron 3 GHz - 32 GB - 2x10 GbE
3 | 1 | Load Generator Server 2 | Generic | Dual Socket Xeon Server | Generic server - dual Xeon E5-2603 v3 1.7 GHz - 32 GB - 2x10 GbE
4 | 1 | Ethernet Switch | Quanta | Quanta | 24-port 10 GbE Ethernet switch

Configuration Diagrams

  1. FreeNAS Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | FreeNAS | FreeOS | 11.3-RELEASE-p14 | Software running on the FreeNAS hardware and external Acme disk enclosure
2 | VMware Hypervisor | ESXi Server | 5.5 (VM version 9) | The 2 ESX servers ran the VMware ESXi 5.5 hypervisor and were configured with 4 VMs each
3 | Load Generators | Linux | CentOS 7.2 64-bit | Each of the 2 VMware ESXi 5.5 hypervisors was configured to run 4 VMs running Linux, for a total of 8 VMs

Hardware Configuration and Tuning - Virtual

Load Generator Virtual Machines
Parameter Name | Value | Description
MTU | 1500 | Maximum Transfer Unit

Hardware Configuration and Tuning Notes

The MTU of the ports on the load generators, network switch, and storage server was set to the default of 1500.

Software Configuration and Tuning - Virtual

N/A
Parameter Name | Value | Description
N/A | N/A | N/A

Software Configuration and Tuning Notes

No software tunings were used - default NFS mount options were used.

Service SLA Notes

No opaque services were in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | FreeNAS server: 10k RPM 450 GB SAS drives | RAID1 | Yes | 40
2 | Virtual machine: 16 GB SAS drives for OS | None | Yes | 8
Number of Filesystems: 2
Total Capacity: 8 TiB
Filesystem Type: NFSv3

Filesystem Creation Notes

The file systems were created on the FreeNAS using all default parameters.

Storage and Filesystem Notes

The VMs' storage was configured on the ESXi servers and shared from a single 450 GB SAS 10k RPM HDD.

Every two drives were mirrored (across enclosures) then aggregated into a pool. The two filesystems came from this pool.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10 Gbit on storage node | 2 | 2 ports were connected and used for the test
2 | 10 Gbit on load generators | 4 | 2 ports were connected on each ESXi server and split across 8 VMs using an internal private network

Transport Configuration Notes

All the load generator VM clients were connected to an internal SW switch inside each ESXi server. This internal switch was connected to the 10 GbE switch.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Quanta LB6M | 10 GbE Ethernet, ESXi servers to storage node interconnect | 24 | 6 | The VMs were connected to the 10 Gbit switch using a private network on the ESXi

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 2 | Xeon | CPU | Xeon Processor v4 with 20 cores | NFSv3 server
2 | 8 | vCPU | CPU | Dual AMD Opteron v4, each with 4 cores | Load generators

Processing Element Notes

The 2 ESXi servers used dual-socket Xeon and AMD Opteron processors, and the load generator VMs were each configured with 2 cores, without hyperthreading.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
FreeNAS main memory | 256 | 1 | V | 256
NVRAM module | 16 | 1 | NV | 16
Load generator VM memory | 6 | 8 | V | 48
Grand Total Memory Gibibytes: 320

Memory Notes

The FreeNAS storage controller's main memory is used for the operating system and for caching filesystem data.

Stable Storage

The FreeNAS storage controller uses a 241 GiB partition to provide stable storage for writes that have not yet been written to disk.

Solution Under Test Configuration Notes

The system under test consisted of one FreeNAS storage node with 2 10 GbE ports on a dual-port NIC, both connected to a 10 GbE switch. The 8 load-generating clients were connected to the same Quanta Ethernet switch as the FreeNAS storage node.

Other Solution Notes

None

Dataflow

Each load-generating client mounted both file systems from the storage node using NFSv3. The benchmark distributed its processes across the clients round-robin, so that as the load scaled up, each additional process used the next file system in turn. This ensured an even distribution of load over the network and across the 2 file systems configured on the storage node.
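The round-robin distribution described above can be sketched as follows (the client/filesystem indices and the `assign` helper are illustrative, not part of the benchmark tool):

```python
# Round-robin assignment of benchmark processes across the 8 client
# VMs and the 2 file systems, as described in the Dataflow section.
NUM_CLIENTS = 8
NUM_FILESYSTEMS = 2

def assign(num_procs):
    """Map each benchmark process to a (client, filesystem) pair so
    that load spreads evenly as the process count scales up."""
    return [(p % NUM_CLIENTS, p % NUM_FILESYSTEMS) for p in range(num_procs)]

assignments = assign(12)
# With 12 processes, each of the 2 file systems serves an equal share.
per_fs = [sum(1 for _, fs in assignments if fs == i)
          for i in range(NUM_FILESYSTEMS)]
print(per_fs)  # [6, 6]
```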

Other Notes

None.

Other Report Notes

None.


Generated on Wed Dec 16 14:09:30 2020 by SpecReport