SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

WekaIO SPEC SFS2014_swbuild = 5700 Builds
WekaIO Matrix 3.1.8.5 with Supermicro BigTwin Servers Overall Response Time = 0.26 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
570  | 0.213 | 285012  | 3687
1140 | 0.197 | 570024  | 7374
1710 | 0.181 | 855036  | 11063
2280 | 0.178 | 1140048 | 14750
2850 | 0.215 | 1425038 | 18436
3420 | 0.213 | 1710074 | 22125
3990 | 0.252 | 1995086 | 25811
4560 | 0.280 | 2280098 | 29497
5130 | 0.375 | 2565077 | 33187
5700 | 0.629 | 2849898 | 36872
Performance Graph: average latency vs. business metric (figure not included in this text rendering)
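
The published Overall Response Time (0.26 msec) summarizes the latency curve above. As a rough consistency check only, the Python sketch below approximates it with the trapezoidal rule over the ten reported load points; this is an assumed reading of the metric, not SPEC's official calculation, though it does land on the published value for this data set.

    # Back-of-the-envelope check of the published Overall Response Time (0.26 msec).
    # Assumption: ORT is approximated by the trapezoidal-rule average of the
    # latency curve over the reported load points; this is NOT SPEC's official tool.
    builds  = [570, 1140, 1710, 2280, 2850, 3420, 3990, 4560, 5130, 5700]
    latency = [0.213, 0.197, 0.181, 0.178, 0.215, 0.213, 0.252, 0.280, 0.375, 0.629]

    # Area under the latency curve between the first and last load points.
    area = sum((latency[i] + latency[i + 1]) / 2 * (builds[i + 1] - builds[i])
               for i in range(len(builds) - 1))

    ort = area / (builds[-1] - builds[0])  # average latency over the measured range
    print(f"Approximate overall response time: {ort:.2f} msec")  # -> 0.26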


Product and Test Information

WekaIO Matrix 3.1.8.5 with Supermicro BigTwin Servers
Tested by: WekaIO
Hardware Available: July 2017
Software Available: November 2018
Date Tested: December 2018
License Number: 4553
Licensee Locations: San Jose, California

WekaIO Matrix is a flash-native, parallel, distributed, scale-out file system designed to solve the challenges of the most demanding workloads, including AI and machine learning, genomic sequencing, real-time analytics, media rendering, EDA, software development, and technical computing. Matrix software is a POSIX-compliant parallel file system that delivers industry-leading performance and scale at a fraction of the price of traditional storage products. The software can support billions of files and scales to hundreds of petabytes in a single namespace. Matrix can be deployed on commodity servers as a dedicated storage appliance or in a hyperconverged mode with zero additional storage footprint. The same software runs on-premises and in the public cloud. WekaIO Matrix is a software-only solution that runs on any standard x86 hardware infrastructure, delivering large savings compared to proprietary all-flash appliances. This test platform was deployed as a dedicated storage implementation on Supermicro BigTwin servers.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Parallel File System | WekaIO | Matrix Software V3.1.8.5 | WekaIO Matrix is a parallel and distributed POSIX file system that scales across compute nodes and distributes data and metadata evenly across the nodes for parallel access.
2 | 6 | Storage Server Chassis | Supermicro | SYS-2029BT-HNR | Supermicro BigTwin chassis, each with 4 nodes per 2U chassis, populated with 6 NVMe drives per node. A total of 23 nodes were used in the testing.
3 | 138 | 3.84TB U.2 NVMe SSD | Micron | MTFDHAL3T8TCT1AR | Micron 9200 Pro U.2 NVMe enterprise-class drives.
4 | 46 | Processor | Intel | SR3B3 | Intel Xeon Gold 6126 12-core 2.6GHz processor.
5 | 23 | Network Interface Card | Mellanox | MCX456A-ECAT | 100Gbit ConnectX-4 Ethernet dual-port PCI-E adapters, one per node.
6 | 276 | DIMM | Supermicro | DIMM 16GB 2667MHz 2Rx8 ECC | System memory, DDR4 2667MHz ECC.
7 | 23 | Boot Drive | Micron | MTFDDAV240TCB1AR | Micron 5100 Pro SATA M.2, 240GB.
8 | 23 | Network Interface Card | Supermicro | AOC-MHIBE-M1CGM-O | SIOM single-port InfiniBand EDR QSFP28 VPI running in Ethernet mode.
9 | 23 | BIOS Module | Supermicro | SFT-OOB-LIC | Out-of-band firmware management BIOS flash.
10 | 10 | Switch | Mellanox | MSN2700-CS2FC | 32-port 100GbE switch.
11 | 5 | Clients | Supermicro | SYS-2029BT-HNR | Clients are built-to-order from Supermicro. The base build is a BigTwin SYS-2029BT-HNR 2U/4-node chassis with X11DPT-B motherboards. Each node in the SYS-2029BT-HNR represents one client. The built-to-order components in each client include 2 Intel(R) Xeon(R) Gold 6126 12-core CPUs, 24 DDR4-2666 16GB ECC RDIMMs, 1 100GbE connection to the switch fabric via 1 Mellanox ConnectX-4 PCIe Ethernet adapter, and 1 AOC-MHIBE-M1CGM-O SIOM single-port InfiniBand EDR QSFP28 VPI adapter that is not used/connected. Of the 20 clients, 1 was used as the prime and 19 were used to generate the workload.

Configuration Diagrams

  1. Solution Under Test

Component Software

Item No | Component | Type | Name and Version | Description
1 | Storage Node | MatrixFS File System | 3.1.8.5 | WekaIO Matrix is a distributed and parallel POSIX file system that runs on any NVMe, SAS, or SATA enabled commodity server or cloud compute instance and forms a single storage cluster. The file system presents a POSIX-compliant, high-performance, scalable global namespace to the applications.
2 | Storage Node | Operating System | CentOS 7.4 | The operating system on each storage node was 64-bit CentOS 7.4.
3 | Client | Operating System | CentOS 7.4 | The operating system on each load generator client was 64-bit CentOS 7.4.
4 | Client | MatrixFS Client | 3.1.8.5 | MatrixFS client software is installed on the load generator clients and presents a POSIX-compliant file system.

Hardware Configuration and Tuning - Physical

Storage Node
Parameter Name | Value | Description
SR-IOV | Enabled | Enables CPU virtualization technology.
HyperThreading | Disabled | Disables simultaneous multithreading on the storage nodes.

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

Storage Node
Parameter Name | Value | Description
Jumbo Frames | 4190 | Enables Ethernet frames of up to 4190 bytes.

Client
Parameter Name | Value | Description
WriteAmplificationOptimizationLevel | 0 | WekaIO MatrixFS install setting: write amplification optimization level.
MAX_OPEN_FILES | 66M | WekaIO MatrixFS client install-time setting: maximum number of open files.
nofile | 500000 | Client-side Linux /etc/security/limits.conf nofile setting.
MTU | 4190 | Client OS NIC setting: MTU.

Software Configuration and Tuning Notes

The MTU was set to 4190; this setting is required and valid for all environments and workloads.
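
For reproducibility, the sketch below shows one way to confirm on a Linux client that the nofile and MTU settings above are in effect. The interface name "eth0" is a placeholder; substitute the client's 100GbE interface.

    # Quick client-side verification that the tunings described above took effect.
    import resource

    # Open-file limit, as configured via /etc/security/limits.conf.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"nofile limits: soft={soft}, hard={hard}")  # expect 500000

    # MTU of the 100GbE interface ("eth0" is a placeholder name).
    with open("/sys/class/net/eth0/mtu") as f:
        print(f"MTU: {f.read().strip()}")              # expect 4190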

Service SLA Notes

Not applicable.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 3.84TB U.2 Micron 9200 Pro NVMe SSD in the Supermicro BigTwin chassis | 16+2 | Yes | 138
2 | 240GB M.2 Micron 5100 SATA SSD in the Supermicro BigTwin, used to store and boot the OS | None | Yes | 23

Number of Filesystems: 1
Total Capacity: 342.73 TiB
Filesystem Type: MatrixFS

Filesystem Creation Notes

A single WekaIO Matrix file system was created and distributed evenly across all 138 NVMe drives in the cluster (23 storage nodes x 6 drives per node). Data was protected at a 16+2 failure level. The file system reserves an additional 20% of capacity as overprovisioning to maintain performance quality of service at the high-water mark.
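
As an arithmetic cross-check (assumptions stated in comments, not vendor tooling), the published 342.73 TiB of usable capacity is consistent with 138 x 3.84 TB raw drives after the 16+2 protection overhead and the 20% overprovisioning reserve described above:

    # Arithmetic check of the published usable capacity (342.73 TiB), assuming
    # raw capacity reduced by 16+2 protection overhead and the 20% reserve.
    TB  = 10**12   # drive vendors quote decimal terabytes
    TiB = 2**40    # the report quotes binary tebibytes

    raw_bytes = 138 * 3.84 * TB       # 138 NVMe drives of 3.84 TB each
    protected = raw_bytes * 16 / 18   # 16 data + 2 protection stripes
    usable    = protected * 0.80      # 20% reserved for QoS at high water mark

    print(f"Usable capacity: {usable / TiB:.2f} TiB")  # -> 342.73 TiB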

Storage and Filesystem Notes

WekaIO MatrixFS was created and distributed evenly across all 23 storage nodes in the cluster. The deployment model is a dedicated storage server cluster protected with the Matrix Distributed Data Coding scheme of 16+2. All data and metadata are distributed evenly across the 23 storage nodes.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE NIC | 46 | The solution used a total of 46 100GbE ports from the storage nodes to the network switches.
2 | 100GbE NIC | 19 | The solution used a total of 19 100GbE ports from the clients to the network switches.
3 | 100GbE NIC | 1 | The solution used 1 100GbE port from the prime to the network switch.

Transport Configuration Notes

The solution under test had a total of 320 100GbE ports across the 10 Mellanox MSN 2700 switches. The switches were configured in a leaf-spine topology with 4 spines and 6 leaves. Each spine switch used 24 ports for connections to the leaf switches. At the leaf switches, the storage nodes consumed a total of 46 100GbE ports, while the clients and prime used 20 100GbE ports. The leaf-to-spine connections consumed a total of 96 100GbE ports. Combined, the storage nodes, clients, prime, and leaf-to-spine links used a total of 162 100GbE ports.
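
The port accounting above can be sanity-checked with a few lines of arithmetic. The sketch below uses only figures stated in this report, including its convention of counting 96 ports for the leaf-to-spine links:

    # Sanity check of the switch-port accounting reported above.
    total_ports   = 10 * 32   # 10 Mellanox MSN2700 switches, 32 ports each
    storage_ports = 23 * 2    # 23 storage nodes, dual-port 100GbE NICs
    client_ports  = 19 + 1    # 19 load generators + 1 prime, one port each
    isl_ports     = 4 * 24    # report counts 96 ports for leaf-to-spine links

    used = storage_ports + client_ports + isl_ports
    print(total_ports, used)  # -> 320 162, matching the switch table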

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Qty. 10, Mellanox MSN 2700 | 100Gb Ethernet | 320 | 162 | Switches have Jumbo Frames enabled with MTU set to 4190.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 46 | CPU | SYS-2029BT-HNR | Intel(R) Xeon(R) Gold 6126, 12 cores, 2.6GHz | WekaIO MatrixFS, data protection, device driver
2 | 38 | CPU | SYS-2029BT-HNR | Intel(R) Xeon(R) Gold 6126, 12 cores, 2.6GHz | WekaIO MatrixFS client
3 | 2 | CPU | SYS-2029BT-HNR | Intel(R) Xeon(R) Gold 6126, 12 cores, 2.6GHz | SPEC SFS2014 prime

Processing Element Notes

Each storage node has 2 processors; each processor has 12 cores at 2.6GHz. Each client has 2 processors; each processor has 12 cores. WekaIO Matrix used 3 of the 24 available cores on each client to run Matrix functions.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Storage node memory | 192 | 23 | V | 4416
Client memory | 384 | 19 | V | 7296
Prime memory | 384 | 1 | V | 384
Grand Total Memory (GiB): 12096

(V = volatile)

Memory Notes

Each storage node has 192 GiB of memory, for a total of 4,416 GiB. Each client has 384 GiB of memory, for a total of 7,296 GiB. The prime has 384 GiB of memory.
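
A quick arithmetic check of the totals in the memory table, using only the figures above:

    # Verify the memory totals reported in the table above (all values in GiB).
    storage = 192 * 23   # per storage node x node count -> 4416
    clients = 384 * 19   # per client x client count     -> 7296
    prime   = 384 * 1    # the prime                     -> 384
    print(storage, clients, prime, storage + clients + prime)  # -> 4416 7296 384 12096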

Stable Storage

WekaIO does not use any internal memory to temporarily cache write data before committing it to the underlying storage system. All writes are committed directly to the storage media, so no RAM battery protection is required. Data is protected on the storage media using WekaIO Matrix Distributed Data Protection (16+2). In the event of a power failure, a write in transit would not be acknowledged.

Solution Under Test Configuration Notes

The solution under test was a standard WekaIO Matrix enabled cluster in dedicated server mode. The solution handles large-file I/O as well as small-file random I/O and metadata-intensive applications; no specialized tuning is required for different or mixed workloads. None of the components used to perform the test were patched for Spectre or Meltdown (CVE-2017-5754, CVE-2017-5753, CVE-2017-5715).

Other Solution Notes

None.

Dataflow

5 x SYS-2029BT-HNR chassis (19 load-generating clients) were used to generate the benchmark workload. Each client had 1 x 100GbE network connection to a Mellanox MSN 2700 switch. 6 x Supermicro BigTwin SYS-2029BT-HNR storage chassis (23 storage nodes) were benchmarked. Each storage node had 2 x 100GbE network connections to a Mellanox MSN 2700 switch. The clients had the MatrixFS native NVMe POSIX client mounted and had direct, parallel access to all 23 storage nodes.

Other Notes

None

Other Report Notes

None


Generated on Wed Mar 13 16:18:51 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation