SPEC SFS®2014_swbuild Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Huawei SPEC SFS2014_swbuild = 200 Builds
Huawei OceanStor 5500 V5 Overall Response Time = 0.58 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
20 | 0.211 | 10000 | 129
40 | 0.176 | 20000 | 258
60 | 0.206 | 30001 | 388
80 | 0.281 | 40001 | 517
100 | 0.562 | 50002 | 646
120 | 0.525 | 60002 | 776
140 | 0.706 | 70002 | 905
160 | 0.977 | 79999 | 1035
180 | 0.604 | 90003 | 1165
200 | 2.101 | 99994 | 1294
Performance Graph


Product and Test Information

Huawei OceanStor 5500 V5
Tested by | Huawei
Hardware Available | 04/2018
Software Available | 04/2018
Date Tested | 07/2018
License Number | 3175
Licensee Locations | Chengdu, China

Huawei's OceanStor 5500 V5 Storage System is the new generation of mid-range hybrid flash storage, dedicated to providing reliable and efficient data services for enterprises. Its cloud-ready operating system, flash-enabled performance, and intelligent management software deliver top-of-the-line functionality, performance, efficiency, reliability, and ease of use. It satisfies the data storage requirements of large-database OLTP/OLAP, cloud computing, and many other applications, making it a strong choice for sectors such as government, finance, telecommunications, and manufacturing.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 1 | Storage Array | Huawei | OceanStor 5500 V5 System (two active-active controllers) | A single Huawei OceanStor 5500 V5 engine includes 2 controllers, and the OceanStor 5500 V5 provides full redundancy across the 2 controllers. Each controller includes 128 GiB of memory and one 4-port 10GbE Smart I/O module, with all 4 ports used for data (connections to load generators). Each controller also includes 2 2-port onboard SAS ports. The Premium Bundle was included (NFS, CIFS, NDMP, SmartQuota, HyperClone, HyperSnap, HyperReplication, HyperMetro, SmartQoS, SmartPartition, SmartDedupe, SmartCompression); only the NFS protocol license was used in the test.
2 | 24 | Disk drive | Huawei | SSDM-900G2S-02 | 900GB SSD SAS Disk Unit (2.5"); all 24 SSD disks are in the engine.
3 | 4 | 10GbE HBA card | Intel | Intel Corporation 82599ES 10-Gigabit SFI/SFP+ | Used in the clients for data connections to the storage; each client used 2 10GbE cards, and each card has 2 ports.
4 | 2 | Client | Huawei | Huawei FusionServer RH2288 V3 servers | Huawei servers, each with 128 GiB of main memory. 1 was used as the Prime Client; 2 were used to generate the workload, including the Prime Client.

Configuration Diagrams

  1. Huawei OceanStor 5500 V5 Config Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Linux | OS | SUSE Linux Enterprise Server 12 SP3 with kernel 4.4.73-5-default | OS for the 2 clients
2 | OceanStor | Storage OS | V500R007 | Storage Operating System

Hardware Configuration and Tuning - Physical

Client
Parameter Name | Value | Description
None | None | None

Hardware Configuration and Tuning Notes

None

Software Configuration and Tuning - Physical

Clients
Parameter Name | Value | Description
rsize,wsize | 1048576 | NFS mount option for data block size
protocol | tcp | NFS mount option for protocol
nfsvers | 3 | NFS mount option for NFS version
tcp_fin_timeout | 600 | TCP time to wait for final packet before the socket is closed
somaxconn | 65536 | Maximum TCP backlog an application can request
tcp_fin_timeout | 5 | TCP time to wait for final packet before the socket is closed
tcp_slot_table_entries | 256 | Number of simultaneous TCP Remote Procedure Call (RPC) requests
tcp_rmem | 10000000 20000000 40000000 | Receive buffer size: min, default, max
tcp_wmem | 10000000 20000000 40000000 | Send buffer size: min, default, max
netdev_max_backlog | 300000 | Maximum number of packets allowed to queue

Software Configuration and Tuning Notes

The mount command "mount -t nfs -o nfsvers=3 31.31.31.1:/fs_1 /mnt/fs_1" was used in the test. The resulting mount information is: 31.31.31.1:/fs_1 on /mnt/fs_1 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=31.31.31.1,mountvers=3,mountport=2050,mountproto=udp,local_lock=none,addr=31.31.31.1).
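
A minimal sketch of how the client tunables in the table above might be applied before mounting, assuming the short parameter names map onto the standard Linux sysctl keys used below (the report gives only the short names, and lists tcp_fin_timeout twice, as 600 and 5):

    # Sketch only: applying the listed client tunables on SUSE Linux Enterprise Server 12 SP3.
    # The fully qualified sysctl key names are assumptions, not taken from the report.
    sysctl -w net.core.somaxconn=65536
    sysctl -w net.core.netdev_max_backlog=300000
    sysctl -w net.ipv4.tcp_fin_timeout=5
    sysctl -w net.ipv4.tcp_rmem="10000000 20000000 40000000"
    sysctl -w net.ipv4.tcp_wmem="10000000 20000000 40000000"
    sysctl -w sunrpc.tcp_slot_table_entries=256
    # rsize, wsize, proto, and nfsvers are NFS mount options and are passed on the
    # mount command line, as shown above.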

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 900GB SSD drives used for data; 1 x 24-disk RAID5-9 group, including 4 coffer disks | RAID-5 | Yes | 1
2 | 2 64GB 7200 RPM SATA drives used for system data for the engine | RAID-1 | Yes | 2
Number of Filesystems | 8
Total Capacity | 8192 GiB
Filesystem Type | thin

Filesystem Creation Notes

The file system block size was 8KB.

Storage and Filesystem Notes

One engine of the OceanStor 5500 V5 was used in the test, and the engine included two controllers. The engine had 25 disk slots, and 24 SSD disks were installed in the enclosure for the test. All 24 disks were used to create one storage pool with RAID5-9, where RAID5-9 is an 8+1 scheme. Eight filesystems were created in the storage pool, with 4 filesystems on each controller. RAID5-9 protection was applied per stripe, and the stripes were distributed across all 24 drives by a specific algorithm: for example, stripe 1 spanned disk 1 through disk 9, stripe 2 spanned disk 2 through disk 10, and so on for the remaining stripes (see the sketch below).
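
As an illustration only (not taken from the report), the rotated stripe placement described above can be sketched as follows, assuming disks are numbered 1 to 24 and each RAID5-9 stripe uses 9 member disks (8 data + 1 parity) whose starting disk advances by one per stripe:

    # Illustration of the rotated RAID5-9 (8+1) stripe layout described above.
    # Disk numbering and the exact rotation step are assumptions for this sketch.
    for stripe in $(seq 1 24); do
        members=""
        for i in $(seq 0 8); do
            members="$members $(( (stripe - 1 + i) % 24 + 1 ))"
        done
        echo "stripe $stripe -> disks:$members"
    done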

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 10GbE | 8 | For the client-to-storage network, the clients connected to the storage directly; no switch was used. There were 8 10GbE connections in total, communicating with NFSv3 over TCP/IP between the 2 clients and the storage.

Transport Configuration Notes

Each controller used one 10GbE card, and each card included 4 ports, so 4 10GbE ports per controller (8 in total) were used for data transport connectivity to the clients. In total, 8 ports on the 2 clients and 8 ports on the 2 storage controllers were used, and the clients were connected to the storage directly. The 2 controllers were interconnected over PCIe as an HA pair.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | None | None | None | None | None

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 2 | CPU | Storage Controller | Intel(R) Xeon(R) Gold 4109T @ 2.0GHz, 8 cores | NFS, TCP/IP, RAID and Storage Controller functions
2 | 4 | CPU | Client | Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz | NFS Client, SUSE Linux Enterprise Server 12 SP3

Processing Element Notes

Each OceanStor 5500 V5 Storage Controller contains 1 Intel(R) Xeon(R) Gold 4109T @ 2.0GHz processor. Each client contains 2 Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz processors.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Main Memory for each OceanStor 5500 V5 Storage Controller | 128 | 2 | V | 256
Memory for each client | 128 | 2 | V | 256
Grand Total Memory Gibibytes | 512

Memory Notes

Main memory in each storage controller was used for the operating system and caching filesystem data including the read and write cache.

Stable Storage

1. There are three ways to protect data. For a disk failure, the OceanStor 5500 V5 uses RAID to protect data. For a controller failure, the OceanStor 5500 V5 uses a cache mirror, so data is also written to the other controller's cache. For a power failure, BBUs supply power so the storage can flush the cache data to disks.
2. No persistent memory was used in the storage. The BBUs could supply power for failure recovery, and the 128 GiB of memory in each controller included the mirror cache; data was mirrored between the two controllers.
3. The write cache was less than 64GB, so the 64GB SATA drives could hold the user write data.

Solution Under Test Configuration Notes

None

Other Solution Notes

None

Dataflow

Please reference the configuration diagram. 2 clients were used to generate the workload, and 1 of them also acted as the Prime Client to control the other client. Each client had 4 ports, two of which connected to each controller. In total there were 8 ports and 8 filesystems, and each port mounted one filesystem (see the sketch below).
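
A minimal sketch of one possible port-to-filesystem mount layout consistent with the description above; only 31.31.31.1:/fs_1 and /mnt/fs_1 appear in the report, so the other interface addresses and filesystem names are assumptions:

    # Sketch only: one filesystem mounted per 10GbE port on each client.
    # Client 1 (4 x 10GbE ports):
    mount -t nfs -o nfsvers=3 31.31.31.1:/fs_1 /mnt/fs_1
    mount -t nfs -o nfsvers=3 31.31.31.2:/fs_2 /mnt/fs_2   # assumed address and export name
    mount -t nfs -o nfsvers=3 31.31.31.3:/fs_3 /mnt/fs_3   # assumed address and export name
    mount -t nfs -o nfsvers=3 31.31.31.4:/fs_4 /mnt/fs_4   # assumed address and export name
    # Client 2 mounts fs_5 through fs_8 the same way over its own 4 ports.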

Other Notes

There were no Spectre/Meltdown patches applied to any component in the Solution Under Test.

Other Report Notes

None


Generated on Wed Mar 13 17:00:03 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation