SPEC SFS(R)2014_swbuild Result

Huawei : Huawei OceanStor 6800F V5
SPEC SFS2014_swbuild = 1000 Builds (Overall Response Time = 0.59 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency       Builds        Builds
   (Builds)      (msec)        Ops/Sec       MB/Sec
------------  ------------  ------------  ------------
        100           0.2         50002           646
        200           0.1        100004          1293
        300           0.2        150006          1941
        400           0.6        200009          2586
        500           0.4        250008          3235
        600           0.6        300012          3881
        700           0.4        350015          4528
        800           0.7        400015          5175
        900           1.3        450011          5822
       1000           2.1        500009          6469

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                   Huawei OceanStor 6800F V5                    |
+---------------------------------------------------------------+
Tested by                Huawei
Hardware Available       04/2018
Software Available       04/2018
Date Tested              04/2018
License Number           3175
Licensee Locations       Chengdu, China

Huawei's OceanStor 6800F V5 Storage System is the new generation of
mission-critical all-flash storage, dedicated to providing the highest level
of data services for enterprises' mission-critical business. Flexible
scalability, flash-enabled performance, and a hybrid-cloud-ready architecture
provide optimal data services for enterprises, along with simple and agile
management.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty Type       Vendor   Model/Name     Description
---- ---- ---------- -------- -------------- -----------------------------------
   1    1 Storage    Huawei   OceanStor      A single Huawei OceanStor 6800F V5
          Array               6800F V5 All   engine includes 4 controllers, and
                              Flash System   the system is fully redundant
                              (Four Active-  across all 4 controllers. Each
                              Active         controller includes 1TB of memory
                              Controllers)   and 2 4-port 10GbE Smart I/O
                                             modules; 8 ports per controller
                                             were used for data (connections to
                                             the load generators). The engine
                                             includes 2 12-port SAS I/O
                                             modules; 4 ports on each SAS I/O
                                             module were used to connect to the
                                             disk enclosures. The Premium
                                             Bundle license is included (NFS,
                                             CIFS, NDMP, SmartQuota, HyperClone,
                                             HyperSnap, HyperReplication,
                                             HyperMetro, SmartQoS,
                                             SmartPartition, SmartDedupe,
                                             SmartCompression); only the NFS
                                             protocol license was used in the
                                             test.
   2   60 Disk drive Huawei   SSDM-900G2S-02 900GB SSD SAS Disk Unit (2.5");
                                             each disk enclosure used 15 SSD
                                             disks in the test.
   3    4 Disk       Huawei   2U SAS disk    2U, AC\240HVDC, 2.5", Expanding
          Enclosure           enclosure      Module, 25 disk slots. The disks
                                             were in the disk enclosures, and
                                             the enclosures were connected
                                             directly to the storage
                                             controllers.
   4   16 10GbE HBA  Intel    Intel          Used in the clients for data
          card                Corporation    connections to the storage; each
                              82599ES        client used 2 10GbE cards, and
                              10-Gigabit     each card has 2 ports.
                              SFI/SFP+
   5    8 Client     Huawei   Huawei         Huawei server, each with 128GB of
                              FusionServer   main memory. 1 was used as the
                              RH2288 V3      Prime Client; all 8 servers,
                                             including the Prime Client, were
                                             used to generate the workload.
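
The Overall Response Time quoted at the top of this report can be
sanity-checked against the Performance table above. The short Python sketch
below is illustrative only: it assumes the usual definition of ORT as the
area under the latency-versus-load curve, taken from zero load to the peak
load with the trapezoidal rule and divided by the peak load. Because the
table values are rounded to one decimal place, the result only approximates
the published 0.59 msec.

    # Approximate recomputation of the Overall Response Time (ORT) from the
    # rounded values in the Performance table. Assumption: ORT is the area
    # under the latency-versus-load curve (trapezoidal rule, with an implied
    # origin at zero load / zero latency) divided by the peak load. The
    # published 0.59 msec is derived from unrounded data, so this differs
    # slightly.

    loads   = [0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]   # Builds
    latency = [0.0, 0.2, 0.1, 0.2, 0.6, 0.4, 0.6, 0.4, 0.7, 1.3, 2.1]  # msec

    area = sum((latency[i] + latency[i + 1]) / 2.0 * (loads[i + 1] - loads[i])
               for i in range(len(loads) - 1))

    ort = area / loads[-1]
    print("Approximate ORT from rounded table data: %.2f msec" % ort)  # ~0.56
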
Configuration Diagrams
======================

1) sfs2014-20180421-00039.config1.jpg (see SPEC SFS2014 results webpage)

Component Software
==================

Item                           Name and
 No  Component    Type         Version      Description
---- ------------ ------------ ------------ -----------------------------------
   1 Linux        OS           Suse12 SP3   OS for the 8 clients: SUSE Linux
                               for x86_64   Enterprise Server 12 SP3 (x86_64)
                                            with kernel 4.4.73-5-default.
   2 OceanStor    Storage OS   V500R007     Storage Operating System

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Client                                 |
+----------------------------------------------------------------------+
Parameter Name  Value           Description
--------------- --------------- ----------------------------------------
MTU             9000            Jumbo Frames configured for 10Gb ports

Hardware Configuration and Tuning Notes
---------------------------------------

The clients' 10Gb ports were used for the connections to the storage
controllers. Only the clients were configured for Jumbo frames; the storage
used the default MTU (1500).

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Clients                                |
+----------------------------------------------------------------------+
Parameter Name  Value           Description
--------------- --------------- ----------------------------------------
rsize,wsize     1048576         NFS mount options for data block size
protocol        tcp             NFS mount option for the protocol
tcp_fin_timeout 600             TCP time to wait for the final packet
                                before the socket is closed
nfsvers         3               NFS mount option for the NFS version

Software Configuration and Tuning Notes
---------------------------------------

The mount command used in the test was
"mount -t nfs -o nfsvers=3 11.11.11.1:/fs_1 /mnt/fs_1". The resulting mount
information is:

11.11.11.1:/fs_1 on /mnt/fs_1 type nfs (rw,relatime,vers=3,rsize=1048576,
wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,
mountaddr=11.11.11.1,mountvers=3,mountport=2050,mountproto=udp,
local_lock=none,addr=11.11.11.1)

Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                          Stable
 No  Description                           Data Protection    Storage   Qty
---- ------------------------------------- ------------------ -------- -----
   1 900GB SSD drives used for data; 4 x   RAID-5             Yes         60
     15-drive RAID5-9 groups, including
     the 4 coffer disks
   2 1 x 800GiB NVMe drive used for system RAID-1             Yes          4
     data in each controller; 7GiB on each
     of the 4 coffer disks and the 800GiB
     NVMe drive form a RAID-1 group

Number of Filesystems    32
Total Capacity           28800GiB
Filesystem Type          thin

Filesystem Creation Notes
-------------------------

The file system block size was 8KB.

Storage and Filesystem Notes
----------------------------

One engine of the OceanStor 6800F V5 was used in the test; one engine
includes four controllers. 4 disk enclosures were connected to the engine,
and each disk enclosure held 15 900GB SSD disks. The 15 disks in each disk
enclosure formed one storage pool, and 8 filesystems were created in each
storage pool; of the 8 filesystems in one storage pool, each controller
owned 2. RAID5-9 was 8+1. RAID5-9 protection was applied per stripe, and the
stripes were distributed across all 15 drives by a rotation algorithm: for
example, stripe 1 spanned disk 1 through disk 9, stripe 2 spanned disk 2
through disk 10, and so on for all stripes.
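
The rotation described above can be pictured with the short Python sketch
below. It is illustrative only: the drive-selection algorithm actually used
by OceanStor is proprietary and not disclosed in this report; the sketch
simply starts each 8+1 (RAID5-9) stripe one drive further along the 15-drive
pool, matching the example in the notes.

    # Illustrative rotation of 8+1 (RAID5-9) stripes across a 15-drive pool,
    # as described in the Storage and Filesystem Notes. The real OceanStor
    # placement algorithm is proprietary; this only mirrors the rotation idea.

    DRIVES_PER_POOL = 15
    STRIPE_WIDTH = 9          # 8 data strips + 1 parity strip per stripe

    def stripe_drives(stripe_index):
        """Return the 1-based drive numbers holding the given stripe."""
        start = stripe_index % DRIVES_PER_POOL
        return [(start + k) % DRIVES_PER_POOL + 1 for k in range(STRIPE_WIDTH)]

    for s in range(3):
        print("stripe %d: drives %s" % (s + 1, stripe_drives(s)))
    # stripe 1: drives [1, 2, 3, 4, 5, 6, 7, 8, 9]
    # stripe 2: drives [2, 3, 4, 5, 6, 7, 8, 9, 10]
    # stripe 3: drives [3, 4, 5, 6, 7, 8, 9, 10, 11]
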
Transport Configuration - Physical
==================================

Item                 Number of
 No  Transport Type  Ports Used Notes
---- --------------- ---------- -----------------------------------------------
   1 10GbE               32     For the client-to-storage network, the clients
                                connected to the storage directly; no switch
                                was used. There were 32 10GbE connections in
                                total, communicating with NFSv3 over TCP/IP to
                                the 8 clients.

Transport Configuration Notes
-----------------------------

Each controller used 2 10GbE cards, and each 10GbE card has 4 ports; in
total, 8 10GbE ports per controller were used for data transport connectivity
to the clients. Altogether 32 ports on the 8 clients and 32 ports on the 4
storage controllers were used, and the clients connected to the storage
directly. The 4 controllers are interconnected over PCIe to form HA pairs.

Switches - Physical
===================

                                          Total  Used
Item                                      Port   Port
 No  Switch Name          Switch Type     Count  Count Notes
---- -------------------- --------------- ------ ----- ------------------------
   1 None                 None            None   None  None

Processing Elements - Physical
==============================

Item
 No   Qty Type     Location       Description               Processing Function
---- ---- -------- -------------- ------------------------- -------------------
   1    8 CPU      Storage        Intel(R) Xeon(R) Gold     NFS, TCP/IP, RAID
                   Controller     5120T @ 2.20GHz, 14 cores and Storage
                                                            Controller functions
   2   16 CPU      Client         Intel(R) Xeon(R) CPU      NFS Client, Suse
                                  E5-2670 v3 @ 2.30GHz      Linux OS

Processing Element Notes
------------------------

Each OceanStor 6800F V5 Storage Controller contains 2 Intel(R) Xeon(R) Gold
5120T @ 2.20GHz processors. Each client contains 2 Intel(R) Xeon(R) CPU
E5-2670 v3 @ 2.30GHz processors.

Memory - Physical
=================

                          Size in    Number of
Description               GiB        Instances  Nonvolatile  Total GiB
------------------------- ---------- ---------- ------------ ------------
Main Memory for each      1024       4          V            4096
OceanStor 6800F V5
Storage Controller
Memory for each client    128        8          V            1024

Grand Total Memory Gibibytes                                  5120

Memory Notes
------------

Main memory in each storage controller was used for the operating system and
for caching filesystem data, including the read and write cache.

Stable Storage
==============

1. There are three ways to protect data. For a disk failure, the OceanStor
   6800F V5 uses RAID to protect data. For a controller failure, the
   OceanStor 6800F V5 uses cache mirroring, in which data is also written to
   another controller's cache. For a power failure, BBUs supply power so
   that the storage can flush the cached data to disks.

2. No persistent memory was used in the storage. The BBUs can supply power
   during failure recovery, and the 1 TiB of memory in each controller
   includes the mirror cache; data is mirrored between controllers.

3. The write cache was smaller than 800GiB, so the 800GiB NVMe drive can
   hold all of the user write data.

Solution Under Test Configuration Notes
=======================================

None

Other Solution Notes
====================

None

Dataflow
========

Please reference the configuration diagram. 8 clients were used to generate
the workload; 1 of them also acted as the Prime Client to control the other
7 clients. Each client had 4 ports, and each port connected to one
controller; each port mounted one filesystem from that controller. In total,
each client mounted 4 filesystems.

Other Notes
===========

There were two SAS I/O modules in the engine, and 2 controllers shared 1 SAS
I/O module. Each disk enclosure had 2 connections to the engine, one to each
SAS I/O module.
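
To make the Dataflow description above concrete, the Python sketch below
enumerates one possible client-to-filesystem mount layout that satisfies the
stated constraints (8 clients x 4 ports, one port per controller, one
filesystem mounted per port, 32 filesystems in total). The client,
controller, and filesystem names are hypothetical; the report does not
disclose the actual naming or assignment.

    # Hypothetical enumeration of the mount layout described in the Dataflow
    # section: 8 clients, 4 ports per client, each port tied to one of the 4
    # controllers, and each port mounting one filesystem owned by that
    # controller (32 filesystems in total, 8 per controller). All names are
    # illustrative only.

    NUM_CLIENTS = 8
    NUM_CONTROLLERS = 4

    mounts = []
    for client in range(1, NUM_CLIENTS + 1):
        for port in range(1, NUM_CONTROLLERS + 1):
            controller = port                   # port N of a client -> controller N
            fs_id = (controller - 1) * NUM_CLIENTS + client   # one of the controller's 8 filesystems
            mounts.append(("client%d" % client, "port%d" % port,
                           "controller%d" % controller, "/mnt/fs_%d" % fs_id))

    print(len(mounts), "mounts in total")   # 32 mounts in total
    print(mounts[0])                        # ('client1', 'port1', 'controller1', '/mnt/fs_1')
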
Other Report Notes
==================

None

===============================================================================

Generated on Wed Mar 13 16:39:02 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation