SPECstorage(TM) Solution 2020_eda_blended Result

Sangfor Technologies Inc. : Sangfor Unified Storage F8000 with 2 nodes
SPECstorage Solution 2020_eda_blended = 2500 Job_Sets
     (Overall Response Time = 0.47 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Job_Sets     Job_Sets
  (Job_Sets)     (msec)       Ops/Sec      MB/Sec
 ------------ ------------ ------------ ------------
          250          0.1       112506         1814
          500          0.2       225012         3631
          750          0.2       337519         5446
         1000          0.2       450025         7261
         1250          0.3       562531         9076
         1500          0.3       675037        10892
         1750          0.4       787544        12706
         2000          0.5       900045        14523
         2250          0.9      1012554        16339
         2500          2.5      1125050        18151

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|          Sangfor Unified Storage F8000 with 2 nodes            |
+---------------------------------------------------------------+
Tested by                Sangfor Technologies Inc.
Hardware Available       October 2025
Software Available       October 2025
Date Tested              September 2025
License Number           7066
Licensee Locations       Shenzhen, Guangdong Province, China

This product provides unified hosting for all types of services, enabling
global business hosting with worry-free architecture evolution. It features
unified management of both hot and cold data, eliminating the need to
differentiate between fast and slow media and thus ensuring a consistent
business experience. It also offers unified storage for data of any size,
allowing flexible on-demand expansion while optimizing storage costs. The
solution is designed to simplify data processing and management, enhance
operational efficiency, and help businesses adapt to changing market demands.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty  Type         Vendor       Model/Name    Description
---- ---- ------------ ------------ ------------- --------------------------------
  1    1  Storage      Sangfor      Sangfor       The storage configuration
          System                    Unified       consisted of 1 Sangfor Unified
                                    Storage       Storage F8000 HA pair (2
                                    F8000         controller nodes in total). A
                                                  Sangfor Unified Storage F8000
                                                  HA pair can accommodate up to
                                                  24 NVMe SSDs.
  2    6  Network      NVIDIA       ConnectX-7    2-port 200Gb/s InfiniBand
          Interface                 200GbE/       network card; 1 card per
          Card                      NDR 200       controller and 1 card per
                                                  client.
  3    2  Network      Mellanox     CX4121A       2-port 25GbE network card; 1
          Interface                               card per controller.
          Card
  4    1  Switch       Mellanox     QM8790        Used for data connections
                                                  between the clients and the
                                                  storage system.
  5    1  Switch       Mellanox     MSN2010-CB2F  Used for cluster connections
                                                  between the controllers.
  6    2  Client       Huaqin                     Each client contains 2 Intel(R)
                                                  Xeon(R) Gold 5418Y CPUs @
                                                  2.00GHz with 24 cores each, 8
                                                  DDR5 32GB DIMMs, and a 480GB
                                                  SATA 3.2 SSD (Device Model:
                                                  SAMSUNG MZ7LH480HAHQ-00005).
                                                  Both clients are used to
                                                  generate the workload; 1 is
                                                  also used as the Prime Client.
  7    1  Client       SANGFOR                    The client contains 2 Intel(R)
                                                  Xeon(R) Gold 5318Y CPUs @
                                                  2.10GHz with 24 cores each, 8
                                                  DDR4 32GB DIMMs, and a 480GB
                                                  SATA 3.2 SSD (Device Model:
                                                  SAMSUNG MZ7L3480HCHQ-00B7C).
                                                  The client is used to generate
                                                  the workload.
  8    1  Client       Supermicro                 The client contains 1 Intel(R)
                                                  Xeon(R) Gold 5512U CPU @
                                                  2.10GHz with 28 cores, 8 DDR5
                                                  32GB DIMMs, and a 1TB NVMe SSD
                                                  (Device Model: SAMSUNG
                                                  MZ1L2960HCJR-00A07). The client
                                                  is used to generate the
                                                  workload.
  9   24  Solid-State  Union        UH812a        TLC NVMe SSD for data storage;
          Drive        Memory       3.84TB NVMe   24 SSDs in total.
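As an illustrative cross-check only (not part of the reported SPEC data), the
small sketch below relates the peak measured throughput in the performance
table to the aggregate bandwidth of the storage system's four 200Gb/s
InfiniBand data ports listed in the bill of materials. It ignores protocol
overhead and the MB-versus-MiB distinction.

  # Illustrative arithmetic only, not vendor or SPEC tooling.
  awk 'BEGIN {
      peak_mb_s = 18151                    # MB/Sec at 2500 Job_Sets
      link_mb_s = 4 * 200 * 1000 / 8       # 4 x 200Gb/s IB ports ~= 100000 MB/s
      printf "peak throughput is ~%.0f%% of raw storage-side link bandwidth\n", 100 * peak_mb_s / link_mb_s
  }'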
Configuration Diagrams
======================

 1) storage2020-20250929-00140.config1.jpg (see SPECstorage Solution 2020
    results webpage)

Component Software
==================

Item                                 Name and
 No   Component        Type          Version         Description
---- ---------------- ------------- --------------- ---------------------------
  1  Linux             Operating     CentOS 8.5      Operating system (OS) for
                       System        (kernel         the 4 clients.
                                     4.18.0-348.el8
                                     .x86_64)
  2  Sangfor Unified   Storage       Sangfor EDS     Storage system.
     Storage F8000     System        v5.3.0

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Storage                                |
+----------------------------------------------------------------------+
Parameter Name    Value    Description
---------------- -------- ----------------------------------------
NA               NA       NA

Hardware Configuration and Tuning Notes
---------------------------------------
None

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                                Clients                                |
+----------------------------------------------------------------------+
Parameter Name    Value    Description
---------------- -------- ----------------------------------------
protocol         rdma     NFS mount option for the transport protocol
nfsvers          3        NFS mount option for the NFS version
port             20049    NFS mount option for the NFS-over-RDMA connection
                          port

Software Configuration and Tuning Notes
---------------------------------------
The Sangfor Unified Storage F8000 provides a unified file system. Once the
environment is deployed, the file system is created automatically, and no
additional commands are required to create it. The command for mounting a
client directory is as follows:
mount -t nfs -o vers=3,rdma,port=20049 {server_ip}:/{share_name} {mount_point}
We created 24 NFS share directories. Because the clients differ in CPU
processing capability, the number of mount points per client varies: clients
equipped with two Intel Xeon Gold 5418Y processors use 15 mount points each,
while the other clients use 9 mount points each, as specified in the
configuration file.
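For illustration, the minimal sketch below shows how a client might create and
mount a set of the NFS share directories with the options documented above
(vers=3, rdma, port=20049). The server address, share names, and mount-point
paths are placeholders rather than the values used in the tested
configuration; the mount count would be 15 on the Gold 5418Y clients and 9 on
the other clients.

  # Minimal mount sketch; server address, share names, and paths are hypothetical.
  SERVER_IP=192.0.2.10          # placeholder storage data-port address
  MOUNT_COUNT=15                # 15 on the Gold 5418Y clients, 9 on the others

  for i in $(seq 1 "$MOUNT_COUNT"); do
      mkdir -p "/mnt/eda/share${i}"
      mount -t nfs -o vers=3,rdma,port=20049 \
          "${SERVER_IP}:/share${i}" "/mnt/eda/share${i}"
  done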
Service SLA Notes
-----------------
None

Storage and Filesystems
=======================

Item                                                             Stable
 No   Description                           Data Protection      Storage   Qty
---- ------------------------------------- ------------------- --------- -----
  1  The 3.84TB NVMe drives are used as     EC 16+2             Yes         24
     data drives to form a storage pool.
  2  1TB NVMe SSD, 1 per controller; used   none                Yes          2
     as boot media.

Number of Filesystems    1
Total Capacity           83.84 TiB
Filesystem Type          Sangfor Phoenix

Filesystem Creation Notes
-------------------------
All data disks of all nodes were used when creating the file system.

Storage and Filesystem Notes
----------------------------
The storage configuration includes 1 Sangfor Unified Storage F8000 HA pair (2
controller nodes in total). In the text below, the terms controller and node
refer to these controller nodes.

The storage system uses a full-stack design that combines software/hardware
collaboration with SDS 3.0. The underlying layer is based on NVMe SSDs,
connected to integrated disk-controller servers via RDMA/NVMe-oF high-speed
networks. The core comprises a distributed indexing layer (with ROW append
writes, global wear leveling, and hot and cold data flow) and a persistence
layer (with EC/replication and active-active metadata), providing data layout
and reliability assurance. The upper layer integrates block, file, and object
full-protocol gateways and includes built-in data services such as snapshots,
cloning, QoS, and remote replication. With end-to-end load balancing, this
single architecture can simultaneously meet the demands of extremely
low-latency small files, high-throughput large files, and mixed multi-protocol
workloads.

Transport Configuration - Physical
==================================

Item                       Number of
 No   Transport Type       Ports Used   Notes
---- -------------------- ------------ ----------------------------------------
  1  200Gb/s InfiniBand         12      The storage system used 4 200Gb/s
                                        InfiniBand connections to the switch,
                                        and the clients used 8 200Gb/s
                                        InfiniBand connections to the switch
                                        (2 x 200Gb/s InfiniBand per client).
  2  25GbE                       4      2 ports per controller, bonded in LACP
                                        mode, for the cluster interconnect.

Transport Configuration Notes
-----------------------------
The storage system used 4 x 200Gb/s InfiniBand ports for data transport (2 x
200Gb/s InfiniBand per controller), and each client connected to the switch
with 2 x 200Gb/s InfiniBand ports. 2 x 25GbE ports per controller (LACP mode)
were used for the cluster interconnect.

Switches - Physical
===================

                                                Total   Used
Item                                            Port    Port
 No   Switch Name             Switch Type       Count   Count  Notes
---- ----------------------- ----------------- ------- ------ ------------------
  1  Mellanox QM8790          200Gb/s               40      12  The storage system
                              InfiniBand                        used 4 connections
                                                                (2 ports per
                                                                controller node);
                                                                the clients used 8
                                                                connections (2
                                                                ports per client).
  2  Mellanox MSN2010-CB2F    25GbE                 18       4  The storage system
                                                                used 4 connections
                                                                (2 ports per
                                                                controller node).

Processing Elements - Physical
==============================

Item
 No   Qty  Type  Location            Description                  Processing Function
---- ---- ----- ------------------- ---------------------------- --------------------
  1    2  CPU   Storage Controller  Intel(R) Xeon(R) Platinum    NFS, RDMA, and
                                    8592+ CPU @ 1.9GHz with      Storage Controller
                                    64 cores                     functions
  2    4  CPU   Clients             Intel(R) Xeon(R) Gold 5418Y  NFS Client, Linux OS
                                    CPU @ 2.00GHz with 24 cores
  3    2  CPU   Clients             Intel(R) Xeon(R) Gold 5318Y  NFS Client, Linux OS
                                    CPU @ 2.10GHz with 24 cores
  4    1  CPU   Clients             Intel(R) Xeon(R) Gold 5512U  NFS Client, Linux OS
                                    CPU @ 2.10GHz with 28 cores

Processing Element Notes
------------------------
Each controller node contains 1 Intel(R) Xeon(R) Platinum 8592+ processor (64
cores; 1.9GHz). 2 clients each contain 2 Intel(R) Xeon(R) Gold 5418Y
processors (24 cores each; 2.00GHz). 1 client contains 2 Intel(R) Xeon(R) Gold
5318Y processors (24 cores each; 2.10GHz). 1 client contains 1 Intel(R)
Xeon(R) Gold 5512U processor (28 cores; 2.10GHz).

Memory - Physical
=================

                              Size in   Number of
Description                     GiB     Instances   Nonvolatile   Total GiB
---------------------------- --------- ----------- ------------- -----------
Memory of one controller           256           2  V                     512
node in the Sangfor Unified
Storage F8000 HA pair
Memory for each of the 4           256           4  V                    1024
clients
Grand Total Memory Gibibytes                                             1536

Memory Notes
------------
None

Stable Storage
==============

Sangfor Unified Storage does not use any internal memory to temporarily cache
write data to the underlying storage system; all writes are committed directly
to disk and protected by Sangfor Unified Storage distributed data protection
(EC 16+2 in this case), so no RAM battery protection is needed. Sangfor
Unified Storage is an active-active, highly available cluster system in which
the SSDs can be accessed from both controllers through dual ports. In the
event of a controller failure or power outage, the surviving controller takes
over for its peer and continues to provide access to the data.
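As a rough back-of-the-envelope sketch (not vendor tooling), the arithmetic
below relates the raw capacity of the 24 x 3.84TB data drives to the reported
Total Capacity and to the 16+2 erasure-coding overhead. It assumes the
reported Total Capacity is raw drive capacity and ignores metadata, spare, and
filesystem overheads.

  # Back-of-the-envelope capacity check; assumptions noted in the text above.
  awk 'BEGIN {
      raw_tb  = 24 * 3.84                 # 24 x 3.84 TB NVMe SSDs = 92.16 TB
      raw_tib = raw_tb * 1e12 / 2^40      # ~83.8 TiB, close to the 83.84 TiB reported
      usable  = raw_tib * 16 / (16 + 2)   # EC 16+2 leaves 16/18 ~= 88.9% for data
      printf "raw ~%.1f TiB, EC 16+2 data fraction ~%.1f TiB\n", raw_tib, usable
  }'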
Solution Under Test Configuration Notes
=======================================

1. Front-end and back-end I/O collaboration: Phoenix InFlash intelligent I/O
   technology deeply integrates the characteristics of flash memory to reduce
   service latency.
2. SDPC (Software Defined Persistent Cache) uses write-through technology to
   write cached data directly to the Persistent Memory Region (PMR) of the
   NVMe flash drives, with triple replication. This eliminates the risk of
   data loss during single-controller operation and removes the need for a
   battery backup unit, simplifying operations while maintaining high
   reliability.

Other Solution Notes
====================
None

Dataflow
========

Please reference the configuration diagram. Four clients were used to
generate the workload; one of the clients also acted as the Prime Client and
managed the other three workload clients. Each client was connected to the
Mellanox switch via two 200Gb/s InfiniBand links. Two controller nodes were
deployed, each connected to the data switch through two 200Gb/s InfiniBand
links. The clients mounted the shared directories using the NFSv3 protocol,
and the cluster provided access to the file system through all four 200Gb/s
InfiniBand ports connected to the data switch.

Other Notes
===========

All servers have been installed with Spectre/Meltdown patches to address
potential data-leakage risks.

Other Report Notes
==================

SANGFOR is a registered trademark of Sangfor Technologies Inc., Sangfor
Technologies Building, No.16 Xiandong Road, Xili Street, Nanshan District,
Shenzhen City, Guangdong Province 518055, China.

===============================================================================

Generated on Tue Oct 14 15:09:59 2025 by SpecReport
Copyright (C) 2016-2025 Standard Performance Evaluation Corporation