SPEC SFS(R)2014_eda Result

Oracle : Oracle ZFS Storage ZS7-2
SPEC SFS2014_eda = 900 Job_Sets (Overall Response Time = 0.61 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency      Job_Sets     Job_Sets
  (Job_Sets)     (msec)       Ops/Sec      MB/Sec
------------  ------------  ------------  ------------
          90           0.4         40502           726
         180           0.4         81004          1452
         270           0.5        121506          2179
         360           0.5        162008          2904
         450           0.5        202510          3630
         540           0.5        243012          4357
         630           0.6        283514          5083
         720           0.7        324016          5810
         810           0.9        364518          6537
         900           1.5        405020          7263

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                    Oracle ZFS Storage ZS7-2                    |
+---------------------------------------------------------------+
Tested by            Oracle
Hardware Available   November 13, 2018
Software Available   November 13, 2018
Date Tested          October 2018
License Number       00073
Licensee Locations   Redwood Shores, CA, USA

The Oracle ZFS Storage ZS7-2 is a high-end, high-performance, all-flash
storage system that offers enterprise-class NAS and SAN capabilities with
industry-leading Oracle Database integration in a cost-effective,
high-availability configuration. The Oracle ZFS Storage ZS7-2 provides
simplified setup, management, and industry-leading storage analytics. The
performance-optimized platform uses specialized read and write flash caching
devices in its hybrid storage configuration for high throughput and low
latency. The Oracle ZFS Storage ZS7-2 can scale to 1.5TB of memory and 48
CPU cores per controller, and up to 3.6PB of all-flash storage. Oracle ZFS
Storage Appliances deliver economic value with bundled data services for
file and block-level protocols, with connectivity over 40GbE, 10GbE,
InfiniBand, and 32Gb FC. Data may be managed using Compression,
Deduplication, Encryption, Thin Provisioning, Real-Time Analytics, Virus
Scan, Snapshots, ZFS RAID Data Protection, Remote Replication, NDMP, and
High Availability Clustering.

Solution Under Test Bill of Materials
=====================================

Item
 No   Qty   Type        Vendor   Model/Name   Description
---- -----  ----------  -------  -----------  ---------------------------------
  1     2   Storage     Oracle   Oracle ZFS   Oracle ZFS Storage ZS7-2, 2 x
            Controller           Storage      2.10GHz Intel Xeon Platinum 8160
                                 ZS7-2        CPU. 1.5TB DDR4-2666 LRDIMM. 2 x
                                              10TB SAS3 HGST boot drives.
                                              Support for SAS3, IB, 10GbE.
  2    48   Memory      Oracle   Oracle ZFS   Oracle ZFS Storage ZS7-2, 48 x
                                 Storage      64GB DDR4-2666 LRDIMM. Memory is
                                 ZS7-2        order configurable; a total of
                                              1.5TB was installed in each
                                              storage controller.
  3     6   Storage     Oracle   Oracle       24 drive slot enclosure, SAS3
            Drive                Storage      connected, 24 x 3TB HGST SSD.
            Enclosure            Drive        Dual PSU.
                                 Enclosure
                                 DE3-24P
  4     6   Storage     Oracle   Oracle       24 drive slot enclosure, SAS3
            Drive                Storage      connected, 20 x 3TB HGST SSD and
            Enclosure            Drive        4 x 200GB HGST SSD. Dual PSU.
                                 Enclosure
                                 DE3-24P
  5   264   SAS3 SSD    Oracle   7118008      3TB HGST SSD. Drive selection is
                                              order configurable; a total of
                                              264 x 3TB HGST SSD drives were
                                              installed across all Oracle
                                              Storage Drive Enclosure DE3-24P.
  6    24   SAS3 SSD    Oracle   7115942      200GB HGST SSD. Drive selection
                                              is order configurable; a total
                                              of 24 x 200GB HGST SSD drives
                                              were installed across all Oracle
                                              Storage Drive Enclosure DE3-24P.
                                              These drives are used as write
                                              flash accelerators.
  7     8   Client      Oracle   Oracle       Oracle X6-2 Client Node, 2 x
                                 X6-2         2.20GHz Intel Xeon CPU E5-2699
                                              v4. 512GB RAM. 2 x 10GbE. Used
                                              for benchmark load generation.
  8     8   OS Drive    Oracle   7093013      600GB HGST hard drive. 8 x 600GB
                                              HGST hard drives, one installed
                                              in each Oracle X6-2 Client Node
                                              as the OS boot drive.
  9     1   Switch      Oracle   Oracle       Oracle Switch ES2-64, high-
                                 Switch       performance, low-latency
                                 ES2-64       10/40 Gb/sec Ethernet switch.

Configuration Diagrams
======================

1) sfs2014-20181022-00048.config1.jpeg (see SPEC SFS2014 results webpage)
2) sfs2014-20181022-00048.config2.jpeg (see SPEC SFS2014 results webpage)

Component Software
==================

Item                               Name and
 No   Component    Type            Version   Description
---- ------------ --------------- --------- ----------------------------------
  1   Oracle ZFS   Storage         8.8       Oracle ZFS Storage OS for storage
      Storage OS   Controller                controllers.
  2   Oracle       Client Node     7.3       Oracle Linux OS for client nodes.
      Linux OS

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                       Oracle ZFS Storage ZS7-2                        |
+----------------------------------------------------------------------+
Parameter Name        Value        Description
--------------------- ------------ ----------------------------------------
MTU                   9000         Network jumbo frames
svcadm enable power   enabled      Oracle ZFS Storage power service
poweradm set          none         Oracle ZFS Storage power service
administrative-
authority=none

+----------------------------------------------------------------------+
|                         Oracle X6 Client Node                         |
+----------------------------------------------------------------------+
Parameter Name        Value        Description
--------------------- ------------ ----------------------------------------
MTU                   9000         Network jumbo frames

Hardware Configuration and Tuning Notes
---------------------------------------

The Oracle ZFS Storage ZS7-2 controllers and the Oracle X6-2 client nodes
both had their 10GbE Ethernet ports set to an MTU of 9000 (jumbo frames).
Power management, which controls Intel processor power states, was set to
an administrative-authority of "none". Power management is controlled
through the Oracle ZFS Storage ZS7-2 management BUI.

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                       Oracle X6-2 Client Nodes                        |
+----------------------------------------------------------------------+
Parameter Name        Value        Description
--------------------- ------------ ----------------------------------------
vers                  3            NFS mount option set to version 3
rsize,wsize           1048576      NFS mount option for data block size
sync                  sync         NFS mount option set to sync I/O
net.ipv4.tcp_rmem,    10000000     Linux kernel TCP send and receive
net.ipv4.tcp_wmem                  buffers
net.core.somaxconn    65536       Linux kernel maximum socket connections

Software Configuration and Tuning Notes
---------------------------------------

Communication between the Oracle X6-2 client nodes and the Oracle ZFS
Storage ZS7-2 controllers over 10GbE Ethernet was tuned to maximize the
amount of data transferred per operation and to minimize overhead. This
includes mounting the Oracle ZFS Storage ZS7-2 filesystems on the Oracle
X6-2 clients with sync I/O and read and write sizes of 1048576 bytes, along
with increasing the Oracle X6-2 client send and receive buffer sizes to
10000000 bytes, as sketched below.
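The following is a minimal sketch of the client-side tuning described above.
The server name (zs7-2a), export path, and mount point are hypothetical
placeholders; the minimum/default fields of tcp_rmem/tcp_wmem are
illustrative, since the report specifies only the 10000000-byte maximum and
the mount options shown in the table.

    # Linux kernel TCP buffer and socket tuning (sketch; only the maximum
    # buffer size and the somaxconn value are taken from the tested
    # settings, the min/default buffer fields are illustrative).
    sysctl -w net.ipv4.tcp_rmem="4096 87380 10000000"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 10000000"
    sysctl -w net.core.somaxconn=65536

    # NFSv3 mount with 1048576-byte transfers and synchronous I/O.
    # Hostname and paths are placeholders, not the tested values.
    mount -t nfs -o vers=3,rsize=1048576,wsize=1048576,sync \
        zs7-2a:/export/fs01 /mnt/fs01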
Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                       Stable
 No   Description                       Data Protection   Storage    Qty
---- --------------------------------- ----------------- --------- ------
  1   3.2TB SSD Oracle ZFS Storage      RAID-10           Yes        264
      ZS7-2 Data Pool Drives
  2   200GB SSD Oracle ZFS Storage      None              Yes         24
      ZS7-2 Log Drives
  3   10TB HGST Oracle ZFS Storage      Mirrored          No           4
      ZS7-2 OS Drives
  4   600GB HGST Oracle X6-2 Client     None              No           8
      Node OS Drives

Number of Filesystems   64
Total Capacity          366TiB
Filesystem Type         ZFS

Filesystem Creation Notes
-------------------------

Two ZFS storage pools are created overall in the SUT (1 storage pool per
Oracle ZFS Storage ZS7-2 controller). Each controller's storage pool is
configured with 116 SSD drives, 24 write flash accelerators (log devices),
and 4 spare SSD drives. When configuring a storage pool via the
administrative HTML interface of each Oracle ZFS Storage ZS7-2 storage
controller, you are first asked to select the number of disk drives and log
devices to use per tray. The storage pools are set up to mirror the data
(RAID-10) across all 116 data SSD drives. (Note: when configuring storage
pools on the Oracle ZFS Storage ZS7-2 controllers, this is the "Mirrored"
data profile.) The write flash accelerators in each storage pool are used
for the ZFS Intent Log (ZIL). Each storage pool is configured with 32 ZFS
filesystems. Since each controller is configured with 1 storage pool and
each storage pool contains 32 ZFS filesystems, the SUT has 64 ZFS
filesystems in total.

There are 2 internal mirrored system disk drives per Oracle ZFS Storage
ZS7-2 controller; they are used only for the controller's core operating
system. These drives are not used for data cache or for storing user data.

Storage and Filesystem Notes
----------------------------

All filesystems on both Oracle ZFS Storage ZS7-2 controllers are created
with a Database Record Size setting of 128KB. The logbias setting is set to
latency for each filesystem. This is a common practice for storage
solutions with the Oracle ZFS Storage ZS7-2 storage controllers; a generic
equivalent is sketched below.
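On the Oracle ZFS Storage ZS7-2 these settings are made through the
management BUI; on a general-purpose OpenZFS system the same intent maps to
standard ZFS properties. A hypothetical sketch, with placeholder pool and
filesystem names:

    # Generic OpenZFS equivalent of the settings described above; "pool0"
    # and "fs01" are placeholders, not names from the tested SUT. A 128KB
    # record size and logbias=latency are also the OpenZFS defaults.
    zfs create -o recordsize=128k -o logbias=latency pool0/fs01

    # Confirm the resulting properties.
    zfs get recordsize,logbias pool0/fs01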
Transport Configuration - Physical
==================================

Item                   Number of
 No   Transport Type   Ports Used   Notes
---- ---------------- ------------ -----------------------------------------
  1   10GbE Ethernet   16           Each Oracle ZFS Storage ZS7-2 controller
                                    uses 8 x 10GbE Ethernet physical ports
                                    for dataflow
  2   10GbE Ethernet   2            Each Oracle ZFS Storage ZS7-2 controller
                                    uses 1 x 10GbE Ethernet physical port
                                    for management
  3   10GbE Ethernet   16           Each Oracle X6-2 client node uses 2 x
                                    10GbE Ethernet physical ports for
                                    dataflow
  4   10GbE Ethernet   8            Each Oracle X6-2 client node uses 1 x
                                    10GbE Ethernet physical port for
                                    management

Transport Configuration Notes
-----------------------------

Each Oracle ZFS Storage ZS7-2 controller uses 8 active 10GbE Ethernet
ports, so the two controllers use 16 active ports in total. In the event of
a controller failure, its IP addresses are taken over by the surviving
controller. All 10GbE ports are set up with an MTU size of 9000. One 10GbE
port per controller is assigned to the management interface; this interface
is used only to manage the controller and does not take part in dataflow.

The Oracle X6-2 client nodes each use 2 x 10GbE Ethernet ports for
dataflow, each set to an MTU of 9000. The Oracle X6-2 client nodes each
also use 1 x 10GbE Ethernet port for management; these interfaces are not
used for dataflow.

Each of the 16 active physical 10GbE Ethernet ports on the Oracle X6-2
client nodes is assigned 6 vnic IP addresses. Each of the 16 active
physical 10GbE Ethernet ports on the Oracle ZFS Storage ZS7-2 is also
assigned 6 vnic IP addresses. On the Oracle ZFS Storage ZS7-2, vnics are
configured through the management BUI. On the Oracle X6-2 client nodes,
vnics are configured in the Linux OS under /etc/sysconfig/network-scripts,
as sketched below. Please reference the vnic diagram for the IP layout.
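A minimal sketch of one such client-side port and vnic definition follows.
The interface name and addressing are hypothetical placeholders, not the
tested values; each physical port would carry six such alias files.

    # Hypothetical /etc/sysconfig/network-scripts/ifcfg-ens1f0
    # (physical port with jumbo frames; IP aliases inherit the MTU).
    DEVICE=ens1f0
    BOOTPROTO=static
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    MTU=9000
    ONBOOT=yes

    # Hypothetical /etc/sysconfig/network-scripts/ifcfg-ens1f0:0
    # (one of the six vnic IP addresses on this port).
    DEVICE=ens1f0:0
    BOOTPROTO=static
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    ONBOOT=yes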
Switches - Physical
===================

                                             Total   Used
Item                                         Port    Port
 No   Switch Name            Switch Type     Count   Count   Notes
---- ---------------------- --------------- ------- ------- --------------------
  1   Oracle Switch ES2-64   10/40GbE        46      32      All ports set up
                             Ethernet                        for jumbo frame
                             Switch                          support

Processing Elements - Physical
==============================

Item
 No   Qty   Type   Location        Description               Processing Function
---- ----- ------ --------------- ------------------------- --------------------
  1    4    CPU    Oracle ZFS      2.10GHz Intel Xeon        ZFS, TCP/IP,
                   Storage ZS7-2   Platinum 8160 CPU         RAID/Storage
                                                             Drivers, NFS
  2   16    CPU    Oracle X6-2     2.20GHz Intel Xeon CPU    TCP/IP, NFS
                   Client Node     E5-2699 v4

Processing Element Notes
------------------------

Each Oracle ZFS Storage ZS7-2 controller contains 2 physical processors,
each with 24 processing cores. Each Oracle X6-2 client node contains 2
physical processors, each with 22 processing cores.

Memory - Physical
=================

                            Size in   Number of
Description                 GiB       Instances   Nonvolatile   Total GiB
-------------------------- --------- ----------- ------------- -----------
Memory in Oracle ZFS        1500      2           V             3000
Storage ZS7-2
Memory in Oracle X6-2       768       8           V             6144
clients

Grand Total Memory Gibibytes                                    9144

Memory Notes
------------

The Oracle ZFS Storage ZS7-2 controllers' main memory is used for the
Adaptive Replacement Cache (ARC), the data cache, and operating system
memory. Oracle X6-2 client memory is not used for storage or caching by the
Oracle ZFS Storage ZS7-2 controllers; it is used only by the clients.

Stable Storage
==============

The stable storage requirement is guaranteed by the ZFS Intent Log (ZIL),
which logs writes and other filesystem-changing transactions to either a
write flash accelerator or a disk drive. Writes and other
filesystem-changing transactions are not acknowledged until the data is
written to stable storage. Since this is an active-active, highly available
cluster, in the event of a controller failure or power loss the other
active controller can take over for the failed controller. Because the
write flash accelerators and disk drives are located in the disk shelves
and can be accessed via the 4 backend SAS channels from both controllers,
the remaining active controller can complete any outstanding transactions
using the ZIL. In the event of power loss to both controllers, the ZIL is
used after power is restored to reinstate any writes and other filesystem
changes.

Solution Under Test Configuration Notes
=======================================

The system under test consists of Oracle ZFS Storage ZS7-2 high-end storage
controllers set up in an active-active cluster configuration with failover
capabilities.

The Oracle X6-2 client nodes reached end of life (EOL) in February 2018.
Third parties still sell the model with the original Oracle warranty and
support. In addition, this model line has been refreshed and is available
from Oracle as the Oracle X7-2.

Other Solution Notes
====================

A non-default WARMUP_TIME of 1200 seconds was used for this benchmark run.

Dataflow
========

Please reference the SUT diagram. The 8 Oracle X6-2 client nodes are used
for benchmark load generation. Each Oracle X6-2 client node mounts 8 of the
64 total ZFS filesystems of the Oracle ZFS Storage ZS7-2 controllers via
NFSv3. Half of the filesystems are shared from each Oracle ZFS Storage
ZS7-2 controller. Each of the two Oracle ZFS Storage ZS7-2 controllers has
8 x 10GbE Ethernet active ports for I/O dataflow, all assigned separate
subnets. Each Oracle X6-2 client node has 2 x 10GbE Ethernet ports and
accesses half of its NFS mounts through each port. There is a one-to-one
match between the 16 total 10GbE Ethernet client ports and the 16 total
10GbE Ethernet controller ports used for I/O dataflow (the non-management
ports). In effect, this spreads the I/O load evenly across the filesystem
mounts, network interfaces, and storage pools of the Oracle ZFS Storage
ZS7-2 cluster SUT.

Other Notes
===========

Oracle and ZFS are registered trademarks of Oracle Corporation in the U.S.
and/or other countries. Intel and Xeon are registered trademarks of Intel
Corporation in the U.S. and/or other countries.

Other Report Notes
==================

The test sponsor attests, as of the date of publication, that
CVE-2017-5754 (Meltdown), CVE-2017-5753 (Spectre variant 1), and
CVE-2017-5715 (Spectre variant 2) are mitigated in the system as tested and
documented. The protection can be disabled, but it was enabled for this
tested run.

===============================================================================

Generated on Wed Mar 13 16:27:00 2019 by SpecReport
Copyright (C) 2016-2019 Standard Performance Evaluation Corporation