SPECstorage™ Solution 2020_swbuild Result

Copyright © 2016-2021 Standard Performance Evaluation Corporation

Oracle SPECstorage Solution 2020_swbuild = 504 Builds
Oracle ZFS Storage ZS9-2 HE Eight Drive Enclosure Hybrid Storage System Overall Response Time = 1.02 msec


Performance

Business Metric (Builds) | Average Latency (msec) | Builds Ops/Sec | Builds MB/Sec
                      21 |                  0.214 |          10500 |            92
                      42 |                  0.239 |          21000 |           177
                      63 |                  0.248 |          31501 |           262
                      84 |                  0.250 |          42001 |           346
                     105 |                  0.278 |          52502 |           431
                     126 |                  0.308 |          63002 |           516
                     147 |                  0.314 |          73503 |           600
                     168 |                  0.340 |          84003 |           685
                     189 |                  0.339 |          94503 |           770
                     210 |                  0.379 |         105004 |           854
                     231 |                  0.392 |         115504 |           939
                     252 |                  0.418 |         126005 |          1024
                     273 |                  0.461 |         136505 |          1109
                     294 |                  0.503 |         147006 |          1194
                     315 |                  0.577 |         157506 |          1278
                     336 |                  0.599 |         168007 |          1363
                     357 |                  0.636 |         178507 |          1447
                     378 |                  0.789 |         189007 |          1532
                     399 |                  0.992 |         199508 |          1617
                     420 |                  1.349 |         210009 |          1702
                     441 |                  2.251 |         220461 |          1786
                     462 |                  4.103 |         230955 |          1871
                     483 |                  6.598 |         240401 |          1947
                     504 |                  9.363 |         246013 |          1995
Performance Graph


Product and Test Information

Oracle ZFS Storage ZS9-2 HE Eight Drive Enclosure Hybrid Storage System
Tested by: Oracle
Hardware Available: October 5, 2021
Software Available: November 10, 2021
Date Tested: November 2021
License Number: 00073
Licensee Locations: Redwood Shores, CA, USA

The Oracle ZFS Storage ZS9-2 High-End (HE) system is a cost-effective, unified storage system that is ideal for performance-intensive, dynamic workloads. This enterprise-class storage system offers both NAS and SAN capabilities with industry-leading Oracle Database integration, in a highly available, clustered configuration. The Oracle ZFS Storage ZS9-2 HE provides simplified configuration, management, and industry-leading storage analytics. The performance-optimized platform leverages specialized read and write flash caching devices in the hybrid storage pool configuration, optimizing throughput and latency. The clustered Oracle ZFS Storage ZS9-2 HE system scales to 2.0 TB of memory per controller, includes 32 CPU cores per controller, and supports 20 PB of disk storage. The Oracle ZFS Storage Appliance delivers excellent value with integrated data services for file and block-level protocols, with connectivity over 32Gb FC, 100GbE, 40GbE, 25GbE, and 10GbE. Data services include 5 levels of compression, deduplication, encryption, snapshots, and replication. An advanced data integrity architecture and four RAID redundancy options optimized for different workloads provide a strong data protection foundation.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 2 | Storage Controller | Oracle | Oracle ZFS Storage ZS9-2 HE | Oracle ZFS Storage ZS9-2 HE, 2 x 32-core 2.6GHz Xeon Platinum 8358 CPUs, 2048 GB DDR4-3200 DIMM, 2 x 3.84TB NVMe Intel SSD boot drives.
2 | 64 | Memory | Oracle | Oracle ZFS Storage ZS9-2 HE | Oracle ZFS Storage ZS9-2, 32 x 64GB DDR4-3200 DIMMs per controller. Memory is order-configurable; a total of 2048GB was installed in each storage controller.
3 | 8 | Storage Drive Enclosure | Oracle | Oracle Storage Drive Enclosure DE3-24C | 24-drive-slot enclosure, SAS-3 connected: 20 x 14TB Western Digital 7200 RPM SAS-3 disk drives, 2 x Samsung 200GB SAS-3 SSDs, 2 x Samsung 7.68TB SAS-3 SSDs. Dual PSU.
4 | 160 | SAS-3 HDD | Oracle | WDC W7214A520ORA014T | 14TB Western Digital 7200 RPM SAS-3 disk drive. Drive selection is order-configurable; a total of 160 x 14TB drives were installed across the eight Oracle Storage Drive Enclosure DE3-24C enclosures.
5 | 16 | SAS-3 SSD | Oracle | Samsung MZILT960HBHQ | Samsung 200GB SAS-3 solid state disk. Drive selection is order-configurable; a total of 16 drives were installed across the eight Oracle Storage Drive Enclosure DE3-24C enclosures. These drives are used as write accelerators.
6 | 16 | SAS-3 SSD | Oracle | Samsung MZILT6HALA | Samsung 7.68TB SAS-3 solid state disk. Drive selection is order-configurable; a total of 16 drives were installed across the eight Oracle Storage Drive Enclosure DE3-24C enclosures. These drives are used for the ZFS L2 Adaptive Replacement Cache.
7 | 4 | Client | Oracle | Oracle x8-2 | Oracle x8-2 client node, 2 x 24-core 2.4GHz Intel Xeon Platinum 8260 processors, 256GB RAM, 2 x (2 x 100GbE). Used for benchmark load generation. One client also serves as the prime client.
8 | 4 | OS Drive | Oracle | HGST H101812SFSUN1.2T | HGST 1.2TB 10000 RPM SAS-3 disk drive; one was installed in each Oracle x8-2 client node as the OS boot drive.
9 | 1 | Switch | Arista | Arista DCS-7060CX-32S | Arista DCS-7060CX-32S, high-performance, low-latency 100/50/40/25/10 Gb/sec Ethernet switch.
10 | 12 | Network Interface Card | Oracle | Dual 100-Gigabit QSFP28 Ethernet | Mellanox ConnectX-5 VPI dual-port QSFP28 100GbE Ethernet HBA (Oracle part number 7603663); two in each ZS9-2 controller and two in each x8-2 client. Can be ordered at the same time as the Oracle ZS9-2 controllers.
11 | 1 | Switch | CDW | Netgear Gigabit Switch GS724Tv4 | Netgear Gigabit Switch GS724Tv4, used for the management and configuration network only.

Configuration Diagrams

  1. Oracle ZFS Storage ZS9-2 HE Storage Pool Diagram
  2. Oracle ZFS Storage ZS9-2 HE Filesystem and Network Configuration

Component Software

Item No | Component | Type | Name and Version | Description
1 | Oracle ZFS Storage | Storage Controller OS | 8.8.39 | Oracle ZFS Storage OS firmware.
2 | Solaris | Workload Client OS | Solaris 11.4 (11.4.37.0.1.101.1) | Workload client operating system.

Hardware Configuration and Tuning - Physical

Oracle ZFS Storage ZS9-2 HE
Parameter Name | Value | Description
MTU | 9000 | Network jumbo frames
Oracle x8-2 Client Node
Parameter Name | Value | Description
MTU | 9000 | Network jumbo frames

Hardware Configuration and Tuning Notes

The System Under Test has 100GbE Ethernet ports set to MTU 9000.
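On the Solaris workload clients, a jumbo-frame MTU of this kind is normally applied per datalink with dladm (a sketch only: net0 is a placeholder link name, and the appliance-side ports are configured through its administration interface rather than this command):

```shell
# Sketch for the Solaris clients; net0 is a placeholder datalink name.
dladm set-linkprop -p mtu=9000 net0

# Verify the effective MTU on the link:
dladm show-linkprop -p mtu net0
```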

Software Configuration and Tuning - Physical

Oracle x8-2 Client Nodes
Parameter Name | Value | Description
vers | 3 | NFS mount option set to version 3
rsize, wsize | 1048576 | NFS mount options for the read and write buffer sizes
forcedirectio | forcedirectio | NFS mount option set to forcedirectio
rpcmod:clnt_max_conns | 4 | Increases the number of NFS client connections from the default of 1 to 4

Software Configuration and Tuning Notes

Best-practice settings for the network, the NFS clients, and the Oracle ZFS Storage system over 100GbE Ethernet include mounting the Oracle ZFS Storage NFS shares on the workload clients with forcedirectio and with read and write NFS buffer sizes (rsize/wsize) of 1048576 bytes each.
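On Solaris, the tuning above corresponds to an NFS mount along these lines (a sketch: the server name, share path, and mount point are placeholders, not taken from the report; rpcmod:clnt_max_conns is a kernel tunable set in /etc/system and takes effect after a reboot):

```shell
# Placeholder server/share/mountpoint names; the options mirror the
# client tuning table above.
mount -F nfs -o vers=3,rsize=1048576,wsize=1048576,forcedirectio \
    zs9-2:/export/share01 /mnt/share01

# Kernel tunable, added to /etc/system on each client (reboot required):
#   set rpcmod:clnt_max_conns = 4
```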

Service SLA Notes

None

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 152 x 14TB HDD Oracle ZFS Storage ZS9-2 HE data pool drives | RAID-10 | Yes | 152
2 | 16 x 200GB SSD Oracle ZFS Storage ZS9-2 HE log drives | None | Yes | 16
3 | Samsung 7.68TB SAS-3 SSD ZS9-2 HE read cache drives | None | Yes | 16
4 | 3.84TB NVMe Intel SSD Oracle ZFS Storage ZS9-2 HE OS drives | Mirrored | No | 4
5 | HGST 1.2TB 10000 RPM SAS-3 disk drive x8-2 client node OS drives | None | No | 4
Number of Filesystems: 2
Total Capacity: 967 TiB
Filesystem Type: ZFS
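As a rough consistency check (an illustrative calculation, not part of the report), the reported 967 TiB of usable capacity follows from mirroring the 152 x 14 TB data HDDs:

```shell
# Mirroring (RAID-10) halves raw capacity; drive sizes are decimal TB.
hdd_count=152
hdd_tb=14
usable_tb=$(( hdd_count / 2 * hdd_tb ))                       # 76 x 14 = 1064 TB
usable_tib=$(( usable_tb * 1000000000000 / 1099511627776 ))   # decimal TB -> binary TiB
echo "${usable_tib} TiB"   # prints "967 TiB"
```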

Filesystem Creation Notes

Two ZFS storage pools are created in the System Under Test (one storage pool and one ZFS filesystem per Oracle ZFS Storage ZS9-2 HE controller). Each storage pool is configured with 76 HDDs, 8 write-accelerator SSDs (log devices), 8 L2 Adaptive Replacement Cache SSDs (read cache), and 4 hot-spare HDDs. The storage pools are configured via the administrative browser interface, with each storage controller assigned half of the disk drives, log devices, and cache devices. The storage pool profile is set to mirror (RAID-10) across the 76 data HDDs; the log and cache device profiles are set to striped. The log construct in each storage pool is the ZFS Intent Log (ZIL) for the pool, and the cache construct is the ZFS L2 Adaptive Replacement Cache for the pool. Each storage pool is configured with one ZFS filesystem, which in turn contains 16 ZFS filesystem shares; since each Oracle ZFS Storage ZS9-2 HE controller has one storage pool with 16 shares, the System Under Test presents 32 ZFS filesystem shares (32 NFS shares) in total. Each controller also contains 2 internal mirrored system disk drives used only for the controller's NAS operating system; these drives hold the NAS firmware exclusively and do not cache or store user data. Each Oracle x8-2 workload client mounts 8 of the NFS shares across 2 x 100GbE networks, accessing its 8 shares via 2 network paths over 8 NFS mounts (see the Oracle ZFS Storage ZS9-2 HE Filesystem and Network Configuration diagram).
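On a generic ZFS system, the equivalent per-controller pool layout could be sketched as follows (illustrative only: the appliance is actually configured through its browser interface, and the pool and device names, and the number of devices shown per vdev class, are placeholders rather than the real 76/8/8/4 complement):

```shell
# Illustrative zpool layout: mirrored data vdevs (RAID-10), striped log
# (ZIL) and cache (L2ARC) devices, and hot spares. Names are placeholders.
zpool create pool0 \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    log   c2t0d0 c2t1d0 \
    cache c3t0d0 c3t1d0 \
    spare c1t4d0 c1t5d0
```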

Storage and Filesystem Notes

All filesystems on both Oracle ZFS Storage ZS9-2 HE controllers are created with a database record size of 128KB. The logbias setting is left at latency (the default value) for each filesystem. These standard settings are controlled through Oracle ZFS Storage administration, using the administration browser or CLI interfaces.
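In generic ZFS terms these share settings correspond to the following properties (a sketch: the dataset name is a placeholder, and on the appliance itself the properties are set per share via the administration browser or CLI rather than with the zfs command):

```shell
# Generic ZFS equivalents of the share settings; pool0/share01 is a
# placeholder dataset name.
zfs set recordsize=128k pool0/share01
zfs set logbias=latency pool0/share01   # latency is the default value
```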

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 100GbE Ethernet | 8 | Each Oracle ZFS Storage ZS9-2 HE controller is networked via 4 x 100GbE Ethernet physical ports for data.
2 | 10GbE Ethernet | 2 | Each Oracle ZFS Storage ZS9-2 HE controller uses 1 x 10GbE Ethernet physical port for NAS configuration and management.
3 | 100GbE Ethernet | 8 | Each Oracle x8-2 client node is networked via 2 x 100GbE Ethernet physical ports for data.
4 | 10GbE Ethernet | 4 | Each Oracle x8-2 client node uses 1 x 10GbE Ethernet physical port for configuration and management.

Transport Configuration Notes

Each Oracle ZFS Storage controller uses 4 x 100GbE Ethernet ports, for a total of 8 x 100GbE ports. In the event of a controller failure, its IP addresses will be taken over by the surviving controller. All 100GbE ports are set to MTU 9000. There is 1 x 10GbE port per controller assigned to the administration interface; this interface is used only to manage the controller and does not take part in data services in the System Under Test.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Arista DCS-7060CX-32S | 100/50/40/25/10 Gb/sec Ethernet switch | 32 | 16 | All ports set to MTU 9000. Port count based on 100GbE.
2 | Netgear Gigabit Switch GS724Tv4 | 10/100/1000 Mb/sec Ethernet switch | 26 | 6 | Used only for management and configuration of the SUT.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 4 | CPU | Oracle ZFS Storage ZS9-2 HE | 2 x 32-core 2.6GHz Xeon Platinum 8358 CPUs | ZFS, TCP/IP, RAID/storage drivers, NFS
2 | 8 | CPU | Oracle x8-2 client node | 2 x 24-core 2.4GHz Xeon Platinum 8260 processors | TCP/IP, NFS

Processing Element Notes

Each Oracle ZFS Storage ZS9-2 HE controller contains 2 physical processors, each with 32 processing cores. Each Oracle x8-2 client contains 2 physical processors, each with 24 processing cores.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Memory in Oracle ZFS Storage ZS9-2 HE | 2048 | 2 | V | 4096
Memory in Oracle x8-2 clients | 256 | 4 | V | 1024
Grand Total Memory Gibibytes: 5120

Memory Notes

The Oracle ZFS Storage controllers' main memory is used for the ZFS Adaptive Replacement Cache (ARC, a read data cache) as well as for operating system memory. Oracle x8-2 client memory is not used for storage or caching on behalf of the Oracle ZFS Storage ZS9-2 HE controllers; it is used only by the client OS.

Stable Storage

The Stable Storage requirement is guaranteed by the ZFS Intent Log (ZIL), which logs writes and other filesystem-changing transactions to stable storage: write flash accelerator SSDs or HDDs, depending on the configuration. The System Under Test uses write flash accelerator SSDs. Writes and other filesystem-changing transactions are not acknowledged until the data is written to stable storage. The Oracle ZFS Storage Appliance is an active-active, high-availability cluster; in the event of a controller failure or power loss, each controller can take over for the other. Because the write flash accelerator SSDs and/or HDDs are located in shared disk shelves and can be accessed via the 16 backend SAS-3 channels from both controllers, the remaining active controller can complete any outstanding transactions using the ZIL. In the event of power loss to both controllers, the ZIL is used after power is restored to replay any writes and other filesystem changes.

Solution Under Test Configuration Notes

The System Under Test is the Oracle ZFS Storage ZS9-2 High End in an active-active failover configuration.

Other Solution Notes

None

Dataflow

Please reference the System Under Test diagram. The 4 Oracle x8-2 workload clients are used for benchmark load generation. Each workload client mounts 8 of the 32 filesystem shares provided by the Oracle ZFS Storage cluster via NFSv3; sixteen filesystem shares are exported from each Oracle ZFS Storage controller. Each of the two Oracle ZFS Storage controllers has 4 x 100GbE Ethernet ports for data service, with all ports assigned to separate networks. Each Oracle x8-2 workload client has 4 x 100GbE Ethernet ports, of which only 2 are used, mounting its 8 NFS shares over two networks. This is shown in the Oracle ZFS Storage ZS9-2 HE Filesystem and Network Configuration diagram.

Other Notes

Oracle and ZFS are registered trademarks of Oracle Corporation in the U.S. and/or other countries. Intel and Xeon are registered trademarks of the Intel Corporation in the U.S. and/or other countries.

Other Report Notes

The test sponsor attests that, as of the date of publication, the CVE-2017-5754 (Meltdown), CVE-2017-5753 (Spectre variant 1), and CVE-2017-5715 (Spectre variant 2) patches are disabled in the system as tested and documented. The product supports turning this protection on or off; it is disabled (off) for this System Under Test, which is also the product default. These mitigations may be enabled through the standard administrative interface, and there may be performance impacts when they are enabled.


Generated on Sun Dec 19 21:27:00 2021 by SpecReport