SPEC SFS®2014_vda Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

IBM Corporation | SPEC SFS2014_vda = 1700 Streams
IBM DeepFlash 150 with Spectrum Scale 4.2.1 | Overall Response Time = 4.12 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
170  | 1.570  | 1700  | 784
340  | 1.820  | 3401  | 1568
510  | 1.940  | 5102  | 2355
680  | 2.340  | 6803  | 3134
850  | 2.740  | 8503  | 3917
1020 | 2.730  | 10204 | 4713
1190 | 2.830  | 11904 | 5491
1360 | 3.210  | 13605 | 6281
1530 | 3.930  | 15306 | 7065
1700 | 30.000 | 16980 | 7821
Performance Graph


Product and Test Information

IBM DeepFlash 150 with Spectrum Scale 4.2.1
Tested by | IBM Corporation
Hardware Available | July 2016
Software Available | July 2016
Date Tested | August 2016
License Number | 11
Licensee Locations | Almaden, CA USA

IBM DeepFlash 150 provides an essential big-data building block for petabyte-scale, cost-constrained, high-density and high-performance storage environments. It delivers the response times of an all-flash array with extraordinarily competitive cost benefits. DeepFlash 150 is an ideal choice to accelerate big data systems and other workloads that require high performance and sustained throughput.

IBM Spectrum Scale provides unified file and object software-defined storage for high-performance, large-scale workloads on-premises or in the cloud. When deployed together, DeepFlash 150 and IBM Spectrum Scale create a storage solution that provides optimal workload flexibility, an extraordinarily low cost-to-performance ratio, and the data lifecycle management and storage services required by enterprises grappling with high-volume, high-velocity data challenges.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 2 | DeepFlash 150 | IBM | 9847-IF2 | Each DeepFlash 150 includes 64 Flash module storage slots. In this particular model half of the slots are filled, each with an 8 TB Flash module.
2 | 12 | Spectrum Scale Nodes | Lenovo | x3650-M4 | Spectrum Scale client and server nodes. Lenovo model number 7915D3x.
3 | 2 | InfiniBand Switch | Mellanox | SX6036 | 36-port non-blocking managed 56 Gbps InfiniBand/VPI SDN switch.
4 | 1 | Ethernet Switch | SMC Networks | SMC8150L2 | 50-port 10/100/1000 Mbps Ethernet switch.
5 | 20 | InfiniBand Adapter | Mellanox | MCX456A-F | 2-port PCI FDR InfiniBand adapter used in the Spectrum Scale client nodes.
6 | 4 | InfiniBand Adapter | Mellanox | MCX354A-FCBT | 2-port PCI FDR InfiniBand adapter used in the Spectrum Scale server nodes.
7 | 2 | Host Bus Adapter | Avago Technologies | SAS 9300-8e | 2-port PCI 12 Gbps SAS adapter used in one of the Spectrum Scale server nodes for attachment to the DeepFlash 150.
8 | 2 | Host Bus Adapter | Avago Technologies | SAS 9305-16e | 4-port PCI 12 Gbps SAS adapter used in one of the Spectrum Scale server nodes for attachment to the DeepFlash 150.

Configuration Diagrams

  1. Solution Under Test Diagram

Component Software

Item No | Component | Type | Name and Version | Description
1 | Spectrum Scale Nodes | Spectrum Scale File System | 4.2.1 | The Spectrum Scale File System is a distributed file system that runs on both the server nodes and the client nodes to form a cluster. The cluster allows for the creation and management of single-namespace file systems.
2 | Spectrum Scale Nodes | Operating System | Red Hat Enterprise Linux 7.2 for x86_64 | The operating system on the client nodes was 64-bit Red Hat Enterprise Linux version 7.2.
3 | DeepFlash 150 | Storage Server | 2.1.2 | The software runs on the IBM DeepFlash 150 and is installed with the included DFCLI tool.

Hardware Configuration and Tuning - Physical

Spectrum Scale Client Nodes
Parameter Name | Value | Description
verbsPorts | mlx5_0/1/1 mlx5_1/1/2 | InfiniBand device names and port numbers.
verbsRdma | enable | Enables InfiniBand RDMA transfers between Spectrum Scale client nodes and server nodes.
verbsRdmaSend | 1 | Enables the use of InfiniBand RDMA for most Spectrum Scale daemon-to-daemon communication.
Hyper-Threading | disabled | Disables the use of two threads per core in the CPU. The setting was changed in the BIOS menus of the client nodes.
Spectrum Scale Server Nodes
Parameter Name | Value | Description
verbsPorts | mlx4_0/1/1 mlx4_0/2/2 mlx4_1/1/1 mlx4_1/2/2 | InfiniBand device names and port numbers.
verbsRdma | enable | Enables InfiniBand RDMA transfers between Spectrum Scale client nodes and server nodes.
verbsRdmaSend | 1 | Enables the use of InfiniBand RDMA for most Spectrum Scale daemon-to-daemon communication.
scheduler | noop | Specifies the I/O scheduler used for the DeepFlash 150 block devices.
nr_requests | 32 | Specifies the number of I/O block layer request descriptors per request queue for the DeepFlash 150 block devices.
Hyper-Threading | disabled | Disables the use of two threads per core in the CPU. The setting was changed in the BIOS menus of the server nodes.

Hardware Configuration and Tuning Notes

The first three configuration parameters were set using the "mmchconfig" command on one of the nodes in the cluster. The verbs settings in the table above allow for efficient use of the InfiniBand infrastructure. The settings determine when data are transferred over IP and when they are transferred using the verbs protocol. The InfiniBand traffic went through two switches, item 3 in the Bill of Materials. The block device parameters "scheduler" and "nr_requests" were set on the server nodes with echo commands for each DeepFlash device. The parameters can be found at "/sys/block/DEVICE/queue/{scheduler,nr_requests}", where DEVICE is the block device name. The last parameter disabled Hyper-Threading on the client and server nodes.
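
For reference, settings of this kind are typically applied with commands of the following form; the node class names and the block device name below are illustrative placeholders, not values taken from the tested configuration.

  # Spectrum Scale verbs settings, run once from any node in the cluster
  # ("clientNodes" and "serverNodes" are hypothetical node classes):
  mmchconfig verbsPorts="mlx5_0/1/1 mlx5_1/1/2" -N clientNodes
  mmchconfig verbsPorts="mlx4_0/1/1 mlx4_0/2/2 mlx4_1/1/1 mlx4_1/2/2" -N serverNodes
  mmchconfig verbsRdma=enable,verbsRdmaSend=1

  # Block device settings, run on each server node for every DeepFlash 150
  # block device (sdX is a placeholder for the actual device name):
  echo noop > /sys/block/sdX/queue/scheduler
  echo 32 > /sys/block/sdX/queue/nr_requests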

Software Configuration and Tuning - Physical

Spectrum Scale - All Nodes
Parameter Name | Value | Description
ignorePrefetchLUNCount | yes | Specifies that only maxMBpS, and not the number of LUNs, should be used to dynamically allocate prefetch threads.
maxblocksize | 1M | Specifies the maximum file system block size.
maxMBpS | 10000 | Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node.
maxStatCache | 0 | Specifies the number of inodes to keep in the stat cache.
numaMemoryInterleave | yes | Enables memory interleaving on NUMA-based systems.
pagepoolMaxPhysMemPct | 90 | Specifies the percentage of physical memory that can be assigned to the page pool.
scatterBufferSize | 256K | Specifies the size of the scatter buffers.
workerThreads | 1024 | Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata.
Spectrum Scale - Server Nodes
Parameter Name | Value | Description
nsdBufSpace | 70 | Sets the percentage of the page pool that is used for NSD buffers.
nsdMaxWorkerThreads | 3072 | Sets the maximum number of threads to use for block-level I/O on the NSDs.
nsdMinWorkerThreads | 3072 | Sets the minimum number of threads to use for block-level I/O on the NSDs.
nsdMultiQueue | 64 | Specifies the maximum number of queues to use for NSD I/O.
nsdThreadsPerDisk | 3 | Specifies the maximum number of threads to use per NSD.
nsdThreadsPerQueue | 48 | Specifies the maximum number of threads to use per NSD I/O queue.
nsdSmallThreadRatio | 1 | Specifies the ratio of small thread queues to large thread queues.
pagepool | 80G | Specifies the size of the cache on each node. On server nodes the page pool is used for NSD buffers.
Spectrum Scale - Client Nodes
Parameter Name | Value | Description
pagepool | 16G | Specifies the size of the cache on each node.

Software Configuration and Tuning Notes

The configuration parameters were set using the "mmchconfig" command on one of the nodes in the cluster. The parameters listed in the table above reflect values that might be used in a typical streaming environment with Linux nodes.
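
As an illustration, parameters of this kind are set cluster-wide or per set of nodes with "mmchconfig"; the node class names below are hypothetical placeholders rather than names from the tested cluster.

  # Cluster-wide values:
  mmchconfig maxMBpS=10000,maxblocksize=1M,maxStatCache=0,workerThreads=1024
  # Values scoped to a set of nodes ("serverNodes" and "clientNodes" are
  # illustrative node classes):
  mmchconfig pagepool=80G,nsdBufSpace=70 -N serverNodes
  mmchconfig pagepool=16G -N clientNodes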

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | 64 7 TB LUNs from two DeepFlash 150 systems. | Spectrum Scale synchronous replication | Yes | 64
2 | 300 GB 10K mirrored HDD pair in Spectrum Scale client nodes used to store the OS. | RAID-1 | No | 10
Number of Filesystems | 1
Total Capacity | 245 TiB
Filesystem Type | Spectrum Scale File System

Filesystem Creation Notes

A single Spectrum Scale file system was created with a 1 MiB block size for data and metadata, a 4 KiB inode size, a 32 MiB log size, 2 replicas for both data and metadata, and relatime semantics. The file system was spread across all of the Network Shared Disks (NSDs).
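
For illustration only, a file system with the attributes described above could be created along the following lines; the file system device name and stanza file path are placeholders, not the values used in the tested configuration.

  # -B block size, -i inode size, -L log size, -m/-r default metadata/data
  # replicas, -M/-R maximum replicas, -S relatime for relaxed atime updates.
  mmcrfs fs1 -F /tmp/nsd.stanza -B 1M -i 4K -L 32M -m 2 -M 2 -r 2 -R 2 -S relatime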

The client nodes each had an ext4 file system that hosted the operating system.

Storage and Filesystem Notes

Each DeepFlash 150 presented 32 JBOF LUNs to one of the server nodes. An NSD was created from each LUN. All of the NSDs attached to the first server node were placed in one failure group, and all of the NSDs attached to the second server node were placed in a second failure group. The file system was configured with 2 data replicas and 2 metadata replicas, so a copy of all data and metadata was present on each DeepFlash 150.
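
As a sketch of how this layout might be expressed, an NSD stanza file passed to "mmcrnsd" could map the LUNs served by the first and second server node into failure groups 1 and 2; the NSD, device, and server names below are hypothetical.

  %nsd:
    nsd=dflash001
    device=/dev/sdb
    servers=server1
    usage=dataAndMetadata
    failureGroup=1
  %nsd:
    nsd=dflash033
    device=/dev/sdb
    servers=server2
    usage=dataAndMetadata
    failureGroup=2

  # Create the NSDs from the stanza file:
  mmcrnsd -F /tmp/nsd.stanza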

The cluster used a two-tier architecture. The client nodes performed the file-level operations and transmitted the resulting data requests to the server nodes, which performed the block-level operations. In Spectrum Scale terminology the load generators are NSD clients and the server nodes are NSD servers. The NSDs were the storage devices specified when creating the Spectrum Scale file system.

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 1 GbE cluster network | 12 | Each node connects to a 1 GbE administration network with MTU=1500.
2 | FDR InfiniBand cluster network | 28 | Each client node has 2 FDR links and each server node has 4 FDR links to a shared FDR InfiniBand cluster network.

Transport Configuration Notes

The 1 GbE network was used for administrative purposes. All benchmark traffic flowed through the Mellanox SX6036 InfiniBand switches. Each client node had two active InfiniBand ports. Each server node had four active InfiniBand ports. Each client node InfiniBand port was on a separate FDR fabric for RDMA connections between nodes.
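
In Spectrum Scale, each verbsPorts entry has the form device/port/fabric, so the values in the hardware tuning tables encode this fabric assignment directly; the lines below simply annotate the values reported above.

  # Client nodes: one port on each of the two FDR fabrics.
  #   mlx5_0/1/1  -> adapter mlx5_0, port 1, fabric 1
  #   mlx5_1/1/2  -> adapter mlx5_1, port 1, fabric 2
  # Server nodes: two ports on each fabric.
  #   mlx4_0/1/1 mlx4_1/1/1 -> fabric 1
  #   mlx4_0/2/2 mlx4_1/2/2 -> fabric 2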

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | SMC 8150L2 | 10/100/1000 Mbps Ethernet | 50 | 12 | The default configuration was used on the switch.
2 | Mellanox SX6036 #1 | FDR InfiniBand | 36 | 14 | The default configuration was used on the switch.
3 | Mellanox SX6036 #2 | FDR InfiniBand | 36 | 14 | The default configuration was used on the switch.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 20 | CPU | Spectrum Scale client nodes | Intel Xeon CPU E5-2630 v2 @ 2.60 GHz, 6-core | Spectrum Scale client, load generator, device drivers
2 | 4 | CPU | Spectrum Scale server nodes | Intel Xeon CPU E5-2630 v2 @ 2.60 GHz, 6-core | Spectrum Scale NSD server, device drivers

Processing Element Notes

Each of the Spectrum Scale client nodes had 2 physical processors. Each processor had 6 cores with one thread per core.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
Spectrum Scale node system memory | 128 | 12 | V (volatile) | 1536
Grand Total Memory Gibibytes | 1536

Memory Notes

In the client nodes Spectrum Scale reserves a portion of the physical memory for file data and metadata caching. In the server nodes a portion of the physical memory is reserved for NSD buffers. A portion of the memory is also reserved for buffers used for node to node communication.

Stable Storage

Stable writes and commit operations in Spectrum Scale are not acknowledged until the NSD server receives an acknowledgment of write completion from the underlying storage system, which in this case is the DeepFlash 150. The DeepFlash 150 does not have a cache, so writes are acknowledged once the data has been written to the flash cards.

Solution Under Test Configuration Notes

The solution under test was a Spectrum Scale cluster optimized for streaming environments. The NSD client nodes were also the load generators for the benchmark. The benchmark was executed from one of the client nodes. All of the Spectrum Scale nodes were connected to a 1 GbE switch and two FDR InfiniBand switches. Each DeepFlash 150 was connected to a single server node via four 12 Gbps SAS connections. Each server node had 2 SAS adapters. One server node had two Avago SAS 9300-8e adapters, and the other server node had two Avago SAS 9305-16e adapters.

Other Solution Notes

None

Dataflow

The 10 Spectrum Scale client nodes were the load generators for the benchmark. Each load generator had access to the single-namespace Spectrum Scale file system. The benchmark accessed a single mount point on each load generator. Each of these mount points corresponded to a single shared base directory in the file system. The NSD clients processed the file operations, and the data requests to and from disk were serviced by the Spectrum Scale server nodes.
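
A minimal sketch of this arrangement, reusing the illustrative file system name from the earlier examples: the file system is mounted on every node, and the benchmark is pointed at a shared directory beneath that mount.

  # Mount the Spectrum Scale file system on all nodes in the cluster:
  mmmount fs1 -a
  # Each load generator then accesses the same base directory, for example:
  #   /gpfs/fs1/<benchmark working directory>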

Other Notes

IBM, IBM Spectrum Scale, and IBM DeepFlash 150 are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.

Intel and Xeon are trademarks of the Intel Corporation in the U.S. and/or other countries.

Mellanox is a registered trademark of Mellanox Ltd.

Other Report Notes

None


Generated on Wed Mar 13 16:52:34 2019 by SpecReport