SPEC SFS®2014_vda Result

Copyright © 2016-2019 Standard Performance Evaluation Corporation

Cisco Systems Inc. SPEC SFS2014_vda = 1810 Streams
Cisco UCS S3260 with IBM Spectrum Scale 4.2.2 Overall Response Time = 24.95 msec


Performance

Business Metric (Streams) | Average Latency (msec) | Streams Ops/Sec | Streams MB/Sec
181 | 10.070 | 1810 | 833
362 | 10.980 | 3620 | 1676
543 | 12.250 | 5431 | 2497
724 | 14.490 | 7241 | 3342
905 | 16.900 | 9051 | 4169
1086 | 21.410 | 10861 | 5006
1267 | 23.790 | 12671 | 5836
1448 | 30.460 | 14481 | 6683
1629 | 43.190 | 16291 | 7510
1810 | 92.140 | 18101 | 8352
Performance Graph


Product and Test Information

Cisco UCS S3260 with IBM Spectrum Scale 4.2.2
Tested by | Cisco Systems Inc.
Hardware Available | November 2016
Software Available | January 2017
Date Tested | July 2017
License Number | 9019
Licensee Locations | San Jose, CA USA

Cisco UCS Integrated Infrastructure

Cisco Unified Computing System (UCS) is the first truly unified data center platform that combines industry-standard, x86-architecture servers with network and storage access into a single system. The system is intelligent infrastructure that is automatically configured through integrated, model-based management to simplify and accelerate deployment of all kinds of applications. The system's x86-architecture rack and blade servers are powered exclusively by Intel(R) Xeon(R) processors and enhanced with Cisco innovations. These innovations include built-in virtual interface cards (VICs), leading memory capacity, and the capability to abstract and automatically configure the server state. Cisco's enterprise-class servers deliver world-record performance to power mission-critical workloads. Cisco UCS is integrated with a standards-based, high-bandwidth, low-latency, virtualization-aware unified fabric, with a new generation of Cisco UCS fabric enabling 40 Gbps.

Cisco UCS S3260 Servers

The Cisco UCS S3260 Storage Server is a high-density modular storage server designed to deliver efficient, industry-leading storage for data-intensive workloads. The S3260 is a modular chassis with dual server nodes (two servers per chassis) and up to 60 large-form-factor (LFF) drives in a 4RU form factor.

IBM Spectrum Scale

Spectrum Scale provides unified file and object software-defined storage for high-performance, large-scale workloads. It delivers the protocols, services, and performance required by Technical Computing, Big Data, HDFS, and business-critical content repositories. IBM Spectrum Scale provides world-class storage management with extreme scalability, flash-accelerated performance, and automatic policy-based storage tiering from flash through disk to tape, reducing storage costs by up to 90% while improving security and management efficiency in big data and analytics environments.

Solution Under Test Bill of Materials

Item No | Qty | Type | Vendor | Model/Name | Description
1 | 5 | Server Chassis | Cisco | UCS S3260 Chassis | The Cisco UCS S3260 chassis can support up to two server nodes and fifty-six drives, or one server node and sixty drives, in a compact 4-rack-unit (4RU) form factor, with 4 x Cisco UCS 1050W AC power supplies.
2 | 10 | Server node, Spectrum Scale node | Cisco | UCS S3260 M4 Server Node | Cisco UCS S3260 M4 servers, each with: 2 x Intel Xeon processor E5-2680 v4 (28 cores per node), 256 GB of memory (16 x 16GB 2400MHz DIMMs), Cisco UCS C3000 RAID controller with 4GB RAID cache.
3 | 10 | System IO Controller with VIC 1300 | Cisco | S3260 SIOC | Cisco UCS S3260 SIOC with integrated Cisco UCS VIC 1300, one per server node.
4 | 140 | Storage HDD, 8TB NL-SAS 7200 RPM | Cisco | UCS HD8TB | 8TB 7200 RPM drives for storage, fourteen per server node. Note that a fully populated chassis with two server nodes can hold up to twenty-eight drives per server node.
5 | 1 | Blade Server Chassis | Cisco | UCS 5108 | The Cisco UCS 5108 Blade Server Chassis features flexible bay configurations for blade servers. It can support up to eight half-width blades, up to four full-width blades, or up to two full-width double-height blades in a compact 6-rack-unit (6RU) form factor.
6 | 8 | Blade Server, Spectrum Scale node (clients) | Cisco | UCS B200 M4 | UCS B200 M4 blade servers, each with: 2 x Intel Xeon processor E5-2660 v3 (20 cores per node), 256 GB of memory (16 x 16GB 2133MHz DIMMs).
7 | 2 | Fabric Extender | Cisco | UCS 2304 | Cisco UCS 2300 Series Fabric Extenders support up to four 40-Gbps unified fabric uplinks per fabric extender connecting to the Fabric Interconnects.
8 | 8 | Virtual Interface Card | Cisco | UCS VIC 1340 | The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) mezzanine adapter.
9 | 2 | Fabric Interconnect | Cisco | UCS 6332 | Cisco UCS 6300 Series Fabric Interconnects support line-rate, lossless 40 Gigabit Ethernet and FCoE connectivity.
10 | 1 | Cisco Nexus 40Gbps Switch | Cisco | Cisco Nexus 9332PQ | The Cisco Nexus 9332PQ Switch has 32 x 40 Gbps Quad Small Form Factor Pluggable Plus (QSFP+) ports. All ports are line rate, delivering 2.56 Tbps of throughput in a 1-rack-unit (1RU) form factor.

Configuration Diagrams

  1. Solution Under Test Diagram - Topological View
  2. Solution Under Test Diagram - Physical View

Component Software

Item No | Component | Type | Name and Version | Description
1 | Spectrum Scale Nodes | Spectrum Scale File System | 4.2.2 | The Spectrum Scale File System is a distributed file system that runs on the Cisco UCS S3260 servers to form a cluster. The cluster allows for the creation and management of single-namespace file systems.
2 | Spectrum Scale Nodes | Operating System | Red Hat Enterprise Linux 7.2 for x86_64 | The operating system on the Spectrum Scale nodes was 64-bit Red Hat Enterprise Linux version 7.2.

Hardware Configuration and Tuning - Physical

Spectrum Scale Nodes
Parameter Name | Value | Description
scaling_governor | performance | Sets the CPU frequency governor to performance
Intel Turbo Boost | Enabled | Enables the processor to run above its base operating frequency
Intel Hyper-Threading | Enabled | Enables multiple threads to run on each core, improving parallelization of computations
mtu | 9000 | Sets the Maximum Transmission Unit (MTU) to 9000 for improved throughput

Hardware Configuration and Tuning Notes

The main part of the hardware configuration was handled by Cisco UCS Manager (UCSM). It supports the creation of "Service Profiles", in which all the tuning parameters and their respective values are specified up front. These service profiles are then replicated across servers and applied during deployment.
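
Turbo Boost and Hyper-Threading are BIOS-level settings delivered through the service profiles, while the CPU governor and MTU are also visible at the operating-system level. As a hedged illustration only (not part of the submission), the OS-level equivalents could be applied on RHEL 7 as follows; the interface name ens1f0 is a placeholder:

  # Set the CPU frequency governor to "performance" on every core
  for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
      echo performance > "$gov"
  done

  # Set the MTU of the 40GbE interface to 9000 (placeholder interface name);
  # add MTU=9000 to the matching ifcfg file to persist it across reboots
  ip link set dev ens1f0 mtu 9000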

Software Configuration and Tuning - Physical

Spectrum Scale Nodes
Parameter Name | Value | Description
maxMBpS | 10000 | Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node
pagepool | 64G | Specifies the size of the cache on each node
maxblocksize | 2M | Specifies the maximum file system block size
maxFilesToCache | 11M | Specifies the number of inodes to cache for recently used files that have been closed
workerThreads | 1024 | Controls the maximum number of concurrent file operations at any one instant, as well as the degree of concurrency for flushing dirty data and metadata in the background and for prefetching data and metadata
pagepoolMaxPhysMemPct | 90 | Percentage of physical memory that can be assigned to the page pool

Software Configuration and Tuning Notes

The configuration parameters were set using the "mmchconfig" command on one of the nodes in the cluster. The nodes used mostly default tuning parameters. A discussion of Spectrum Scale tuning can be found in the official documentation for the mmchconfig command and on the IBM developerWorks wiki.
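
As an illustration only, the values in the table above map onto mmchconfig invocations along the following lines; this is a hedged sketch, not a record of the exact commands used during the test:

  # Cluster-wide tuning; most of these values take effect when the
  # GPFS daemon is restarted on the nodes
  mmchconfig maxMBpS=10000,pagepool=64G,maxFilesToCache=11M,workerThreads=1024,pagepoolMaxPhysMemPct=90

  # maxblocksize must be changed while GPFS is down on the affected nodes
  mmchconfig maxblocksize=2M

  # Display the resulting cluster configuration
  mmlsconfig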

Service SLA Notes

There were no opaque services in use.

Storage and Filesystems

Item No | Description | Data Protection | Stable Storage | Qty
1 | Two 480GB boot SSDs per server node, used to store the operating system of each Spectrum Scale node | RAID-1 | Yes | 20
2 | Fourteen 8TB Large Form Factor (LFF) HDDs per server node, used as Network Shared Disks (NSDs) for Spectrum Scale. The drives were configured in JBOD mode | None | Yes | 140

Number of Filesystems | 1
Total Capacity | 1019 TiB
Filesystem Type | Spectrum Scale File System

Filesystem Creation Notes

A single Spectrum Scale file system was created with: 2 MiB block size for data and metadata, 4 KiB inode size. The file system was spread across all of the Network Shared Disks (NSDs). Each client node mounted the file system. The file system parameters reflect values that might be used in a typical streaming environment. On each node, the operating system was hosted on the xfs filesystem.
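
For illustration, a file system with these parameters (2 MiB blocks, 4 KiB inodes, spread across all NSDs, mounted on every node) could be created roughly as follows; the device name, stanza file listing the NSDs, and mount point are placeholders rather than the values used in the test:

  # Create the file system over all NSDs in the stanza file, with a
  # 2 MiB block size, 4 KiB inodes, and automatic mount at startup
  mmcrfs gpfs_fs0 -F nsd_stanzas.txt -B 2M -i 4096 -T /gpfs/fs0 -A yes

  # Mount the file system on all nodes in the cluster
  mmmount gpfs_fs0 -a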

Storage and Filesystem Notes

Each UCS S3260 server node in the cluster was populated with fourteen 8TB Large Form Factor (LFF) HDDs. All drives were configured in JBOD mode. Each UCS S3260 chassis formed one failure group (FG) in the Spectrum Scale file system.

The cluster used a single-tier architecture. The Spectrum Scale nodes performed both file and block level operations. Each node had access to all of the NSDs, so any file operation on a node was translated to a block operation and serviced by the NSD server.
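
As a hedged sketch of how this layout is usually expressed, each JBOD drive becomes an NSD defined in a stanza file and created with mmcrnsd; the device paths, NSD names, server names, and numbering below are placeholders, with the failureGroup value assigned per chassis as described above:

  # nsd_stanzas.txt (placeholder names), one stanza per drive;
  # both server nodes of a chassis share the same failureGroup so
  # that each S3260 chassis forms one failure group
  %nsd: device=/dev/sdb nsd=nsd001 servers=s3260-node01 usage=dataAndMetadata failureGroup=1 pool=system
  %nsd: device=/dev/sdc nsd=nsd002 servers=s3260-node01 usage=dataAndMetadata failureGroup=1 pool=system
  %nsd: device=/dev/sdb nsd=nsd015 servers=s3260-node02 usage=dataAndMetadata failureGroup=1 pool=system

  # Create the NSDs from the stanza file
  mmcrnsd -F nsd_stanzas.txt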

Other Notes (about BOOT SSDs):

Per server node: 2 x 480GB physical drives; data protection: RAID-1; usable capacity: 480GB

Transport Configuration - Physical

Item No | Transport Type | Number of Ports Used | Notes
1 | 40GbE Network | 44 | Each S3260 server node connects to each Fabric Interconnect over a 40Gb link; thus there are ten 40Gb links to each Fabric Interconnect (configured in active-standby mode). The Cisco UCS blade chassis connects to each Fabric Interconnect with four 40Gb links, with MTU=9000.

Transport Configuration Notes

The two Cisco UCS 6332 fabric interconnects function in HA mode (active-standby) as 40 Gbps Ethernet switches.

Cisco UCS S3260 Server nodes (NSD Servers): Each Cisco UCS S3260 Chassis has two server nodes. Each S3260 server node has an S3260 SIOC with an integrated VIC 1300. This provides 40G connectivity for each server node to each Fabric Interconnect (configured as active-standby).

Cisco UCS B200 M4 Blade servers (NSD Clients): Each of the Cisco UCS B200 M4 blade servers comes with a Cisco UCS Virtual Interface Card 1340. The two-port card supports 40 GbE and FCoE. Physically, the card connects to the UCS 2304 fabric extenders via internal chassis connections, and the eight total ports from the fabric extenders connect to the UCS 6332 fabric interconnects. The 40G links on the B200 M4 blades were bonded in the operating system to provide enhanced throughput for the NSD clients (traffic across the Fabric Interconnects passed through the Cisco Nexus 9332PQ switch).
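
The bonding mode used on the blades is not stated in the report. As an illustration only, a RHEL 7 bond over the two 40G ports could be defined with ifcfg files along these lines; the interface names, bond name, and addressing are placeholders:

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (placeholder values)
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=active-backup miimon=100"
  BOOTPROTO=static
  IPADDR=192.168.10.21
  NETMASK=255.255.255.0
  MTU=9000
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-ens1f0 (one such file per member port)
  DEVICE=ens1f0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  MTU=9000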

Detailed Description of the ports used:

2 x Cisco UCS 6332 in active/standby configuration.

Total ports used on each 6332 = (10 x S3260 server nodes) + (4 x blade chassis) + (4 x uplinks) = 18 ports per 6332.

On the Nexus 9332 (upstream switch), 4 ports connect to each 6332, so total used ports = 8.

Overall, total used ports = (18 x 2) + 8 = 44.

Switches - Physical

Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes
1 | Cisco UCS 6332 #1 | 40 GbE | 32 | 18 | The Cisco UCS 6332 Fabric Interconnect forms the management and communication backbone for the servers.
2 | Cisco UCS 6332 #2 | 40 GbE | 32 | 18 | The Cisco UCS 6332 Fabric Interconnect forms the management and communication backbone for the servers.
3 | Cisco Nexus 9332 | 40 GbE | 32 | 8 | Cisco Nexus 9332PQ used as an upstream switch.

Processing Elements - Physical

Item No | Qty | Type | Location | Description | Processing Function
1 | 20 | CPU | Spectrum Scale server nodes | Intel Xeon CPU E5-2680 v4 @ 2.40GHz, 14-core | Spectrum Scale nodes (server nodes)
2 | 16 | CPU | Spectrum Scale client nodes | Intel Xeon CPU E5-2660 v3 @ 2.60GHz, 10-core | Spectrum Scale client nodes, load generators

Processing Element Notes

Each Spectrum Scale node in the system (client and server) had two physical processors.

Each processor had multiple cores, as listed in the table above.

Memory - Physical

Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB
System memory in each Spectrum Scale node | 256 | 18 | V | 4608
Grand Total Memory Gibibytes | 4608

Memory Notes

Spectrum Scale reserves a portion of the physical memory in each node for file data and metadata caching. A portion of the memory is also reserved for buffers used for node to node communication.

Stable Storage

The two fabric interconnects are configured in active-standby mode, providing complete high availability (HA) for the entire cluster and ensuring continued stability and availability in case of link failures. The storage consists of 8TB Large Form Factor (LFF) HDDs, fourteen per S3260 server node, with IBM Spectrum Scale providing the file system over the underlying storage. Stable writes and commit operations in Spectrum Scale are not acknowledged until the NSD server receives an acknowledgment of write completion from the underlying storage system.

Solution Under Test Configuration Notes

The solution under test was a Cisco UCS S3260 with IBM Spectrum Scale cluster, a solution well suited for streaming environments. The NSD server nodes were S3260 servers, and UCS B200 M4 blade servers (fully populated in the blade server chassis) were used as load generators for the benchmark. Each node was connected over a 40Gb link to the two fabric interconnects (configured in HA mode).

Other Solution Notes

The WARMUP_TIME for the benchmark was 600 seconds.
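
WARMUP_TIME is set in the SPEC SFS2014 configuration file (sfs_rc). A hedged sketch of the relevant entries, with the load scaling taken from the results table above and all paths and host names as placeholders, might look like this:

  # SFS2014 VDA workload; ten load points from 181 to 1810 streams
  BENCHMARK=VDA
  LOAD=181
  INCR_LOAD=181
  NUM_RUNS=10
  # Warm-up in seconds, as noted above
  WARMUP_TIME=600
  # One entry per load-generator mount point (placeholder host names)
  CLIENT_MOUNTPOINTS=client01:/gpfs/fs0 client02:/gpfs/fs0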

Dataflow

The five Cisco UCS S3260 chassis, each with two server nodes, provided the storage (NSD servers). Each server node was populated with fourteen 8TB LFF HDDs. The eight Cisco UCS B200 M4 blades were the load generators for the benchmark (client nodes). Each load generator had access to the single-namespace Spectrum Scale file system, and the benchmark accessed a single mount point on each load generator. Data requests to and from disk were serviced by the Spectrum Scale server nodes. All nodes were connected with 40Gb links across the cluster.

Other Notes

Cisco UCS is a trademark of Cisco Systems Inc. in the USA and/or other countries.

IBM and IBM Spectrum Scale are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.

Intel and Xeon are trademarks of the Intel Corporation in the U.S. and/or other countries.

Other Report Notes

None


Generated on Wed Mar 13 16:48:49 2019 by SpecReport
Copyright © 2016-2019 Standard Performance Evaluation Corporation