SPECsfs97_R1.v3 Result

===============================================================================
BlueArc Corporation : Titan 3210 Cluster
SPECsfs97_R1.v3 = 359416 Ops/Sec   (Overall Response Time = 2.16 msec)
===============================================================================

   Throughput     Response
   (ops/sec)      (msec)
      35715         0.5
      71608         0.8
     107518         1.2
     143980         1.5
     179731         1.9
     216111         2.4
     252394         2.9
     288796         3.5
     324745         4.1
     359416         5.8

===============================================================================

Server Configuration and Availability
   Vendor                       BlueArc Corporation
   Hardware Available           March 2008
   Software Available           May 2008
   Date Tested                  April 2008
   SFS License Number           000063
   Licensee Location            San Jose, CA

CPU, Memory and Power
   Model Name                   Titan 3210 Cluster
   Processor                    AMD Opteron 248, 2.2 GHz + FPGAs
   # of Processors              2 cores, 2 chips, 1 core/chip + 26 FPGAs
   Primary Cache                128 KB (I+D) on chip (per chip)
   Secondary Cache              1 MB (I+D) on chip (per chip)
   Other Cache                  32 GB
   UPS                          N/A
   Other Hardware               N/A
   Memory Size                  134 GB (incl. other cache size, NVRAM size and
                                RAID controller cache)
   NVRAM Size                   8 GB
   NVRAM Type                   DIMM
   NVRAM Description            72-hour battery backed

Server Software
   OS Name and Version          SU 5.2
   Other Software               N/A
   File System                  BlueArc Silicon File System with Cluster Name
                                Space (CNS)
   NFS Version                  3

Server Tuning
   Buffer Cache Size            N/A
   # NFS Processes              N/A
   Fileset Size                 3432.9 GB

Network Subsystem
   Network Type                 Integrated
   Network Controller Desc.     2-port 10 Gbps Ethernet (only one port was
                                used per node)
   Number Networks              1 (N0)
   Number Network Controllers   2
   Protocol Type                TCP
   Switch Type                  1 Force10 S2410 10GigE
   Bridge Type                  N/A
   Hub Type                     N/A
   Other Network Hardware       N/A

Disk Subsystem and Filesystems
   Number Disk Controllers      2
   Number of Disks              640
   Number of Filesystems        1 (F1)
   File System Creation Ops     4 KB block size
   File System Config           20 individual file system volumes aggregated
                                using CNS to present a single, unified
                                namespace
   Disk Controller              Integrated eight-port 4 Gbps FC
   # of Controller Type         2
   Number of Disks              320 (per controller)
   Disk Type                    ST373455FC
   File Systems on Disks        F1
   Special Config Notes         see notes

Load Generator (LG) Configuration
   Number of Load Generators    12
   Number of Processes per LG   160
   Biod Max Read Setting        2
   Biod Max Write Setting       2
   LG Type                      LG0
   LG Model                     White Box, Tyan S2915 motherboard
   Number and Type Processors   Dual Opteron 2218 dual-core, 2.6 GHz
   Memory Size                  8 GB
   Operating System             Solaris 10 u4
   Compiler                     SFS97_R1 precompiled binaries
   Compiler Options             N/A
   Network Type                 Myricom 10GigE

Testbed Configuration
   LG #   LG Type   Network   Target File Systems                      Notes
   ----   -------   -------   -------------------                      -----
   1-12   LG0       N0        /r/f1, /r/f2, /r/f3, /r/f4, /r/f5,
                              /r/f6, /r/f7, /r/f8, /r/f9, /r/f10,
                              /r/f11, /r/f12, /r/f13, /r/f14, /r/f15,
                              /r/f16, /r/f17, /r/f18, /r/f19, /r/f20
                              (this 20-entry list is repeated 8 times
                              in the submission, giving one target for
                              each of the 160 processes per LG)
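
For illustration only, the following short Python sketch (not part of the SPEC
submission; the function and variable names are hypothetical) reproduces the
per-process target assignment shown in the Testbed Configuration table above:
the 20 CNS file system paths are cycled in sequence so that each of the 160
processes on a load generator receives one target.

    # Hypothetical sketch: assign targets to the 160 processes of one LG by
    # cycling through the 20 CNS file system paths /r/f1 .. /r/f20.
    FILE_SYSTEMS = ["/r/f%d" % i for i in range(1, 21)]
    PROCS_PER_LG = 160

    def targets_for_one_lg():
        # 160 processes / 20 file systems = 8 full passes over the list
        return [FILE_SYSTEMS[p % len(FILE_SYSTEMS)] for p in range(PROCS_PER_LG)]

    print(targets_for_one_lg()[:3])   # ['/r/f1', '/r/f2', '/r/f3']
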
===============================================================================

Notes and Tuning

<> The tested system was a cluster of two (2) BlueArc Titan 3210 servers
   connected via a Fibre Channel fabric to ten (10) storage arrays. Each array
   consisted of 1 BlueArc RC16TB (LSI 3992) dual RAID controller with 64 FC
   drives. Each dual RAID controller set has 2 GB of memory, for a total of
   20 GB of cache memory across all storage controllers. RAID controller cache
   memory is included in the 134 GB memory size listed above.
<> The Titan servers had all standard protection services enabled, including
   RAID, NVRAM logging, and media error scrubbing.
<> Each Titan 3210 uses 13 Field Programmable Gate Arrays (FPGAs) to
   accelerate processing of network traffic and file system I/O.
<> Disk drives used were 73 GB, 15,000 RPM, 4 Gb/s FC Seagate ST373455FC
   drives.
<> Disk and file system configuration was 32 "1+1" RAID-1 LUs per RAID
   controller pair. Each RAID controller pair represented one Storage Pool
   created by striping across the 32 LUs (striping parameters are fixed and
   not under user control). Two file systems were created within each Storage
   Pool. The twenty (20) file systems were aggregated into a single namespace
   "/r" using BlueArc's Cluster Name Space (CNS) feature.
<> The storage arrays were connected to the Titan servers using redundant
   Brocade 200E FC switches with dual redundant connections to each array.
<> The Titan 3210 server cluster was connected to the 12 load generators via
   10GigE (end to end) through a single Force10 S2410 switch.
<> For Uniform Access Rule compliance, all LGs accessed all cluster namespace
   objects uniformly across all interfaces as follows:
<>  - There is 1 network node (i.e., the Titan 3210 server cluster): T0
<>  - There are 20 physical target file systems (/r/f1 through /r/f20)
      presented as a single cluster name space (F1) with virtual root "/r"
      accessible to all clients.
<>  - Each Titan 3210 had a single Virtual Server configured, and each Virtual
      Server owned ten (10) of the twenty (20) total file systems in the
      cluster.
<>  - Each Load Generator (1-12) mounted each file system target (/r/f*) and
      cycled through the target file systems /r/f1, /r/f2, /r/f3, etc. in
      sequence (see the sketch following the Testbed Configuration table
      above).
<> Each Titan 3210 contains four modules that perform all of the storage
   operations, as follows: NIM3 = Network Interface Module (TCP/IP, UDP
   handling); FSX1 and FSB3 = File System Modules (NFS and CIFS protocol
   handling, plus the cluster interconnect on FSB3); and SIM3 = Storage
   Interface Module (disk controller / FC interface).
<> Each Titan 3210 has 57 gigabytes (GB) of memory, cache and NVRAM
   distributed within the Titan modules as follows:
<>  - NIM3 - 3.5 GB memory per Titan
<>  - FSX1 - 2.0 GB memory per Titan
<>  - FSB3 - 30.5 GB memory per Titan, of which 4.0 GB is NVRAM and 24.0 GB is
      file system metadata cache. The remaining 2.5 GB is used for buffering
      data moving to/from the disk drives and/or network.
<>  - SIM3 - 21 GB memory per Titan, of which 16.0 GB is "sector" cache used
      for the interface with the RAID controllers and disk subsystem. This is
      the "Other Cache" size noted above.
<> To meet the "stable storage" requirement, the Titan server writes first to
   battery-backed (72 hours) NVRAM internal to the Titan. Data from NVRAM is
   then written to the drive arrays as convenient, but always within a few
   seconds of arrival in NVRAM.
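
As a generic illustration of the stable-storage pattern described in the note
above (not BlueArc's implementation; the flush interval and all names below
are assumptions for the sketch), a write is acknowledged once it is logged to
battery-backed NVRAM, and a background task destages the logged data to the
drive arrays shortly afterwards:

    # Hypothetical sketch of the write path described above: acknowledge after
    # the NVRAM commit, destage to disk asynchronously within a few seconds.
    import queue, threading, time

    FLUSH_INTERVAL_S = 2.0           # assumed "few seconds" destage interval
    nvram_log = queue.Queue()        # stands in for the battery-backed NVRAM

    def nfs_write(data):
        nvram_log.put(data)          # commit to NVRAM first ...
        return "ACK"                 # ... then acknowledge the client

    def destage_loop(write_to_disk):
        while True:
            time.sleep(FLUSH_INTERVAL_S)
            while not nvram_log.empty():
                write_to_disk(nvram_log.get())   # drain NVRAM to the arrays

    threading.Thread(target=destage_loop, args=(print,), daemon=True).start()
    print(nfs_write("data block"))     # "ACK" as soon as NVRAM holds the data
    time.sleep(FLUSH_INTERVAL_S + 1)   # give the destage loop time to drain
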
<> Server tuning:
<>  - Disable file read-ahead: "read ahead -- disable"
<>  - Disable shortname generation for CIFS clients: "shortname -g off"
<>  - Server running in "Native Unix" security mode: "security-mode set unix"
<>  - Set metadata cache bias to small files: "cache-bias --small-files"
<>  - Accessed-time management was turned off: "fs-accessed-time set off"
<>  - Jumbo frames were enabled.

===============================================================================
Generated on Wed Jun 18 11:48:02 EDT 2008 by SPEC SFS97 ASCII Formatter
Copyright (c) 1997-2008 Standard Performance Evaluation Corporation