|Spinnaker Networks :||SpinServer 4100 (5-node Scalable Cluster)|
|SPECsfs97_R1.v3 =||134385 Ops/Sec (Overall Response Time = 1.90 msec)|
|Server Configuration and Availability|
|Vendor ||Spinnaker Networks |
|Hardware Available ||June 2003|
|Software Available||June 2003 |
|Date Tested||June 2003 |
|SFS License Number||84 |
|Licensee Locations||Pittsburgh, PA |
|CPU, Memory and Power|
|Model Name ||SpinServer 4100 (5-node Scalable Cluster) |
|Processor ||2.8 GHz Intel Xeon |
|# of Processors ||10 (2 per node) |
|Primary Cache ||12 Kµop trace cache (I) + 8 KB D on-chip |
|Secondary Cache ||512 KB on-chip |
|Other Cache ||N/A |
|UPS ||APC Smart-UPS 1000 (1 per node)|
|Other Hardware ||none |
| Memory Size ||20 GB (4 GB per node) |
|NVRAM Size ||8.5 GB (1.7 GB per node) |
|NVRAM Type ||UPS-backed write cache and mirrored local SCSI drives |
|NVRAM Description||see notes |
|OS Name and Version||SpinFS 2.1|
|Other Software ||none |
|File System ||SpinFS |
|NFS version ||3 |
|Buffer Cache Size ||default|
|# NFS Processes ||N/A|
|Fileset Size ||1276.5 GB|
|Network Type ||Jumbo Frame Gigabit Ethernet |
|Network Controller Desc. ||1316-0014-01 (PCI Adapter)|
|Number Networks ||2 (N1-Client, N2-Cluster) |
|Number Network Controllers||15 (1 client and 2 cluster controllers per node) |
|Protocol Type ||TCP |
|Switch Type ||Extreme Summit 7i|
|Bridge Type ||N/A |
|Hub Type ||N/A |
|Other Network Hardware ||N/A |
|Disk Subsystem and Filesystems|
|Number Disk Controllers ||5 (1 per node) |
|Number of Disks ||240 (48 per node) |
|Number of Filesystems ||1 namespace (see notes) |
|File System Creation Ops||default|
|File System Config ||45 RAID-Groups of 5 disks each |
|Disk Controller ||1316-0015-01 (2Gb FC-AL Adapter) |
|# of Controller Type ||5 (dual-ported) |
|Number of Disks ||240 |
|Disk Type ||1320-0008-01 (146GB, 10K RPM) |
|File Systems on Disks ||F1 |
|Special Config Notes ||see notes|
|Load Generator (LG) Configuration|
|Number of Load Generators ||15 |
|Number of Processes per LG||45 |
|Biod Max Read Setting ||2 |
|Biod Max Write Setting ||2 |
|LG Type ||LG1 |
|LG Model ||Dell PowerEdge 1550 |
|Number and Type Processors||2x1.0-GHz Pentium III |
|Memory Size ||1024 MB |
|Operating System ||Linux 2.4.9-31smp (Red Hat 7.2) |
|Compiler ||gcc 2.96 |
|Compiler Options ||-O -DNO_T_TYPES -DUSE_INTTYPES |
|Network Type ||3COM 3C996-T GigE, MTU=9000|
|LG #||LG Type||Network||Target File Systems||Notes|
- SpinFS allows up to 512 SpinServers with automatic failover to work together in a single namespace.
- The system under test was a 5-node SpinServer 4100 cluster with fifteen 16-disk SpinStor 200 Arrays (5 RAID + 10 JBOD).
- Both FC ports of each SpinServer 4100 were connected to the disk subsystem for performance and availability.
- Each RAID array had two JBOD expansion arrays.
- The single namespace, F1, comprised 45 RAID-5 sets of 5 disks each. Fifteen additional disks were configured as hot spares.
- Each RAID set had a strip size of 32 KB.
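The disk accounting above can be checked with a little arithmetic. A minimal sketch (all figures taken from this report):

```python
# Sketch: reconcile the per-node disk counts with the RAID layout
# described in the notes above (figures taken from this report).
nodes = 5
disks_per_node = 48
raid_groups = 45
disks_per_group = 5
hot_spares = 15

total_disks = nodes * disks_per_node            # 240 disks in the cluster
raid_disks = raid_groups * disks_per_group      # 225 disks in RAID-5 sets
print(raid_disks + hot_spares == total_disks)   # True
```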
- Server Tunings:
- All network ports were set to use jumbo frames (MTU=9000).
- All scheduled SpinShot jobs were disabled for reproducibility.
- Cluster Details:
- Each server had access to the client network (N1) for communication with all Load Generators.
- Each server had access to the cluster network (N2) for communication with the other servers.
- One Storage Pool was created behind each server. One VFS was created per Storage Pool.
- Each VFS was mapped to a subdirectory of the global namespace under root (/vfs1, /vfs2, /vfs3, /vfs4, /vfs5).
- For Uniform Access Rule (UAR) compliance, the client processes uniformly mounted 5 different VFS objects from the single namespace.
- server1:/vfs1, server1:/vfs2, server1:/vfs3, server1:/vfs4, server1:/vfs5
- server2:/vfs1, server2:/vfs2, server2:/vfs3, server2:/vfs4, server2:/vfs5
- server3:/vfs1, server3:/vfs2, server3:/vfs3, server3:/vfs4, server3:/vfs5
- server4:/vfs1, server4:/vfs2, server4:/vfs3, server4:/vfs4, server4:/vfs5
- server5:/vfs1, server5:/vfs2, server5:/vfs3, server5:/vfs4, server5:/vfs5
- This mounting pattern ensured that 1/5 of the processes accessed data local to the server through which they mounted.
- It also ensured that the remaining 4/5 of the processes crossed the cluster to access data spread uniformly behind the other servers.
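The locality split implied by this mount pattern can be sketched as follows (assuming, per the notes above, that each /vfsN is backed by the Storage Pool behind serverN):

```python
# Sketch: verify the 1/5-local / 4/5-remote split implied by the mount
# pattern above. Assumes /vfsN's Storage Pool sits behind serverN,
# as described in the cluster-details notes.
servers = [f"server{i}" for i in range(1, 6)]
vfss = [f"/vfs{i}" for i in range(1, 6)]

# Client processes are spread uniformly across all 25 server:/vfs targets.
targets = [(s, v) for s in servers for v in vfss]

# A mount is "local" when the VFS lives behind the server it was
# mounted through (matching server and VFS index).
local = [(s, v) for (s, v) in targets if s[-1] == v[-1]]

print(len(local) / len(targets))   # fraction of local accesses: 0.2
```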
- Each SpinStor 200 RAID Array utilized dual RAID Controllers, each with 256 MB of battery-backed RAM.
- The RAID Controller was configured with a write-back cache that switches to write-through mode when its battery's remaining charge falls below 72 hours of hold-up time.
- The SpinServer 4100 has 1.7 GB of UPS-backed cache that will survive a power failure or an operating system crash. In the event of a power failure or low-battery condition, the cache is written to mirrored local SCSI disks, and recovery software restores the cache when power returns.
- This RAM is not used by the operating system as general-purpose memory; it serves as NVRAM dedicated to caching disk reads and writes. The UPS is guaranteed to retain sufficient energy after a battery-low condition to flush the entire 1.7 GB cache contents to the local drives.
- Spinnaker Networks, SpinServer, SpinStor, and SpinFS are trademarks of Spinnaker Networks in the United States.
- All other trademarks belong to their respective owners and should be treated as such.
Generated on Wed Jul 23 11:40:09 2003 by SPEC SFS97 HTML Formatter
Copyright © 1997-2002 Standard Performance Evaluation Corporation
First published at SPEC.org on 24-Jun-2003