Network Appliance, Inc. : FAS3070A
SPECsfs97_R1.v3 = 85615 Ops/Sec (Overall Response Time = 1.16 msec)
Server Configuration and Availability
| Vendor | Network Appliance, Inc. |
| Hardware Available | November 2006 |
| Software Available | November 2006 |
| Date Tested | October 2006 |
| SFS License Number | 33 |
| Licensee Locations | Sunnyvale, CA |
CPU, Memory and Power
| Model Name | FAS3070A |
| Processor | 1.8-GHz AMD Opteron(tm) 265 |
| # of Processors | 8 cores, 4 chips, 2 cores/chip |
| Primary Cache | 64 KB I + 64 KB D on chip |
| Secondary Cache | 1 MB (I+D) on chip |
| Other Cache | N/A |
| Other Hardware | X3147-R5 NVRAM/cluster interconnect adapter (see notes) |
| Memory Size | 16 GB (8 GB per node) |
| NVRAM Size | 1 GB (512 MB per node) |
| NVRAM Type | DIMMs on PCI cards |
| NVRAM Description | minimum 3-day battery-backed shelf life |
Server Software
| OS Name and Version | Data ONTAP 7.2.1 |
| Other Software | Cluster Option |
| File System | WAFL |
| NFS Version | 3 |
| Buffer Cache Size | default |
| # NFS Processes | N/A |
| Fileset Size | 812.3 GB |
Server Network
| Network Type | Jumbo Frame Gigabit Ethernet |
| Network Controller Desc. | integrated 10/100/1000 Ethernet controller |
| Number Networks | 2 (N1, N2) |
| Number Network Controllers | 4 (2 per node) |
| Protocol Type | TCP |
| Switch Type | Cisco 6509 (N1, N2) |
| Bridge Type | N/A |
| Hub Type | N/A |
| Other Network Hardware | N/A |
Disk Subsystem and Filesystems
| Number Disk Controllers | 4 (2 per node) |
| Number of Disks | 224 |
| Number of Filesystems | 2 |
| File System Creation Ops | default |
| File System Config | 7 RAID-DP (Double Parity) groups of 16 disks each |
| Disk Controller | integrated dual-channel QLogic ISP-2432 FC controller (4 MB RAM standard) |
| # of Controller Type | 4 (2 per node) |
| Number of Disks | 56/56/56/56 |
| Disk Type | X273B 72 GB 15K RPM FC-AL |
| File Systems on Disks | F1/F1/F2/F2 |
| Special Config Notes | see notes |
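The disk figures above and in the configuration notes are internally consistent; the following sketch only re-derives the stated counts (using the 14 data + 2 parity split per RAID-DP group given in the notes), it is not part of the disclosure:

```python
# Re-derive the disk-subsystem arithmetic from the figures in this report.
nodes = 2
groups_per_aggregate = 7        # "7 RAID-DP groups of 16 disks each"
disks_per_group = 16
data_disks_per_group = 14       # 14 data + 2 parity per RAID-DP group
fc_ports_per_node = 4           # two dual-channel controllers per node
disks_per_port = 28             # "each controlled 28 disks"

total_disks = nodes * groups_per_aggregate * disks_per_group
assert total_disks == 224       # matches "Number of Disks: 224"

# Each node's four FC-AL ports together drive that node's half of the disks.
assert fc_ports_per_node * disks_per_port == total_disks // nodes

data_disks = nodes * groups_per_aggregate * data_disks_per_group
print(total_disks, data_disks)  # 224 disks in total, 196 of them data disks
```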
Load Generator (LG) Configuration
| Number of Load Generators | 20 |
| Number of Processes per LG | 14 |
| Biod Max Read Setting | 2 |
| Biod Max Write Setting | 2 |
| LG Type | LG1 |
| LG Model | Supermicro SuperServer 6014H-i2 |
| Number and Type Processors | 2 x 3.4-GHz Intel Xeon |
| Memory Size | 2048 MB |
| Operating System | Red Hat Enterprise Linux AS release 3 (2.4.21-40.ELsmp) |
| Compiler | cc, used SFS97_R1 precompiled binaries |
| Compiler Options | N/A |
| Network Type | Integrated Dual Port Intel 82546GB Gigabit Ethernet |
Testbed Configuration
| LG # | LG Type | Network | Target File Systems | Notes |
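The load-generator table implies a total offered load of 20 clients times 14 SFS processes each; the sketch below simply relates that to the reported throughput (derived values only, not figures from the disclosure):

```python
# Relate the load-generator configuration to the reported result.
load_generators = 20            # "Number of Load Generators: 20"
processes_per_lg = 14           # "Number of Processes per LG: 14"
reported_ops_per_sec = 85615    # overall SPECsfs97_R1.v3 result

total_processes = load_generators * processes_per_lg
ops_per_process = reported_ops_per_sec / total_processes

print(total_processes)              # 280 SFS processes in total
print(round(ops_per_process, 1))    # ~305.8 ops/sec per process on average
```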
- NetApp's embedded operating system processes NFS requests from the network layer without any NFS daemons, and uses non-volatile memory to improve performance.
- All standard data protection features, including background RAID and media error scrubbing, software validated RAID checksumming, and double disk failure protection via double parity RAID (RAID-DP) were enabled during the test.
- The tested system was an active-active failover cluster composed of two nodes joined by a cluster interconnect (integrated on the NVRAM card).
- The cluster option was licensed and enabled.
- Each node had 2 CPUs (4 cores, 2 chips, 2 cores/chip).
- Each disk controller had two 4Gbit/s capable FC-AL ports, each connected to an ESH (Electronically Switched Hub) loop (running in 2Gbit/s mode). All four FC-AL ports on each node were active during the test and each controlled 28 disks.
- Each node was the owner of a single disk pool or "aggregate".
- Each aggregate was composed of seven RAID-DP groups, each RAID-DP group was composed of 14 data disks and 2 parity disks.
- Within each aggregate, a flexible volume (utilizing Data ONTAP FlexVol(TM) technology) was created to hold the SFS filesystem for that node.
- The F1 filesystem was striped across the disks in the aggregate owned by the first node, using a variable size striping mechanism. The F2 filesystem was striped across the disks in the aggregate owned by the second node, using the same variable size striping mechanism.
- Each node was the owner of one filesystem, but the disks in each aggregate were dual-attached so that, in the event of a fault, they could be controlled by the other node via an alternate loop.
- A separate flexible volume residing on each node's aggregate held the Data ONTAP operating system and system files.
- Half of the processes on each client accessed their target volumes through the N1 network interface of the target filer, the other half accessed their target volumes through the N2 interface of the target filer.
- All network ports were set to use jumbo frames (MTU=9000).
- Server tunings:
  - vol options vol1 no_atime_on 1 (disables access time updates)
- NetApp is a registered trademark and "Data ONTAP", "Network Appliance", "FlexVol", and "WAFL" are trademarks of Network Appliance, Inc. in the United States and other countries.
- All other trademarks belong to their respective owners and should be treated as such.
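The notes state that half of each client's processes reached their target filer through its N1 interface and half through N2. A toy sketch of that split (the function name and structure are illustrative, not part of the SPEC SFS harness):

```python
# Illustrative model of the client-to-interface mapping described in the
# notes: each load generator runs 14 processes, the first half bound to
# the target filer's N1 interface and the second half to N2. This helper
# is hypothetical; it is not SPEC tooling.

def interface_for(process_index: int, processes_per_lg: int = 14) -> str:
    """Return 'N1' for the first half of a client's processes, 'N2' otherwise."""
    return "N1" if process_index < processes_per_lg // 2 else "N2"

# One client's 14 processes split evenly across the two networks.
assignment = [interface_for(i) for i in range(14)]
print(assignment.count("N1"), assignment.count("N2"))  # 7 7
```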
Generated on Thu Dec 21 12:43:41 2006 by SPEC SFS97 HTML Formatter
Copyright © 1997-2004 Standard Performance Evaluation Corporation
First published at SPEC.org on 08-Nov-2006