EMC Corporation : EMC VNX VG8 Gateway/EMC VNX5700, 5 X-Blades (including 1 stdby)
SPECsfs2008_nfs.v3 = 497623 Ops/Sec (Overall Response Time = 0.96 msec)
Tested By | EMC Corporation |
---|---|
Product Name | EMC VNX VG8 Gateway/EMC VNX5700, 5 X-Blades (including 1 stdby) |
Hardware Available | February 2011 |
Software Available | February 2011 |
Date Tested | February 2011 |
SFS License Number | 47 |
Licensee Locations | Hopkinton, MA |
The EMC VNX VG8 Gateway delivers NAS simplicity and efficiency to EMC Symmetrix, EMC VNX, and EMC CLARiiON block arrays while providing new heights of performance, availability, scalability, and flexibility. The configuration tested here consists of an EMC VNX VG8 with four active X-Blades and one standby X-Blade connected to four EMC VNX5700 block arrays.
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 1 | Enclosure | EMC | VG8-DME0 | VG8 Gateway X-Blade Enclosure with Dual X-Blades included |
2 | 2 | Enclosure | EMC | VG8-DMEX | VG8 Gateway Empty X-Blade Enclosure |
3 | 3 | X-Blade | EMC | VG8-DM | VG8 Gateway Single X-Blade |
4 | 10 | SLIC | EMC | NS-MXG20-A | 10GbE Optical Dual Port |
5 | 5 | SLIC | EMC | NS-M8GF-A | 4 FC 8Gbit Ports |
6 | 1 | Software | EMC | UNIF-VG8 | Unisphere for File on VG8 |
7 | 4 | Software | EMC | VG8-BASE-L | VG8 Base File License (includes CIFS and FTP) |
8 | 4 | Software | EMC | VG8-ADV-L | VG8 Adv File License (includes NFS, MPFS, and pNFS) |
9 | 1 | Control Station | EMC | NS-CSB | Gateway Control Station Series B |
10 | 1 | Control Station | EMC | NS-CSB2 | Gateway Control Station Series B - Secondary (standby) |
11 | 4 | Enclosure | EMC | VNX5700SPE | VNX5700 SPE 4x6Gbps SAS BE - EMC rack |
12 | 4 | Enclosure | EMC | VNXSPS1KW | VNX57/75 1.2KW SPS 15/25 drv vault DAE - EMC rack |
13 | 4 | Enclosure | EMC | VNX6GSDAE15P | VNX57/75 15 x 3.5" 6Gbps SAS Primary DAE - EMC rack |
14 | 32 | Enclosure | EMC | VNX6GSDAE15 | VNX 15x3.5" 6Gbps SAS expansion DAE - EMC rack |
15 | 4 | Disk | EMC | V-VX-VS1530 | 3.5" 300GB SAS 15K vault pack for 6GSDAE (4 drives) |
16 | 2 | Disk | EMC | VX-VS15-300 | 3.5" 15K 300GB 520BPS 6GB SAS |
17 | 436 | Disk | EMC | VX-VS6F-200 | 3.5" 200GB 6Gbps SAS flash drive |
18 | 4 | SLIC | EMC | VSPM8GFFEA | VNX 4 port 8G FC IO Module Pair |
19 | 4 | Software | EMC | UNIB-V57 | Unisphere for Block for an EMC VNX5700 |
20 | 4 | Software | EMC | VNXOE-57 | VNX OE R31 License Model for EMC VNX5700 Block |
21 | 1 | Switch | EMC | DS-5300B-8G | DS-5300B 48/80 ports 8Gbps Fibre Channel base switch |
22 | 2 | Switch | EMC | DS5300B8G16PU | DS-5300B 8Gbps Fibre Channel 16 port upgrade kit |
23 | 1 | Switch | Cisco | Nexus 5020 | Cisco Nexus 10GbE 48-port IP Switch |
OS Name and Version | VNX File OE 7.0 |
---|---|
Other Software | EMC VNX Control Station Linux 2.6.18-128.1.1.6006.EMC |
Filesystem Software | VNX UxFS File System |
Name | Value | Description |
---|---|---|
ufs updateAccTime | 0 | Disable access time updates |
file asyncThresholdPercentage | 80 | Total cached dirty blocks for NFSv3 async writes |
file dnlcNents | 16777213 | Directory name lookup cache size |
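For reference, parameters like those in the table above are typically set on a VNX Data Mover with the `server_param` command. The following is a minimal sketch, assuming a Data Mover named server_2 (some parameters only take effect after a Data Mover reboot):

```
# Assumed Data Mover name: server_2; facility and parameter
# names are taken from the tuning table above
server_param server_2 -facility ufs  -modify updateAccTime            -value 0
server_param server_2 -facility file -modify asyncThresholdPercentage -value 80
server_param server_2 -facility file -modify dnlcNents                -value 16777213
```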
Non-default parameters were also used on the EMC VNX5700 block arrays.
Description | Number of Disks | Usable Size |
---|---|---|
3.5" 200GB 6Gbps SAS flash | 436 | 74.7 TB |
3.5" 300GB SAS 15K RPM drive | 21 | 5.3 TB |
Total | 457 | 80.0 TB |
Number of Filesystems | 8 |
---|---|
Total Exported Capacity | 60243 GiB |
Filesystem Type | UxFS |
Filesystem Creation Options | "server_mount server_x -o noprefetch" (disables prefetching for the mounted file system) |
Filesystem Config | Each file system (fs0 through fs7) is striped, with a 256 KB element size, across 42 LUNs. Two file systems were mounted and exported from each X-Blade; the two file systems mounted on a given X-Blade reside on the same EMC VNX5700 storage array. |
Fileset Size | 57400.6 GB |
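The creation-options entry above shows only the noprefetch option. For illustration, a complete mount command for one file system might look like the following sketch; the Data Mover name, file system name, and mount point are placeholders, not values disclosed in this report:

```
# Mount file system fs0 on Data Mover server_2 with prefetching disabled
server_mount server_2 -option noprefetch fs0 /fs0
```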
420 of the flash drives are for data and 16 are hot spares. The 436 flash drives were divided equally among the 4 EMC VNX5700 block arrays, each EMC VNX5700 block array having 109 flash drives installed: 105 data drives and 4 hot spares. The flash data drives were configured as 5-drive RAID groups, for a total of 21 RAID groups per EMC VNX5700. Each RAID group hosted four RAID5 (4+1) LUNs, for a total of 84 LUNs per EMC VNX5700, 42 owned by SPA and 42 owned by SPB.
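The drive and LUN counts above can be cross-checked with a little arithmetic (shell used here purely as a calculator):

```
echo $(( 105 / 5 ))    # 21 RAID5 (4+1) groups per VNX5700: 105 data drives, 5 per group
echo $(( 21 * 4 ))     # 84 LUNs per VNX5700: 4 LUNs per RAID group
echo $(( 84 / 2 ))     # 42 LUNs per storage processor (SPA/SPB split)
echo $(( 4 * 84 / 8 )) # 42 LUNs per file system: 336 LUNs across 8 file systems
```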
The file systems were created in a manner that used the entire usable capacity (after RAID implementation) of the flash data drives. After completion of the benchmark, the file systems were 98% full. The benchmark required that this number of flash drives be installed in order to complete successfully.
The 21 SAS drives were divided among the 4 EMC VNX5700 block arrays. The primary EMC VNX5700 block array had 6 SAS drives installed: 5 configured as a 4+1 RAID5 group and 1 as a hot spare. This system hosted the VNX control volumes for all the X-Blades on the 4+1 RAID5 group. The remaining 3 EMC VNX5700 block arrays were configured with 5 SAS drives each: 4 for the VNX vault and 1 hot spare.
Each client mounted 2 file systems per X-Blade through each network interface of the VNX. Each client mounted a total of 8 file systems.
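As an illustration of that layout, the mount table on one client might look like the sketch below. The per-interface host names (xb2-ge0, xb2-ge1, and so on for the four active X-Blades) are hypothetical; the report does not disclose the actual addressing:

```
# One NFSv3 mount per file system; each X-Blade exports two file
# systems, one reached through each of its two 10GbE interfaces
mount -t nfs -o vers=3 xb2-ge0:/fs0 /mnt/fs0
mount -t nfs -o vers=3 xb2-ge1:/fs1 /mnt/fs1
mount -t nfs -o vers=3 xb3-ge0:/fs2 /mnt/fs2
mount -t nfs -o vers=3 xb3-ge1:/fs3 /mnt/fs3
mount -t nfs -o vers=3 xb4-ge0:/fs4 /mnt/fs4
mount -t nfs -o vers=3 xb4-ge1:/fs5 /mnt/fs5
mount -t nfs -o vers=3 xb5-ge0:/fs6 /mnt/fs6
mount -t nfs -o vers=3 xb5-ge1:/fs7 /mnt/fs7
```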
Item No | Network Type | Number of Ports Used | Notes |
---|---|---|---|
1 | Jumbo 10GbE | 8 | There are 2 10GbE network interfaces in use per active X-Blade, 1 port per 10GbE SLIC. |
All 10GbE network interfaces were connected to a Cisco Nexus 5020 switch.
An MTU size of 9000 was set for all connections to the switch. Each X-Blade was connected to the network via 2 ports. Each LG1-class load-generating machine was connected with one port.
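On the Linux clients, a 9000-byte MTU would be enabled per interface along these lines (the interface name is assumed; the switch ports must also be configured for jumbo frames):

```
# Raise the client 10GbE interface MTU to 9000 (name eth2 is assumed)
ifconfig eth2 mtu 9000
# Verify the jumbo path end-to-end: 8972 payload + 8 ICMP + 20 IP = 9000
ping -M do -s 8972 -c 3 <server-address>
```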
Item No | Qty | Type | Description | Processing Function |
---|---|---|---|---|
1 | 4 | CPU | Single-socket six-core Intel Westmere (Xeon X5660) 2.8 GHz with 6.4 GT/s QPI in each X-Blade server; 1 chip active for the workload. (The processor in the standby X-Blade is not included in the quantity.) | NFS protocol, UxFS file system |
Each X-Blade has one physical processor; with 4 active X-Blades, there are 4 physical processors in total. The 2 control stations listed in the BOM contain processors that are not counted in the list of processors here. The control stations are for management only; no control station function is in the workload's data path. Also note that the 2 control stations are present solely for high availability; two control stations are not required to manage the EMC VNX VG8 Gateway.
Description | Size in GB | Number of Instances | Total GB | Nonvolatile |
---|---|---|---|---|
Each X-Blade's main memory. (The 24 GB in the standby X-Blade is not included in the quantity.) | 24 | 4 | 96 | V |
EMC VNX5700 storage array battery-backed memory, 18 GB per EMC VNX5700 SP. The EMC VNX5700 cache was configured with 10 GB of mirrored write cache, and 512 MB of read cache was configured per SP. The balance of the memory is reserved for the Operating Environment (OE R31). | 18 | 8 | 144 | NV |
Grand Total Memory Gigabytes | 240 |
The EMC VNX5700 write cache is backed by sufficient battery power to safely destage all cached data to the EMC VNX5700 vault drives and shut down the SPs in an orderly fashion in the event of a power failure.
8 NFS file systems were used. Each file system was striped over 42 LUNs. Each VG8 X-Blade had 2 Fibre Channel connections to the DS-5300B FC switch; the remaining 2 Fibre Channel ports of each VG8 X-Blade were not connected to the FC switch. Each EMC VNX5700 storage array had 8 Fibre Channel connections, 4 per SP. In this configuration, NFS stable write and commit operations are not acknowledged until the EMC VNX5700 storage array has acknowledged that the related data has been stored in stable storage (i.e., battery-backed memory or disk).
The system under test consisted of 4 active VNX VG8 Gateway X-Blades attached to 4 EMC VNX5700 storage arrays, each with 8 FC links, through a DS-5300B Fibre Channel switch. The X-Blades ran VNX File OE 7.0. Two 10GbE ports per X-Blade were connected to the network.
Failover is supported by an additional VNX X-Blade that operates in standby mode. In the event of any of the 4 active X-Blades failing, the standby unit takes over the function of the failed unit. The standby X-Blade does not contribute to the performance of the system and it is not included in the active components listed above.
Item No | Qty | Vendor | Model/Name | Description |
---|---|---|---|---|
1 | 34 | Dell | Dell PowerEdge R610 | Dell server with 12 GB RAM and the Linux operating system |
LG Type Name | LG1 |
---|---|
BOM Item # | 1 |
Processor Name | Intel(R) Xeon(TM) E5530 |
Processor Speed | 2.4 GHz |
Number of Processors (chips) | 1 |
Number of Cores/Chip | 4 |
Memory Size | 12 GB |
Operating System | CentOS 5.4 Linux 2.6.18-164.el5 |
Network Type | 1 x Intel X520 10GbE |
Network Attached Storage Type | NFS V3 |
---|---|
Number of Load Generators | 34 |
Number of Processes per LG | 64 |
Biod Max Read Setting | 8 |
Biod Max Write Setting | 8 |
Block Size | AUTO |
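The settings above correspond to entries in the benchmark's sfs_rc control file. A hypothetical excerpt is sketched below; the parameter names follow the SFS rc-file convention and should be checked against the SPECsfs2008 documentation rather than taken as exact, and the client and mount-point names are placeholders:

```
# Illustrative sfs_rc fragment only; values mirror the table above
LOAD=<requested ops/sec>            # requested load point (value not disclosed here)
PROCS=64                            # processes per load generator
CLIENTS="lg1 lg2 ... lg34"          # 34 load-generating clients (names hypothetical)
MNT_POINTS="vnx:/fs0 vnx:/fs1 ..."  # all 8 file systems (elided here)
BIOD_MAX_READS=8                    # matches "Biod Max Read Setting"
BIOD_MAX_WRITES=8                   # matches "Biod Max Write Setting"
```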
LG No | LG Type | Network | Target Filesystems | Notes |
---|---|---|---|---|
1..34 | LG1 | 1 | /fs0, ..., /fs7 | N/A |
All file systems were mounted on all clients, which were connected to the same physical and logical network.
Each client had all file systems mounted from each active X-Blade.
Copyright © 1997-2008 Standard Performance Evaluation Corporation
First published at SPEC.org on 22-Feb-2011