                          SPEC(R) MPIM2007 Summary
                                    SGI
            SGI Altix ICE 8200EX (Intel Xeon X5570, 2.93 GHz)
                         Tue Feb 17 07:25:31 2009

MPI2007 License:  4                          Test date:             Feb-2009
Test sponsor:     SGI                        Hardware availability: Mar-2009
Tested by:        SGI                        Software availability: Jan-2009

                  Base      Base       Base      Peak      Peak       Peak
Benchmarks       Ranks   Run Time     Ratio     Ranks   Run Time     Ratio
--------------  ------  ---------  ---------   ------  ---------  ---------
104.milc            16        433       3.62 *
104.milc            16        433       3.62 S
107.leslie3d        16       1710       3.05 *
107.leslie3d        16       1706       3.06 S
113.GemsFDTD        16       1276       4.94 S
113.GemsFDTD        16       1277       4.94 *
115.fds4            16        617       3.16 *
115.fds4            16        616       3.17 S
121.pop2            16        946       4.36 *
121.pop2            16        944       4.37 S
122.tachyon         16       1169       2.39 *
122.tachyon         16       1164       2.40 S
126.lammps          16       1140       2.56 *
126.lammps          16       1139       2.56 S
127.wrf2            16       1197       6.51 S
127.wrf2            16       1200       6.50 *
128.GAPgeofem       16        533       3.87 S
128.GAPgeofem       16        533       3.87 *
129.tera_tf         16        987       2.80 *
129.tera_tf         16        987       2.81 S
130.socorro         16        893       4.27 *
130.socorro         16        893       4.27 S
132.zeusmp2         16        965       3.22 S
132.zeusmp2         16        973       3.19 *
137.lu              16       1253       2.93 S
137.lu              16       1257       2.93 *
==============================================================================
104.milc            16        433       3.62 *
107.leslie3d        16       1710       3.05 *
113.GemsFDTD        16       1277       4.94 *
115.fds4            16        617       3.16 *
121.pop2            16        946       4.36 *
122.tachyon         16       1169       2.39 *
126.lammps          16       1140       2.56 *
127.wrf2            16       1200       6.50 *
128.GAPgeofem       16        533       3.87 *
129.tera_tf         16        987       2.80 *
130.socorro         16        893       4.27 *
132.zeusmp2         16        973       3.19 *
137.lu              16       1257       2.93 *
 SPECmpiM_base2007                      3.52
 SPECmpiM_peak2007                   Not Run

BENCHMARK DETAILS
-----------------
Type of System:       Homogeneous
Total Compute Nodes:  2
Total Chips:          4
Total Cores:          16
Total Threads:        32
Total Memory:         96 GB
Base Ranks Run:       16
Minimum Peak Ranks:   --
Maximum Peak Ranks:   --
C Compiler:           Intel C Compiler for Linux Version 10.1,
                      Build 20080801
C++ Compiler:         Intel C++ Compiler for Linux Version 10.1,
                      Build 20080801
Fortran Compiler:     Intel Fortran Compiler for Linux Version 10.1,
                      Build 20080801
Base Pointers:        64-bit
Peak Pointers:        64-bit
MPI Library:          SGI MPT 1.23
Other MPI Info:       OFED 1.3.1
Pre-processors:       None
Other Software:       None

Node Description: SGI Altix ICE 8200EX Compute Node
===================================================

HARDWARE
--------
Number of nodes:      2
Uses of the node:     compute
Vendor:               SGI
Model:                SGI Altix ICE 8200EX (Intel Xeon X5570, 2.93 GHz)
CPU Name:             Intel Xeon X5570
CPU(s) orderable:     1-2 chips
Chips enabled:        2
Cores enabled:        8
Cores per chip:       4
Threads per core:     2
CPU Characteristics:  Intel Turbo Boost Technology up to 3.33 GHz,
                      6.4 GT/s QPI, Hyper-Threading enabled
CPU MHz:              2934
Primary Cache:        32 KB I + 32 KB D on chip per core
Secondary Cache:      256 KB I+D on chip per core
L3 Cache:             8 MB I+D on chip per chip
Other Cache:          None
Memory:               48 GB (12*4GB DDR3-1066 CL7 RDIMMs)
Disk Subsystem:       None
Other Hardware:       None
Adapter:              Mellanox MT26418 ConnectX IB DDR
                      (PCIe x8 Gen2 5 GT/s)
Number of Adapters:   1
Slot Type:            PCIe x8 Gen2
Data Rate:            InfiniBand 4x DDR
Ports Used:           2
Interconnect Type:    InfiniBand

SOFTWARE
--------
Adapter:              Mellanox MT26418 ConnectX IB DDR
                      (PCIe x8 Gen2 5 GT/s)
Adapter Driver:       OFED-1.3.1
Adapter Firmware:     2.5.0
Operating System:     SUSE Linux Enterprise Server 10 (x86_64) SP2
                      Kernel 2.6.16.60-0.30-smp
Local File System:    NFSv3
Shared File System:   NFSv3 IPoIB
System State:         Multi-user, run level 3
Other Software:       SGI ProPack 6 for Linux Service Pack 2

Node Description: SGI InfiniteStorage Nexis 2000 NAS
====================================================

HARDWARE
--------
Number of nodes:      1
Uses of the node:     fileserver
Vendor:               SGI
Model:                SGI Altix XE 240 (Intel Xeon 5140, 2.33 GHz)
CPU Name:             Intel Xeon 5140
CPU(s) orderable:     1-2 chips
Chips enabled:        2
Cores enabled:        4
Cores per chip:       2
Threads per core:     1
CPU Characteristics:  1333 MHz FSB
CPU MHz:              2328
Primary Cache:        32 KB I + 32 KB D on chip per core
Secondary Cache:      4 MB I+D on chip per chip
L3 Cache:             None
Other Cache:          None
Memory:               24 GB (6*4GB DDR2-400 DIMMS)
Disk Subsystem:       7 TB RAID 5
                      48 x 147 GB SAS (Seagate Cheetah 15000 rpm)
Other Hardware:       None
Adapter:              Mellanox MT25208 InfiniHost III Ex
                      (PCIe x8 Gen1 2.5 GT/s)
Number of Adapters:   2
Slot Type:            PCIe x8 Gen1
Data Rate:            InfiniBand 4x DDR
Ports Used:           2
Interconnect Type:    InfiniBand

SOFTWARE
--------
Adapter:              Mellanox MT25208 InfiniHost III Ex
                      (PCIe x8 Gen1 2.5 GT/s)
Adapter Driver:       OFED-1.3
Adapter Firmware:     5.3.0
Operating System:     SUSE Linux Enterprise Server 10 (x86_64) SP1
                      Kernel 2.6.16.54-0.2.5-smp
Local File System:    xfs
Shared File System:   --
System State:         Multi-user, run level 3
Other Software:       SGI ProPack 5 for Linux Service Pack 5

Interconnect Description: InfiniBand (MPI)
==========================================

HARDWARE
--------
Vendor:               Mellanox Technologies
Model:                MT26418 ConnectX
Switch Model:         Mellanox MT47396 InfiniScale III
Number of Switches:   8
Number of Ports:      24
Data Rate:            InfiniBand 4x DDR
Firmware:             2020001
Topology:             Bristle hypercube with express links
Primary Use:          MPI traffic

Interconnect Description: InfiniBand (I/O)
==========================================

HARDWARE
--------
Vendor:               Mellanox Technologies
Model:                MT26418 ConnectX
Switch Model:         Mellanox MT47396 InfiniScale III
Number of Switches:   8
Number of Ports:      24
Data Rate:            InfiniBand 4x DDR
Firmware:             2020001
Topology:             Bristle hypercube with express links
Primary Use:          I/O traffic

Submit Notes
------------
The config file option 'submit' was used.

General Notes
-------------
Software environment:

  setenv MPI_REQUEST_MAX 65536
    Determines the maximum number of nonblocking sends and receives that
    can simultaneously exist for any single MPI process.  MPI generates
    an error message if this limit (or the default, if not set) is
    exceeded.  Default: 16384

  setenv MPI_TYPE_MAX 32768
    Determines the maximum number of data types that can simultaneously
    exist for any single MPI process.  MPI generates an error message if
    this limit (or the default, if not set) is exceeded.  Default: 1024

  setenv MPI_BUFS_THRESHOLD 1
    Determines whether MPT uses per-host or per-process message buffers
    for communicating with other hosts.  Per-host buffers are generally
    faster, but for jobs running across many hosts they can consume a
    prodigious amount of memory.  MPT will use per-host buffers for jobs
    using up to and including this many hosts and will use per-process
    buffers for larger host counts.  Default: 64

  setenv MPI_DSM_DISTRIBUTE
    Activates NUMA job placement mode.  This mode ensures that each MPI
    process gets a unique CPU and physical memory on the node with which
    that CPU is associated.  Currently, the CPUs are chosen by simply
    starting at relative CPU 0 and incrementing until all MPI processes
    have been forked.

  limit stacksize unlimited
    Removes limits on the maximum size of the automatically extended
    stack region of the current process and each process it creates.
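For illustration only, the environment settings listed above could be
collected into a small (t)csh wrapper that runs before the MPI job starts.
This is a sketch, not the tester's actual submit machinery (the reported
runs were launched through the config file 'submit' option and PBS Pro);
the values repeat those documented above, while the binary name and the
generic mpirun invocation are placeholders.

  #!/bin/csh
  # Sketch: SGI MPT environment as documented in the notes above.
  setenv MPI_REQUEST_MAX    65536   # max nonblocking requests per rank
  setenv MPI_TYPE_MAX       32768   # max derived datatypes per rank
  setenv MPI_BUFS_THRESHOLD 1       # per-process buffers beyond 1 host
  setenv MPI_DSM_DISTRIBUTE         # one CPU + local memory per rank
  limit stacksize unlimited         # unlimited stack for job processes
  # Placeholder launch line: 16 ranks matches "Base Ranks Run: 16" above,
  # but the binary name and launcher arguments are illustrative only.
  mpirun -np 16 ./mpi_benchmark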
PBS Pro batch scheduler (www.altair.com) is used with placement sets to
ensure each MPI job is assigned to a topologically compact set of nodes.

BIOS settings:
  AMI BIOS version 8.15
  Hyper-Threading Technology enabled (default)
  Intel Turbo Boost Technology enabled (default)
  Intel Turbo Boost Technology activated in the OS via
    /etc/init.d/acpid start
    /etc/init.d/powersaved start
    powersave -f

Base Compiler Invocation
------------------------
C benchmarks:
  icc

C++ benchmarks:
 126.lammps:  icpc

Fortran benchmarks:
  ifort

Benchmarks using both Fortran and C:
  icc  ifort

Base Portability Flags
----------------------
 121.pop2:  -DSPEC_MPI_CASE_FLAG
 127.wrf2:  -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX

Base Optimization Flags
-----------------------
C benchmarks:
  -O3 -ipo -xT -no-prec-div

C++ benchmarks:
 126.lammps:  -O3 -ipo -xT -no-prec-div -ansi-alias

Fortran benchmarks:
  -O3 -ipo -xT -no-prec-div

Benchmarks using both Fortran and C:
  -O3 -ipo -xT -no-prec-div

Base Other Flags
----------------
C benchmarks:
  -lmpi

C++ benchmarks:
 126.lammps:  -lmpi

Fortran benchmarks:
  -lmpi

Benchmarks using both Fortran and C:
  -lmpi

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/EM64T_Intel101_flags.20080611.html

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/EM64T_Intel101_flags.20080611.xml

SPEC and SPEC MPI are registered trademarks of the Standard Performance
Evaluation Corporation.  All other brand and product names appearing in
this result are trademarks or registered trademarks of their respective
holders.

-----------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact webmaster@spec.org.
Copyright 2006-2010 Standard Performance Evaluation Corporation
Tested with SPEC MPI2007 v1.1.
Report generated on Tue Jul 22 13:35:39 2014 by MPI2007 ASCII formatter v1463.
Originally published on 30 March 2009.