SPEC® MPIL2007 Result

Copyright 2006-2010 Standard Performance Evaluation Corporation

Hewlett Packard Enterprise

SGI 8600
(Intel Xeon Gold 6148, 2.40 GHz)

SPECmpiL_peak2007 = Not Run

MPI2007 license:        1
Test sponsor:           HPE
Tested by:              HPE
Test date:              Oct-2017
Hardware Availability:  Jul-2017
Software Availability:  Nov-2017

Results Table

                           ------------------ Base ------------------      Peak: Not Run
Benchmark        Ranks     Seconds  Ratio   Seconds  Ratio   Seconds  Ratio
Results appear in the order in which they were run. Bold underlined text indicates a median measurement.
121.pop2           160         400   9.72       400   9.73       400   9.73
122.tachyon        160         379   5.13       374   5.20       380   5.11
125.RAxML          160         438   6.66       438   6.67       439   6.65
126.lammps         160         376   6.54       379   6.48       376   6.54
128.GAPgeofem      160         407   14.6       406   14.6       406   14.6
129.tera_tf        160         171   6.42       172   6.39       172   6.39
132.zeusmp2        160         231   9.19       234   9.08       231   9.18
137.lu             160         333   12.6       332   12.6       353   11.9
142.dmilc          160         250   14.7       250   14.7       250   14.7
143.dleslie        160         229   13.5       230   13.5       228   13.6
145.lGemsFDTD      160         422   10.5       420   10.5       420   10.5
147.l2wrf2         160         723   11.3       725   11.3       722   11.4
Hardware Summary
Type of System: Homogeneous
Compute Node: HPE XA730i Gen10 Server Node
Interconnect: InfiniBand (MPI and I/O)
File Server Node: Lustre FS
Total Compute Nodes: 4
Total Chips: 8
Total Cores: 160
Total Threads: 320
Total Memory: 768 GB
Base Ranks Run: 160
Minimum Peak Ranks: --
Maximum Peak Ranks: --
Software Summary
C Compiler: Intel C Composer XE for Linux,
Version 18.0.0.128 Build 20170811
C++ Compiler: Intel C++ Composer XE for Linux,
Version 18.0.0.128 Build 20170811
Fortran Compiler: Intel Fortran Composer XE for Linux,
Version 18.0.0.128 Build 20170811
Base Pointers: 64-bit
Peak Pointers: Not Applicable
MPI Library: HPE Performance Software - Message Passing
Interface 2.17
Other MPI Info: OFED 3.2.2
Pre-processors: None
Other Software: None

Node Description: HPE XA730i Gen10 Server Node

Hardware
Number of nodes: 4
Uses of the node: compute
Vendor: Hewlett Packard Enterprise
Model: SGI 8600 (Intel Xeon Gold 6148, 2.40 GHz)
CPU Name: Intel Xeon Gold 6148
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 40
Cores per chip: 20
Threads per core: 2
CPU Characteristics: Intel Turbo Boost Technology up to 3.70 GHz
CPU MHz: 2400
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 1 MB I+D on chip per core
L3 Cache: 27.5 MB I+D on chip per chip
Other Cache: None
Memory: 192 GB (12 x 16 GB 2Rx4 PC4-2666V-R)
Disk Subsystem: None
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3 8GT/s
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.4-2.1.8.0
Adapter Firmware: 12.18.1000
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo),
Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: LFS
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: SGI Management Center Compute Node 3.5.0,
Build 716r171.rhel73-1705051353

Node Description: Lustre FS

Hardware
Number of nodes: 4
Uses of the node: fileserver
Vendor: Hewlett Packard Enterprise
Model: Rackable C1104-GP2 (Intel Xeon E5-2690 v3, 2.60
GHz)
CPU Name: Intel Xeon E5-2690 v3
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 24
Cores per chip: 12
Threads per core: 1
CPU Characteristics: Intel Turbo Boost Technology up to 3.50 GHz
Hyper-Threading Technology disabled
CPU MHz: 2600
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 256 KB I+D on chip per core
L3 Cache: 30 MB I+D on chip per chip
Other Cache: None
Memory: 128 GB (8 x 16 GB 2Rx4 PC4-2133P-R)
Disk Subsystem: 684 TB RAID 6
48 x 8+2 2TB 7200 RPM
Other Hardware: None
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Number of Adapters: 2
Slot Type: PCIe x16 Gen3
Data Rate: InfiniBand 4X EDR
Ports Used: 1
Interconnect Type: InfiniBand
Software
Adapter: Mellanox MT27700 with ConnectX-4 ASIC
Adapter Driver: OFED-3.3-1.0.0.0
Adapter Firmware: 12.14.2036
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo),
Kernel 3.10.0-514.2.2.el7.x86_64
Local File System: ext3
Shared File System: LFS
System State: Multi-user, run level 3
Other Software: None

Interconnect Description: InfiniBand (MPI and I/O)

Hardware
Vendor: Mellanox Technologies and SGI
Model: SGI P0002145
Switch Model: SGI P0002145
Number of Switches: 1
Number of Ports: 36
Data Rate: InfiniBand 4X EDR
Firmware: 11.0350.0394
Topology: Enhanced Hypercube
Primary Use: MPI and I/O traffic

Base Tuning Notes

src.alt used: 143.dleslie->integer_overflow

Submit Notes

The config file option 'submit' was used.
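
The submit command itself is not reproduced in this report. A minimal sketch of what
such a config-file entry might look like with the HPE MPT launcher (mpiexec_mpt) is
shown below; the launcher choice is an assumption for illustration, not a value taken
from this result.

   # Hypothetical SPEC MPI2007 config-file submit entry (illustrative only);
   # $ranks and $command are substituted by the SPEC run tools at run time.
   submit = mpiexec_mpt -np $ranks $command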

General Notes


 Software environment:
   export MPI_REQUEST_MAX=65536
   export MPI_TYPE_MAX=32768
   export MPI_IB_RAILS=2
   export MPI_IB_IMM_UPGRADE=false
   export MPI_IB_DCIS=2
   export MPI_IB_HYPER_LAZY=false
   export MPI_CONNECTIONS_THRESHOLD=0
   ulimit -s unlimited

 BIOS settings:
   AMI BIOS version SAED7177, 07/17/2017

 Job Placement:
   Each MPI job was assigned to a topologically compact set
   of nodes.

 Additional notes regarding interconnect:
   The InfiniBand network consists of two independent planes,
   with half the switches in the system allocated to each plane.
   I/O traffic is restricted to one plane, while MPI traffic can
   use both planes.

Base Compiler Invocation

C benchmarks:

 icc 

C++ benchmarks:

126.lammps:  icpc 

Fortran benchmarks:

 ifort 

Benchmarks using both Fortran and C:

 icc   ifort 

Base Portability Flags

121.pop2:  -DSPEC_MPI_CASE_FLAG 

Base Optimization Flags

C benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

C++ benchmarks:

126.lammps:  -O3   -xCORE-AVX512   -no-prec-div   -ansi-alias   -ipo 

Fortran benchmarks:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Benchmarks using both Fortran and C:

 -O3   -xCORE-AVX512   -no-prec-div   -ipo 

Base Other Flags

C benchmarks:

 -lmpi 

C++ benchmarks:

126.lammps:  -lmpi 

Fortran benchmarks:

 -lmpi 

Benchmarks using both Fortran and C:

 -lmpi 
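
Taken together, the compiler invocations, optimization flags, and other flags listed
above correspond to command lines of roughly the following form. This is an
illustrative sketch only: the source-file names are placeholders and the exact build
commands are not reproduced in this report. (121.pop2 additionally uses the
portability flag -DSPEC_MPI_CASE_FLAG listed above.)

   icc   -O3 -xCORE-AVX512 -no-prec-div -ipo             source.c   -lmpi
   icpc  -O3 -xCORE-AVX512 -no-prec-div -ansi-alias -ipo source.cpp -lmpi   (126.lammps)
   ifort -O3 -xCORE-AVX512 -no-prec-div -ipo             source.f90 -lmpi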

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/HPE_x86_64_Intel18_flags.xml.