| MPI2007 license: | 1 | Test date: | Oct-2017 |
|---|---|---|---|
| Test sponsor: | HPE | Hardware Availability: | Jul-2017 |
| Tested by: | HPE | Software Availability: | Nov-2017 |
All results are base runs; peak was not run (Peak Pointers is "Not Applicable" in the Software Summary). Results appear in the order in which they were run; the reported score for each benchmark is the median of the three runs.

| Benchmark | Ranks | Run 1 Seconds | Run 1 Ratio | Run 2 Seconds | Run 2 Ratio | Run 3 Seconds | Run 3 Ratio |
|---|---|---|---|---|---|---|---|
| 104.milc | 640 | 15.4 | 102 | 14.9 | 105 | 14.9 | 105 |
| 107.leslie3d | 640 | 34.1 | 153 | 33.2 | 157 | 33.4 | 156 |
| 113.GemsFDTD | 640 | 187 | 33.8 | 186 | 33.8 | 186 | 33.9 |
| 115.fds4 | 640 | 23.3 | 83.9 | 22.8 | 85.6 | 23.2 | 84.0 |
| 121.pop2 | 640 | 77.5 | 53.2 | 77.5 | 53.3 | 77.3 | 53.4 |
| 122.tachyon | 640 | 31.4 | 89.0 | 31.5 | 88.9 | 32.1 | 87.2 |
| 126.lammps | 640 | 90.3 | 32.3 | 89.6 | 32.5 | 89.7 | 32.5 |
| 127.wrf2 | 640 | 29.5 | 264 | 30.2 | 258 | 29.6 | 264 |
| 128.GAPgeofem | 640 | 8.10 | 255 | 8.31 | 249 | 8.28 | 249 |
| 129.tera_tf | 640 | 22.1 | 125 | 22.5 | 123 | 22.3 | 124 |
| 130.socorro | 640 | 30.7 | 124 | 31.1 | 123 | 31.8 | 120 |
| 132.zeusmp2 | 640 | 19.8 | 157 | 19.7 | 158 | 19.7 | 158 |
| 137.lu | 640 | 19.1 | 192 | 18.9 | 195 | 19.0 | 193 |
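Each Ratio is the benchmark's reference wall-clock time divided by the measured seconds, and the reported score is the median of the three runs. A minimal sketch of that arithmetic, using an illustrative reference time of 1565 s back-computed from the 104.milc row (the official reference times are defined by the SPEC MPI2007 suite, not this report):

```python
from statistics import median

def spec_ratio(reference_seconds: float, run_seconds: float) -> float:
    """SPEC ratio: reference wall-clock time divided by measured time."""
    return reference_seconds / run_seconds

# Three measured runs (seconds) from the 104.milc row above.
# The reference time is an assumption for illustration only.
reference = 1565.0
runs = [15.4, 14.9, 14.9]

ratios = [spec_ratio(reference, s) for s in runs]
print([round(r) for r in ratios])  # per-run ratios, as reported in the table
print(round(median(ratios)))       # median ratio is the reported score
```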
| Hardware Summary | |
|---|---|
| Type of System: | Homogeneous |
| Compute Node: | HPE XA730i Gen10 Server Node |
| Interconnect: | InfiniBand (MPI and I/O) |
| File Server Node: | Lustre FS |
| Total Compute Nodes: | 16 |
| Total Chips: | 32 |
| Total Cores: | 640 |
| Total Threads: | 1280 |
| Total Memory: | 3 TB |
| Base Ranks Run: | 640 |
| Minimum Peak Ranks: | -- |
| Maximum Peak Ranks: | -- |
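The summary totals follow from the per-node configuration described later in this report (16 compute nodes, 2 chips per node, 20 cores per chip, 2 threads per core, 192 GB per node). A quick consistency check, using only values that appear in these tables:

```python
# Per-node figures from the compute-node hardware table.
nodes = 16
chips_per_node = 2
cores_per_chip = 20
threads_per_core = 2
mem_per_node_gb = 192  # 12 x 16 GB DIMMs

total_chips = nodes * chips_per_node             # 32
total_cores = total_chips * cores_per_chip       # 640, matching Base Ranks Run
total_threads = total_cores * threads_per_core   # 1280
total_mem_tb = nodes * mem_per_node_gb / 1024    # 3.0 TB

print(total_chips, total_cores, total_threads, total_mem_tb)
```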
| Software Summary | |
|---|---|
| C Compiler: | Intel C Composer XE for Linux, Version 18.0.0.128 Build 20170811 |
| C++ Compiler: | Intel C++ Composer XE for Linux, Version 18.0.0.128 Build 20170811 |
| Fortran Compiler: | Intel Fortran Composer XE for Linux, Version 18.0.0.128 Build 20170811 |
| Base Pointers: | 64-bit |
| Peak Pointers: | Not Applicable |
| MPI Library: | HPE Performance Software - Message Passing Interface 2.17 |
| Other MPI Info: | OFED 3.2.2 |
| Pre-processors: | None |
| Other Software: | None |
| Hardware | |
|---|---|
| Number of nodes: | 16 |
| Uses of the node: | compute |
| Vendor: | Hewlett Packard Enterprise |
| Model: | SGI 8600 (Intel Xeon Gold 6148, 2.40 GHz) |
| CPU Name: | Intel Xeon Gold 6148 |
| CPU(s) orderable: | 1-2 chips |
| Chips enabled: | 2 |
| Cores enabled: | 40 |
| Cores per chip: | 20 |
| Threads per core: | 2 |
| CPU Characteristics: | Intel Turbo Boost Technology up to 3.70 GHz |
| CPU MHz: | 2400 |
| Primary Cache: | 32 KB I + 32 KB D on chip per core |
| Secondary Cache: | 1 MB I+D on chip per core |
| L3 Cache: | 27.5 MB I+D on chip per chip |
| Other Cache: | None |
| Memory: | 192 GB (12 x 16 GB 2Rx4 PC4-2666V-R) |
| Disk Subsystem: | None |
| Other Hardware: | None |
| Adapter: | Mellanox MT27700 with ConnectX-4 ASIC |
| Number of Adapters: | 2 |
| Slot Type: | PCIe x16 Gen3 8GT/s |
| Data Rate: | InfiniBand 4X EDR |
| Ports Used: | 1 |
| Interconnect Type: | InfiniBand |
| Software | |
|---|---|
| Adapter: | Mellanox MT27700 with ConnectX-4 ASIC |
| Adapter Driver: | OFED-3.4-2.1.8.0 |
| Adapter Firmware: | 12.18.1000 |
| Operating System: | Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64 |
| Local File System: | LFS |
| Shared File System: | LFS |
| System State: | Multi-user, run level 3 |
| Other Software: | SGI Management Center Compute Node 3.5.0, Build 716r171.rhel73-1705051353 |
| Hardware | |
|---|---|
| Number of nodes: | 4 |
| Uses of the node: | fileserver |
| Vendor: | Hewlett Packard Enterprise |
| Model: | Rackable C1104-GP2 (Intel Xeon E5-2690 v3, 2.60 GHz) |
| CPU Name: | Intel Xeon E5-2690 v3 |
| CPU(s) orderable: | 1-2 chips |
| Chips enabled: | 2 |
| Cores enabled: | 24 |
| Cores per chip: | 12 |
| Threads per core: | 1 |
| CPU Characteristics: | Intel Turbo Boost Technology up to 3.50 GHz; Hyper-Threading Technology disabled |
| CPU MHz: | 2600 |
| Primary Cache: | 32 KB I + 32 KB D on chip per core |
| Secondary Cache: | 256 KB I+D on chip per core |
| L3 Cache: | 30 MB I+D on chip per chip |
| Other Cache: | None |
| Memory: | 128 GB (8 x 16 GB 2Rx4 PC4-2133P-R) |
| Disk Subsystem: | 684 TB, RAID 6, 48 x 8+2 arrays of 2 TB 7200 RPM drives |
| Other Hardware: | None |
| Adapter: | Mellanox MT27700 with ConnectX-4 ASIC |
| Number of Adapters: | 2 |
| Slot Type: | PCIe x16 Gen3 |
| Data Rate: | InfiniBand 4X EDR |
| Ports Used: | 1 |
| Interconnect Type: | InfiniBand |
| Software | |
|---|---|
| Adapter: | Mellanox MT27700 with ConnectX-4 ASIC |
| Adapter Driver: | OFED-3.3-1.0.0.0 |
| Adapter Firmware: | 12.14.2036 |
| Operating System: | Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel 3.10.0-514.2.2.el7.x86_64 |
| Local File System: | ext3 |
| Shared File System: | LFS |
| System State: | Multi-user, run level 3 |
| Other Software: | None |
| Hardware | |
|---|---|
| Vendor: | Mellanox Technologies and SGI |
| Model: | SGI P0002145 |
| Switch Model: | SGI P0002145 |
| Number of Switches: | 2 |
| Number of Ports: | 36 |
| Data Rate: | InfiniBand 4X EDR |
| Firmware: | 11.0350.0394 |
| Topology: | Enhanced Hypercube |
| Primary Use: | MPI and I/O traffic |
src.alt used: 129.tera_tf->add_rank_support
src.alt used: 130.socorro->nullify_ptrs
The config file option 'submit' was used.
Software environment:
export MPI_REQUEST_MAX=65536
export MPI_TYPE_MAX=32768
export MPI_IB_RAILS=2
export MPI_IB_IMM_UPGRADE=false
export MPI_CONNECTIONS_THRESHOLD=0
export MPI_IB_DCIS=2
export MPI_IB_HYPER_LAZY=false
ulimit -s unlimited

BIOS settings: AMI BIOS version SAED7177, 07/17/2017

Job Placement: Each MPI job was assigned to a topologically compact set of nodes.

Additional notes regarding interconnect: The InfiniBand network consists of two independent planes, with half the switches in the system allocated to each plane. I/O traffic is restricted to one plane, while MPI traffic can use both planes.
| Compiler Invocation | |
|---|---|
| C benchmarks: | icc |
| C++ benchmarks (126.lammps): | icpc |
| Fortran benchmarks: | ifort |
| Benchmarks using both Fortran and C: | icc ifort |

| Portability Flags | |
|---|---|
| 121.pop2: | -DSPEC_MPI_CASE_FLAG |
| 127.wrf2: | -DSPEC_MPI_CASE_FLAG -DSPEC_MPI_LINUX |
| 130.socorro: | -assume nostd_intent_in |