NEC (Test Sponsor: Helmholtz-Zentrum Dresden - Rossendorf) Hemera: GIGABYTE H262-Z61 (AMD EPYC 7702)

SPEChpc 2021_tny_base = 4.28
SPEChpc 2021_tny_peak = Not Run

| hpc2021 License: | 065A | Test Date: | Sep-2021 |
|---|---|---|---|
| Test Sponsor: | Helmholtz-Zentrum Dresden - Rossendorf | Hardware Availability: | Aug-2019 |
| Tested by: | Helmholtz-Zentrum Dresden - Rossendorf | Software Availability: | Jul-2021 |
Benchmark result graphs are available in the PDF report.
Base results (peak was not run); each benchmark was run twice. Results appear in the order in which they were run. Bold underlined text indicates a median measurement.

| Benchmark | Model | Ranks | Thrds/Rnk | Seconds (run 1) | Ratio (run 1) | Seconds (run 2) | Ratio (run 2) |
|---|---|---|---|---|---|---|---|
| 505.lbm_t | OMP | 32 | 4 | 366 | 6.15 | 366 | 6.15 |
| 513.soma_t | OMP | 32 | 4 | 622 | 5.94 | 627 | 5.90 |
| 518.tealeaf_t | OMP | 32 | 4 | 677 | 2.44 | 680 | 2.43 |
| 519.clvleaf_t | OMP | 32 | 4 | 613 | 2.69 | 613 | 2.69 |
| 521.miniswp_t | OMP | 32 | 4 | 326 | 4.90 | 326 | 4.91 |
| 528.pot3d_t | OMP | 32 | 4 | 875 | 2.43 | 874 | 2.43 |
| 532.sph_exa_t | OMP | 32 | 4 | 336 | 5.80 | 335 | 5.81 |
| 534.hpgmgfv_t | OMP | 32 | 4 | 350 | 3.35 | 351 | 3.35 |
| 535.weather_t | OMP | 32 | 4 | 370 | 8.72 | 369 | 8.73 |

SPEChpc 2021_tny_base = 4.28
SPEChpc 2021_tny_peak = Not Run
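The overall SPEChpc 2021_tny_base score is the geometric mean of the nine per-benchmark base ratios (taking the selected median run of each). As a rough cross-check against the table above, and since the two runs of every benchmark agree to within the third significant digit, either column rounds to the same score:

    (6.15 × 5.94 × 2.44 × 2.69 × 4.90 × 2.43 × 5.80 × 3.35 × 8.72)^(1/9) ≈ 4.28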
| Hardware Summary | |
|---|---|
| Type of System: | Homogeneous Cluster | 
| Compute Node: | Compute Node | 
| Interconnect: | InfiniBand (EDR) | 
| Compute Nodes Used: | 1 | 
| Total Chips: | 2 | 
| Total Cores: | 64 | 
| Total Threads: | 64 | 
| Total Memory: | 512 GB | 
| Software Summary | |
|---|---|
| Compiler: | C/C++/Fortran: Version 11.2 of GNU Compilers | 
| MPI Library: | OpenMPI Version 4.0.4 | 
| Other MPI Info: | None | 
| Other Software: | None | 
| Base Parallel Model: | OMP | 
| Base Ranks Run: | 32 | 
| Base Threads Run: | 4 | 
| Peak Parallel Models: | Not Run | 
| Hardware | |
|---|---|
| Number of nodes: | 1 | 
| Uses of the node: | compute | 
| Vendor: | Gigabyte | 
| Model: | H262-Z61 | 
| CPU Name: | AMD EPYC 7702 | 
| CPU(s) orderable: | 1 or 2 chips per node | 
| Chips enabled: | 2 | 
| Cores enabled: | 64 | 
| Cores per chip: | 64 | 
| Threads per core: | 1 | 
| CPU Characteristics: | Max Boost Clock up to 3.35 GHz | 
| CPU MHz: | 2000 | 
| Primary Cache: | 32 KB I + 32 KB D on chip per core | 
| Secondary Cache: | 512 KB I+D on chip per core | 
| L3 Cache: | 256 MB I+D on chip per chip (16 MB shared per 4 cores) | 
| Other Cache: | None | 
| Memory: | 512 GB (16 x 32GB 2Rx4 PC4-3200AA-RB2-12-RB0) | 
| Disk Subsystem: | 1 x 500 GB SSD | 
| Other Hardware: | None | 
| Accel Count: | 0 | 
| Adapter: | Mellanox MT4119 | 
| Number of Adapters: | 2 | 
| Slot Type: | PCIe 4.0 x16 | 
| Data Rate: | 100 Gb/s | 
| Ports Used: | 2 | 
| Interconnect Type: | EDR InfiniBand | 
| Software | |
|---|---|
| Adapter: | Mellanox MT4119 | 
| Adapter Firmware: | 16.26.1040 | 
| Operating System: | CentOS Linux release 7.9.2009 (Core) 3.10.0-1160.6.1.el7.x86_64 | 
| Local File System: | xfs | 
| Shared File System: | GPFS Version 5.0.5.0; 6 NSD (vendor: NEC); 5 building blocks (vendor: NetApp): 2x (240 x 8 TB HDD), 1x (180 x 12 TB HDD), 1x (240 x 16 TB HDD), 1x (120 x 16 TB HDD) | 
| System State: | Multi-user, run level 3 | 
| Other Software: | None | 
| Hardware | |
|---|---|
| Vendor: | Mellanox Technologies | 
| Model: | Mellanox SB7790 | 
| Switch Model: | 36 x EDR 100 Gb/s | 
| Number of Switches: | 2 | 
| Number of Ports: | 36 | 
| Data Rate: | 100 Gb/s | 
| Topology: | Mesh (blocking factor: 8:1) | 
| Primary Use: | MPI Traffic, GPFS | 
The config file option 'submit' was used.
  MPI startup command:
    mpirun --bind-to socket -np $ranks $command
 
Environment variables set by runhpc before the start of the run:
OMP_PLACES = "{0}:128:1"
OMP_PROC_BIND = "true"
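Both the MPI launch line and the OpenMP variables are normally defined in the hpc2021 config file itself (the report confirms the 'submit' option was used). The actual config file is not reproduced here, so the excerpt below is only a sketch of how these settings are typically expressed with SPEC's submit and preENV_ options:

    # hypothetical hpc2021 config excerpt (not the tester's actual file)
    submit = mpirun --bind-to socket -np $ranks $command
    # preENV_ lines set environment variables before runhpc starts each run
    preENV_OMP_PLACES = {0}:128:1
    preENV_OMP_PROC_BIND = true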
 
==============================================================================
 FC  519.clvleaf_t(base) 528.pot3d_t(base) 535.weather_t(base)
------------------------------------------------------------------------------
GNU Fortran (GCC) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------
==============================================================================
 CXXC 532.sph_exa_t(base)
------------------------------------------------------------------------------
g++ (GCC) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------
==============================================================================
 CC  505.lbm_t(base) 513.soma_t(base) 518.tealeaf_t(base) 521.miniswp_t(base)
      534.hpgmgfv_t(base)
------------------------------------------------------------------------------
gcc (GCC) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
------------------------------------------------------------------------------
| 521.miniswp_t: | -DUSE_KBA -DUSE_ACCELDIR |
|---|---|
| 532.sph_exa_t: | -DSPEC_USE_LT_IN_KERNELS |