                      SPEC CPU(R)2017 Integer Speed Result
                         Hewlett Packard Enterprise
              ProLiant DL385 Gen10 Plus (2.35 GHz, AMD EPYC 7452)

CPU2017 License: 3                               Test date:             Mar-2020
Test sponsor:    HPE                             Hardware availability: Dec-2019
Tested by:       HPE                             Software availability: Aug-2019

                                  Base                          Peak
Benchmarks        Threads  Run Time   Ratio      Threads  Run Time   Ratio
---------------   -------  --------  --------    -------  --------  --------
600.perlbench_s        64       380      4.68 S        1       355      5.00 S
600.perlbench_s        64       378      4.69 *        1       370      4.80 *
600.perlbench_s        64       376      4.72 S        1       372      4.78 S
602.gcc_s              64       418      9.52 S        1       425      9.38 S
602.gcc_s              64       417      9.55 S        1       423      9.42 *
602.gcc_s              64       418      9.52 *        1       418      9.52 S
605.mcf_s              64       323      14.6 *        1       304      15.5 S
605.mcf_s              64       323      14.6 S        1       304      15.5 *
605.mcf_s              64       323      14.6 S        1       303      15.6 S
620.omnetpp_s          64       331      4.93 *        1       327      4.98 S
620.omnetpp_s          64       324      5.03 S        1       323      5.04 S
620.omnetpp_s          64       331      4.92 S        1       325      5.03 *
623.xalancbmk_s        64       156      9.10 *        1       146      9.72 S
623.xalancbmk_s        64       158      8.94 S        1       144      9.82 *
623.xalancbmk_s        64       155      9.14 S        1       144      9.83 S
625.x264_s             64       143      12.4 *        1       141      12.5 S
625.x264_s             64       143      12.4 S        1       141      12.5 *
625.x264_s             64       143      12.3 S        1       139      12.7 S
631.deepsjeng_s        64       297      4.82 S        1       296      4.84 S
631.deepsjeng_s        64       300      4.77 S        1       295      4.85 *
631.deepsjeng_s        64       298      4.82 *        1       291      4.93 S
641.leela_s            64       416      4.10 S        1       428      3.99 *
641.leela_s            64       417      4.09 S        1       428      3.98 S
641.leela_s            64       416      4.10 *        1       425      4.01 S
648.exchange2_s        64       183      16.1 *        1       184      15.9 *
648.exchange2_s        64       183      16.1 S        1       185      15.9 S
648.exchange2_s        64       182      16.1 S        1       181      16.2 S
657.xz_s               64       300      20.6 S       64       301      20.5 S
657.xz_s               64       301      20.5 S       64       300      20.6 *
657.xz_s               64       300      20.6 *       64       298      20.7 S
=================================================================================
600.perlbench_s        64       378      4.69 *        1       370      4.80 *
602.gcc_s              64       418      9.52 *        1       423      9.42 *
605.mcf_s              64       323      14.6 *        1       304      15.5 *
620.omnetpp_s          64       331      4.93 *        1       325      5.03 *
623.xalancbmk_s        64       156      9.10 *        1       144      9.82 *
625.x264_s             64       143      12.4 *        1       141      12.5 *
631.deepsjeng_s        64       298      4.82 *        1       295      4.85 *
641.leela_s            64       416      4.10 *        1       428      3.99 *
648.exchange2_s        64       183      16.1 *        1       184      15.9 *
657.xz_s               64       300      20.6 *       64       300      20.6 *
 SPECspeed(R)2017_int_base               8.66
 SPECspeed(R)2017_int_peak                                               8.79

HARDWARE
--------
CPU Name:     AMD EPYC 7452
Max MHz:      3350
Nominal:      2350
Enabled:      64 cores, 2 chips
Orderable:    1, 2 chips
Cache L1:     32 KB I + 32 KB D on chip per core
      L2:     512 KB I+D on chip per core
      L3:     128 MB I+D on chip per chip, 16 MB shared / 4 cores
Other:        None
Memory:       1 TB (16 x 64 GB 2Rx4 PC4-3200AA-R)
Storage:      1 x 800 GB SAS SSD, RAID 0
Other:        None

SOFTWARE
--------
OS:                SUSE Linux Enterprise Server 15 (x86_64) SP1
                   Kernel 4.12.14-195-default
Compiler:          C/C++/Fortran: Version 2.0.0 of AOCC
Parallel:          Yes
Firmware:          HPE BIOS Version A42 12/12/2019 released Dec-2019
File System:       btrfs
System State:      Run level 3 (multi-user)
Base Pointers:     64-bit
Peak Pointers:     32/64-bit
Other:             jemalloc: jemalloc memory allocator library v5.2.0
Power Management:  BIOS set to prefer performance at the cost of additional
                   power usage

Compiler Notes
--------------
The AMD64 AOCC Compiler Suite is available at
http://developer.amd.com/amd-aocc/

Submit Notes
------------
The config file option 'submit' was used.
'numactl' was used to bind copies to the cores.
See the configuration file for details.
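For illustration only (the exact line used for this result is recorded in the
configuration file referenced above), a 'submit' entry of this general shape
binds each copy with numactl; the numactl options shown are placeholders:

   # Hypothetical sketch of a CPU2017 config 'submit' option
   submit = numactl --localalloc --physcpubind=$SPECCOPYNUM -- $command

$command and $SPECCOPYNUM are substituted by the SPEC tools at run time.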
Operating System Notes
----------------------
'ulimit -s unlimited' was used to set environment stack size
'ulimit -l 2097152' was used to set environment locked pages in memory limit
runcpu command invoked through numactl i.e.:
   numactl --interleave=all runcpu
Set dirty_ratio=8 to limit dirty cache to 8% of memory
Set swappiness=1 to swap only if necessary
Set zone_reclaim_mode=1 to free local node memory and avoid remote memory
sync then drop_caches=3 to reset caches before invoking runcpu
dirty_ratio, swappiness, zone_reclaim_mode and drop_caches were all set
using privileged echo (e.g. echo 1 > /proc/sys/vm/swappiness).
Transparent huge pages set to 'always' for this run (OS default)
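Consolidated for illustration, the settings described above correspond to the
following privileged commands, issued before invoking runcpu:

   ulimit -s unlimited                        # environment stack size
   ulimit -l 2097152                          # locked pages in memory limit
   echo 8 > /proc/sys/vm/dirty_ratio          # limit dirty cache to 8% of memory
   echo 1 > /proc/sys/vm/swappiness           # swap only if necessary
   echo 1 > /proc/sys/vm/zone_reclaim_mode    # free local node memory first
   sync; echo 3 > /proc/sys/vm/drop_caches    # reset caches before the run
   numactl --interleave=all runcpu            # invoke runcpu through numactl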
Environment Variables Notes
---------------------------
Environment variables set by runcpu before the start of the run:
GOMP_CPU_AFFINITY = "0-63"
LD_LIBRARY_PATH =
   "/home/cpu2017-bbn/amd_speed_aocc200_rome_C_lib/64;/home/cpu2017-bbn/amd_speed_aocc200_rome_C_lib/32:"
MALLOC_CONF = "retain:true"
OMP_DYNAMIC = "false"
OMP_SCHEDULE = "static"
OMP_STACKSIZE = "128M"
OMP_THREAD_LIMIT = "64"

Environment variables set by runcpu during the 600.perlbench_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 602.gcc_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 605.mcf_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 620.omnetpp_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 623.xalancbmk_s peak run:
GOMP_CPU_AFFINITY = "0"
OMP_STACKSIZE = "128M"
Environment variables set by runcpu during the 625.x264_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 631.deepsjeng_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 641.leela_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 648.exchange2_s peak run:
GOMP_CPU_AFFINITY = "0"
Environment variables set by runcpu during the 657.xz_s peak run:
GOMP_CPU_AFFINITY = "0-63"

General Notes
-------------
Binaries were compiled on a system with 2x AMD EPYC 7601 CPU + 512GB Memory
using Fedora 26

NA: The test sponsor attests, as of date of publication, that CVE-2017-5754
(Meltdown) is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753
(Spectre variant 1) is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715
(Spectre variant 2) is mitigated in the system as tested and documented.

jemalloc: configured and built with GCC v9.1.0 in Ubuntu 19.04 with
-O3 -znver2 -flto
jemalloc 5.2.0 is available here:
https://github.com/jemalloc/jemalloc/releases/download/5.2.0/jemalloc-5.2.0.tar.bz2

Platform Notes
--------------
BIOS Configuration:
   Thermal Configuration set to Maximum Cooling
   SMT Mode set to Disabled
   Determinism Control set to Manual
   Performance Determinism set to Power Deterministic
   Minimum Processor Idle Power Core C-State set to C6 State
   Memory Patrol Scrubbing set to Disabled
   Workload Profile set to General Peak Frequency Compute
   NUMA memory domains per socket set to Four memory domains per socket
   C-State Efficiency Mode set to Disabled

Sysinfo program /home/cpu2017-bbn/bin/sysinfo
Rev: r6365 of 2019-08-21 295195f888a3d7edb1e6e46a485a0011
running on linux-30t0 Thu Feb 14 19:52:17 2019

SUT (System Under Test) info as seen by some common utilities.
For more information on this section, see
https://www.spec.org/cpu2017/Docs/config.html#sysinfo

From /proc/cpuinfo
   model name : AMD EPYC 7452 32-Core Processor
   2 "physical id"s (chips)
   64 "processors"
   cores, siblings (Caution: counting these is hw and system dependent. The
   following excerpts from /proc/cpuinfo might not be reliable.  Use with caution.)
      cpu cores : 32
      siblings  : 32
      physical 0: cores 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
                        21 22 23 24 25 26 27 28 29 30 31
      physical 1: cores 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
                        21 22 23 24 25 26 27 28 29 30 31

From lscpu:
   Architecture:         x86_64
   CPU op-mode(s):       32-bit, 64-bit
   Byte Order:           Little Endian
   Address sizes:        48 bits physical, 48 bits virtual
   CPU(s):               64
   On-line CPU(s) list:  0-63
   Thread(s) per core:   1
   Core(s) per socket:   32
   Socket(s):            2
   NUMA node(s):         8
   Vendor ID:            AuthenticAMD
   CPU family:           23
   Model:                49
   Model name:           AMD EPYC 7452 32-Core Processor
   Stepping:             0
   CPU MHz:              2350.000
   CPU max MHz:          2350.0000
   CPU min MHz:          1500.0000
   BogoMIPS:             4690.83
   Virtualization:       AMD-V
   L1d cache:            32K
   L1i cache:            32K
   L2 cache:             512K
   L3 cache:             16384K
   NUMA node0 CPU(s):    0-7
   NUMA node1 CPU(s):    8-15
   NUMA node2 CPU(s):    16-23
   NUMA node3 CPU(s):    24-31
   NUMA node4 CPU(s):    32-39
   NUMA node5 CPU(s):    40-47
   NUMA node6 CPU(s):    48-55
   NUMA node7 CPU(s):    56-63
   Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
   mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
   pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid
   extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2
   movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic
   cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce
   topoext perfctr_core perfctr_nb bpext perfctr_l2 mwaitx cpb cat_l3 cdp_l3
   hw_pstate ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm
   rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1
   xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf
   xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean
   flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload
   vgif umip rdpid overflow_recov succor smca

/proc/cpuinfo cache data
   cache size : 512 KB

From numactl --hardware
WARNING: a numactl 'node' might or might not correspond to a physical chip.
  available: 8 nodes (0-7)
  node 0 cpus: 0 1 2 3 4 5 6 7
  node 0 size: 128711 MB
  node 0 free: 128469 MB
  node 1 cpus: 8 9 10 11 12 13 14 15
  node 1 size: 129021 MB
  node 1 free: 128684 MB
  node 2 cpus: 16 17 18 19 20 21 22 23
  node 2 size: 128992 MB
  node 2 free: 128831 MB
  node 3 cpus: 24 25 26 27 28 29 30 31
  node 3 size: 129009 MB
  node 3 free: 128820 MB
  node 4 cpus: 32 33 34 35 36 37 38 39
  node 4 size: 129021 MB
  node 4 free: 128884 MB
  node 5 cpus: 40 41 42 43 44 45 46 47
  node 5 size: 129021 MB
  node 5 free: 128895 MB
  node 6 cpus: 48 49 50 51 52 53 54 55
  node 6 size: 129021 MB
  node 6 free: 128898 MB
  node 7 cpus: 56 57 58 59 60 61 62 63
  node 7 size: 129020 MB
  node 7 free: 128894 MB
  node distances:
  node   0   1   2   3   4   5   6   7
    0:  10  12  12  12  32  32  32  32
    1:  12  10  12  12  32  32  32  32
    2:  12  12  10  12  32  32  32  32
    3:  12  12  12  10  32  32  32  32
    4:  32  32  32  32  10  12  12  12
    5:  32  32  32  32  12  10  12  12
    6:  32  32  32  32  12  12  10  12
    7:  32  32  32  32  12  12  12  10

From /proc/meminfo
   MemTotal:        1056585288 kB
   HugePages_Total:          0
   Hugepagesize:          2048 kB

From /etc/*release* /etc/*version*
   os-release:
      NAME="SLES"
      VERSION="15-SP1"
      VERSION_ID="15.1"
      PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"
      ID="sles"
      ID_LIKE="suse"
      ANSI_COLOR="0;32"
      CPE_NAME="cpe:/o:suse:sles:15:sp1"

uname -a:
   Linux linux-30t0 4.12.14-195-default #1 SMP Tue May 7 10:55:11 UTC 2019
   (8fba516) x86_64 x86_64 x86_64 GNU/Linux

Kernel self-reported vulnerability status:
   CVE-2018-3620 (L1 Terminal Fault):         Not affected
   Microarchitectural Data Sampling:          Not affected
   CVE-2017-5754 (Meltdown):                  Not affected
   CVE-2018-3639 (Speculative Store Bypass):  Mitigation: Speculative Store
                                              Bypass disabled via prctl and
                                              seccomp
   CVE-2017-5753 (Spectre variant 1):         Mitigation: __user pointer
                                              sanitization
   CVE-2017-5715 (Spectre variant 2):         Mitigation: Full AMD retpoline,
                                              IBPB: conditional, IBRS_FW,
                                              STIBP: disabled, RSB filling

run-level 3 Feb 14 19:51

SPEC is set to: /home/cpu2017-bbn
   Filesystem  Type   Size  Used  Avail  Use%  Mounted on
   /dev/sdc2   btrfs  743G   26G   717G    4%  /home

From /sys/devices/virtual/dmi/id
   BIOS:            HPE A42 12/12/2019
   Vendor:          HPE
   Product:         ProLiant DL385 Gen10 Plus
   Product Family:  ProLiant
   Serial:          CN79340HC5

Additional information from dmidecode follows.  WARNING: Use caution when you
interpret this section. The 'dmidecode' program reads system data which is
"intended to allow hardware to be accurately determined", but the intent may
not be met, as there are frequent changes to hardware, firmware, and the
"DMTF SMBIOS" standard.
   Memory:
      16x Micron 36ASF8G72PZ-3G2B2 64 GB 2 rank 3200
      16x UNKNOWN NOT AVAILABLE

(End of data from sysinfo program)

Compiler Version Notes
----------------------
==============================================================================
 C        | 600.perlbench_s(base, peak) 602.gcc_s(base, peak) 605.mcf_s(base,
          | peak) 625.x264_s(base, peak) 657.xz_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------
==============================================================================
 C++      | 623.xalancbmk_s(peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: i386-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------
==============================================================================
 C++      | 620.omnetpp_s(base, peak) 623.xalancbmk_s(base)
          | 631.deepsjeng_s(base, peak) 641.leela_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------
==============================================================================
 Fortran  | 648.exchange2_s(base, peak)
------------------------------------------------------------------------------
AOCC.LLVM.2.0.0.B191.2019_07_19 clang version 8.0.0 (CLANG: Jenkins
AOCC_2_0_0-Build#191) (based on LLVM AOCC.LLVM.2.0.0.B191.2019_07_19)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /sppo/dev/compilers/aocc-compiler-2.0.0/bin
------------------------------------------------------------------------------
Base Compiler Invocation
------------------------
C benchmarks:
   clang

C++ benchmarks:
   clang++

Fortran benchmarks:
   flang

Base Portability Flags
----------------------
600.perlbench_s:  -DSPEC_LINUX_X64 -DSPEC_LP64
602.gcc_s:        -DSPEC_LP64
605.mcf_s:        -DSPEC_LP64
620.omnetpp_s:    -DSPEC_LP64
623.xalancbmk_s:  -DSPEC_LINUX -DSPEC_LP64
625.x264_s:       -DSPEC_LP64
631.deepsjeng_s:  -DSPEC_LP64
641.leela_s:      -DSPEC_LP64
648.exchange2_s:  -DSPEC_LP64
657.xz_s:         -DSPEC_LP64

Base Optimization Flags
-----------------------
C benchmarks:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -O3 -ffast-math -march=znver2
   -fstruct-layout=3 -mllvm -unroll-threshold=50 -fremap-arrays
   -mllvm -function-specialize -mllvm -enable-gvn-hoist
   -mllvm -reduce-array-computations=3 -mllvm -global-vectorize-slp
   -mllvm -vector-library=LIBMVEC -mllvm -inline-threshold=1000
   -flv-function-specialization -z muldefs -DSPEC_OPENMP -fopenmp
   -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -lmvec -lamdlibm
   -ljemalloc -lflang

C++ benchmarks:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Wl,-mllvm -Wl,-suppress-fmas -O3
   -ffast-math -march=znver2 -mllvm -loop-unswitch-threshold=200000
   -mllvm -vector-library=LIBMVEC -mllvm -unroll-threshold=100
   -flv-function-specialization -mllvm -enable-partial-unswitch -z muldefs
   -DSPEC_OPENMP -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl
   -lmvec -lamdlibm -ljemalloc -lflang

Fortran benchmarks:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -ffast-math -Wl,-mllvm
   -Wl,-inline-recursion=4 -Wl,-mllvm -Wl,-lsr-in-nested-loop -Wl,-mllvm
   -Wl,-enable-iv-split -O3 -march=znver2 -funroll-loops -Mrecursive
   -mllvm -vector-library=LIBMVEC -z muldefs -mllvm -disable-indvar-simplify
   -mllvm -unroll-aggressive -mllvm -unroll-threshold=150 -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -lmvec
   -lamdlibm -ljemalloc -lflang

Base Other Flags
----------------
C benchmarks:
   -Wno-return-type

C++ benchmarks:
   -Wno-return-type

Fortran benchmarks:
   -Wno-return-type
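For context only: flags of this kind are supplied to the compilers through the
configuration file's optimization variables. The fragment below is a
hypothetical, abbreviated sketch and is not the configuration used for this
result:

   # Illustrative CPU2017 config fragment (label and flag subset are examples)
   default=base:
      CC          = clang
      CXX         = clang++
      FC          = flang
      COPTIMIZE   = -O3 -ffast-math -march=znver2 -flto
      CXXOPTIMIZE = -O3 -ffast-math -march=znver2 -flto
      FOPTIMIZE   = -O3 -march=znver2 -funroll-loops -Mrecursive -flto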
Peak Compiler Invocation
------------------------
C benchmarks:
   clang

C++ benchmarks:
   clang++

Fortran benchmarks:
   flang

Peak Portability Flags
----------------------
600.perlbench_s:  -DSPEC_LINUX_X64 -DSPEC_LP64
602.gcc_s:        -DSPEC_LP64
605.mcf_s:        -DSPEC_LP64
620.omnetpp_s:    -DSPEC_LP64
623.xalancbmk_s:  -DSPEC_LINUX -D_FILE_OFFSET_BITS=64
625.x264_s:       -DSPEC_LP64
631.deepsjeng_s:  -DSPEC_LP64
641.leela_s:      -DSPEC_LP64
648.exchange2_s:  -DSPEC_LP64
657.xz_s:         -DSPEC_LP64

Peak Optimization Flags
-----------------------
C benchmarks:

600.perlbench_s:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -fprofile-instr-generate(pass 1)
   -fprofile-instr-use(pass 2) -Ofast -march=znver2 -mno-sse4a
   -fstruct-layout=5 -mllvm -vectorize-memory-aggressively
   -mllvm -function-specialize -mllvm -enable-gvn-hoist
   -mllvm -unroll-threshold=50 -fremap-arrays -mllvm -vector-library=LIBMVEC
   -mllvm -reduce-array-computations=3 -mllvm -global-vectorize-slp
   -mllvm -inline-threshold=1000 -flv-function-specialization -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -lmvec -lamdlibm -fopenmp=libomp -lomp -lpthread
   -ldl -ljemalloc -lflang

602.gcc_s:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Ofast -march=znver2 -mno-sse4a
   -fstruct-layout=5 -mllvm -vectorize-memory-aggressively
   -mllvm -function-specialize -mllvm -enable-gvn-hoist
   -mllvm -unroll-threshold=50 -fremap-arrays -mllvm -vector-library=LIBMVEC
   -mllvm -reduce-array-computations=3 -mllvm -global-vectorize-slp
   -mllvm -inline-threshold=1000 -flv-function-specialization -z muldefs
   -DSPEC_OPENMP -fopenmp -DUSE_OPENMP -fgnu89-inline -fopenmp=libomp -lomp
   -lpthread -ldl -ljemalloc

605.mcf_s:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Ofast -march=znver2 -mno-sse4a
   -fstruct-layout=5 -mllvm -vectorize-memory-aggressively
   -mllvm -function-specialize -mllvm -enable-gvn-hoist
   -mllvm -unroll-threshold=50 -fremap-arrays -mllvm -vector-library=LIBMVEC
   -mllvm -reduce-array-computations=3 -mllvm -global-vectorize-slp
   -mllvm -inline-threshold=1000 -flv-function-specialization -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -lmvec -lamdlibm -fopenmp=libomp -lomp -lpthread
   -ldl -ljemalloc -lflang

625.x264_s:
   Same as 600.perlbench_s

657.xz_s:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Ofast -march=znver2 -mno-sse4a
   -fstruct-layout=5 -mllvm -vectorize-memory-aggressively
   -mllvm -function-specialize -mllvm -enable-gvn-hoist
   -mllvm -unroll-threshold=50 -fremap-arrays -mllvm -vector-library=LIBMVEC
   -mllvm -reduce-array-computations=3 -mllvm -global-vectorize-slp
   -mllvm -inline-threshold=1000 -flv-function-specialization -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -lmvec
   -lamdlibm -ljemalloc -lflang

C++ benchmarks:

620.omnetpp_s:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Ofast -march=znver2
   -flv-function-specialization -mllvm -unroll-threshold=100
   -mllvm -enable-partial-unswitch -mllvm -loop-unswitch-threshold=200000
   -mllvm -vector-library=LIBMVEC -mllvm -inline-threshold=1000 -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -lmvec
   -lamdlibm -ljemalloc -lflang

623.xalancbmk_s:
   -m32 -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm
   -Wl,-region-vectorize -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -Ofast -march=znver2
   -flv-function-specialization -mllvm -unroll-threshold=100
   -mllvm -enable-partial-unswitch -mllvm -loop-unswitch-threshold=200000
   -mllvm -vector-library=LIBMVEC -mllvm -inline-threshold=1000 -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -ljemalloc

631.deepsjeng_s:
   Same as 620.omnetpp_s

641.leela_s:
   Same as 620.omnetpp_s

Fortran benchmarks:
   -flto -Wl,-mllvm -Wl,-function-specialize -Wl,-mllvm -Wl,-region-vectorize
   -Wl,-mllvm -Wl,-vector-library=LIBMVEC -Wl,-mllvm
   -Wl,-reduce-array-computations=3 -ffast-math -Wl,-mllvm
   -Wl,-inline-recursion=4 -Wl,-mllvm -Wl,-lsr-in-nested-loop -Wl,-mllvm
   -Wl,-enable-iv-split -O3 -march=znver2 -funroll-loops -Mrecursive
   -mllvm -vector-library=LIBMVEC -mllvm -disable-indvar-simplify
   -mllvm -unroll-aggressive -mllvm -unroll-threshold=150 -DSPEC_OPENMP
   -fopenmp -DUSE_OPENMP -fopenmp=libomp -lomp -lpthread -ldl -lmvec
   -lamdlibm -ljemalloc -lflang
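The "(pass 1)" and "(pass 2)" annotations on the 600.perlbench_s and
625.x264_s flags denote the two-pass profile-guided build that the SPEC tools
perform automatically. A hand-run equivalent with clang would look roughly as
follows; the file and binary names are illustrative only:

   # pass 1: build an instrumented binary
   clang -Ofast -march=znver2 -fprofile-instr-generate -o benchmark src/*.c
   # training run writes a raw profile, then merge it
   LLVM_PROFILE_FILE=train.profraw ./benchmark <training input>
   llvm-profdata merge -output=train.profdata train.profraw
   # pass 2: rebuild using the collected profile
   clang -Ofast -march=znver2 -fprofile-instr-use=train.profdata -o benchmark src/*.c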
Peak Other Flags
----------------
C benchmarks:
   -Wno-return-type

C++ benchmarks (except as noted below):
   -Wno-return-type

623.xalancbmk_s:
   -Wno-return-type
   -L/sppo/dev/cpu2017/v110/amd_speed_aocc200_rome_C_lib/32

Fortran benchmarks:
   -Wno-return-type

The flags files that were used to format this result can be browsed at
   http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revH.html
   http://www.spec.org/cpu2017/flags/aocc200-flags-C1-HPE.html

You can also download the XML flags sources by saving the following links:
   http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-V1.2-EPYC-revH.xml
   http://www.spec.org/cpu2017/flags/aocc200-flags-C1-HPE.xml

SPEC CPU and SPECspeed are registered trademarks of the Standard Performance
Evaluation Corporation. All other brand and product names appearing in this
result are trademarks or registered trademarks of their respective holders.

--------------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact info@spec.org.
Copyright 2017-2020 Standard Performance Evaluation Corporation

Tested with SPEC CPU(R)2017 v1.1.0 on 2019-02-14 09:22:17-0500.
Report generated on 2020-04-28 15:29:06 by CPU2017 text formatter v6255.
Originally published on 2020-04-28.