Cisco Systems Cisco UCS C245 M8 (AMD EPYC 9754)

SPEChpc 2021_sml_base = 0.691
SPEChpc 2021_sml_peak = Not Run
hpc2021 License: | 9019 | Test Date: | May-2024 |
---|---|---|---|
Test Sponsor: | Cisco Systems | Hardware Availability: | Jun-2024 |
Tested by: | Cisco Systems | Software Availability: | Feb-2024 |
Benchmark result graphs are available in the PDF report.
Base results (Peak: Not Run). Results appear in the order in which they were run; in the original report, bold underlined text indicates the median measurement.

Benchmark | Model | Ranks | Thrds/Rnk | Seconds | Ratio | Seconds | Ratio | Seconds | Ratio |
---|---|---|---|---|---|---|---|---|---|
605.lbm_s | MPI | 128 | 1 | 1607 | 0.965 | 1837 | 0.844 | 1765 | 0.878 |
613.soma_s | MPI | 128 | 1 | 1426 | 1.12 | 1426 | 1.12 | 1427 | 1.12 |
618.tealeaf_s | MPI | 128 | 1 | 6089 | 0.337 | 6086 | 0.337 | 6087 | 0.337 |
619.clvleaf_s | MPI | 128 | 1 | 4567 | 0.361 | 4566 | 0.361 | 4567 | 0.361 |
621.miniswp_s | MPI | 128 | 1 | 794 | 1.39 | 782 | 1.41 | 803 | 1.37 |
628.pot3d_s | MPI | 128 | 1 | 4890 | 0.343 | 4892 | 0.342 | 4890 | 0.343 |
632.sph_exa_s | MPI | 128 | 1 | 1127 | 2.04 | 1127 | 2.04 | 1127 | 2.04 |
634.hpgmgfv_s | MPI | 128 | 1 | 2367 | 0.412 | 2379 | 0.410 | 2376 | 0.410 |
635.weather_s | MPI | 128 | 1 | 3452 | 0.753 | 3449 | 0.754 | 3452 | 0.753 |
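The overall score is the geometric mean of each benchmark's median base ratio. The reported 0.691 can be reproduced from the rows above (a quick sketch; medians are read off the three runs of each row, and SPEC's own tools do the official rounding):

```python
import math

# Median base ratio for each of the nine small-suite benchmarks above
median_ratios = {
    "605.lbm_s": 0.878, "613.soma_s": 1.12, "618.tealeaf_s": 0.337,
    "619.clvleaf_s": 0.361, "621.miniswp_s": 1.39, "628.pot3d_s": 0.343,
    "632.sph_exa_s": 2.04, "634.hpgmgfv_s": 0.410, "635.weather_s": 0.753,
}

# Geometric mean = exp of the arithmetic mean of the logs
geomean = math.exp(sum(map(math.log, median_ratios.values()))
                   / len(median_ratios))
print(round(geomean, 3))  # 0.691, matching SPEChpc 2021_sml_base
```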
Hardware Summary | |
---|---|
Type of System: | Homogeneous |
Compute Node: | Cisco UCS C245 M8 |
Compute Nodes Used: | 1 |
Total Chips: | 1 |
Total Cores: | 128 |
Total Threads: | 256 |
Total Memory: | 768 GB |
Software Summary | |
---|---|
Compiler: | Intel oneAPI DPC++/C++ Compiler 2024.0.2 |
MPI Library: | Intel MPI Library for Linux OS, Build 20231005 |
Other MPI Info: | None |
Other Software: | None |
Base Parallel Model: | MPI |
Base Ranks Run: | 128 |
Base Threads Run: | 1 |
Peak Parallel Models: | Not Run |
Hardware | |
---|---|
Number of nodes: | 1 |
Uses of the node: | compute |
Vendor: | Cisco Systems |
Model: | Cisco UCS C245 M8 |
CPU Name: | AMD EPYC 9754 |
CPU(s) orderable: | 1, 2 chips |
Chips enabled: | 1 |
Cores enabled: | 128 |
Cores per chip: | 128 |
Threads per core: | 2 |
CPU Characteristics: | Max. Boost Clock up to 3.1 GHz |
CPU MHz: | 2250 |
Primary Cache: | 32 KB I + 32 KB D on chip per core |
Secondary Cache: | 1 MB I+D on chip per core |
L3 Cache: | 256 MB I+D on chip per chip (16 MB shared per 8 cores) |
Other Cache: | None |
Memory: | 768 GB (12 x 64 GB 2Rx4 PC5-5600B-R, running at 4800 MHz) |
Disk Subsystem: | 1 x 960 GB NVMe SSD |
Other Hardware: | None |
Accel Count: | 0 |
Accel Model: | None |
Accel Vendor: | None |
Accel Type: | None |
Accel Connection: | None |
Accel ECC enabled: | None |
Accel Description: | None |
Adapter: | None |
Number of Adapters: | 0 |
Slot Type: | None |
Data Rate: | None |
Ports Used: | 0 |
Interconnect Type: | None |
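As a sanity check, the memory and L3 totals in the table follow from the per-part figures (a quick arithmetic sketch using only numbers stated above):

```python
# Figures taken from the Hardware table above
dimms, gb_per_dimm = 12, 64      # Memory: 12 x 64 GB DIMMs
cores, cores_per_group = 128, 8  # 16 MB of L3 shared per 8-core group
l3_per_group_mb = 16

total_memory_gb = dimms * gb_per_dimm
total_l3_mb = (cores // cores_per_group) * l3_per_group_mb
print(total_memory_gb, total_l3_mb)  # 768 GB memory, 256 MB L3 per chip
```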
Software | |
---|---|
Adapter: | None |
Adapter Driver: | None |
Adapter Firmware: | None |
Operating System: | SUSE Linux Enterprise Server 15 SP5, kernel 5.14.21-150500.53-default |
Local File System: | xfs |
Shared File System: | None |
System State: | Multi-user, run level 3 |
Other Software: | None |
The config file option 'submit' was used:

    mpirun --bind-to core:overload-allowed --oversubscribe --mca topo basic -np $ranks $command

MPI startup command: mpirun was used to start MPI jobs.
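At run time the SPEC harness substitutes the rank count and each benchmark's command line into the 'submit' template. A minimal sketch of that substitution (the template is copied from the note above; 128 matches Base Ranks Run, and the binary name is a hypothetical placeholder):

```python
from string import Template

# 'submit' template from the config-file note above; $ranks and $command
# are filled in per benchmark by the harness.
submit = Template("mpirun --bind-to core:overload-allowed --oversubscribe "
                  "--mca topo basic -np $ranks $command")

# Hypothetical values for illustration only
line = submit.substitute(ranks=128, command="./benchmark_binary")
print(line)
```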
==============================================================================
CXXC 632.sph_exa_s(base)
------------------------------------------------------------------------------
Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 (2024.0.2.20231213)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/intel_tools/compiler/compiler/2024.0/bin/compiler
Configuration file: /home/intel_tools/compiler/compiler/2024.0/bin/compiler/../icpx.cfg
------------------------------------------------------------------------------
==============================================================================
CC 605.lbm_s(base) 613.soma_s(base) 618.tealeaf_s(base) 621.miniswp_s(base) 634.hpgmgfv_s(base)
------------------------------------------------------------------------------
Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 (2024.0.2.20231213)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/intel_tools/compiler/compiler/2024.0/bin/compiler
Configuration file: /home/intel_tools/compiler/compiler/2024.0/bin/compiler/../icx.cfg
------------------------------------------------------------------------------
==============================================================================
FC 619.clvleaf_s(base) 635.weather_s(base)
------------------------------------------------------------------------------
ifx (IFX) 2024.0.2 20231213
Copyright (C) 1985-2023 Intel Corporation. All rights reserved.
------------------------------------------------------------------------------
==============================================================================
FC 628.pot3d_s(base)
------------------------------------------------------------------------------
ifx: command line warning #10157: ignoring option '-W'; argument is of wrong type
ifx (IFX) 2024.0.2 20231213
Copyright (C) 1985-2023 Intel Corporation. All rights reserved.
------------------------------------------------------------------------------
C compiler invocation: | mpiicc -cc=icx |
C++ compiler invocation: | mpiicpc -cxx=icpx |
Fortran compiler invocation: | mpiifort -fc=ifx |
605.lbm_s: | -lstdc++ |
613.soma_s: | -lstdc++ |
618.tealeaf_s: | -lstdc++ |
619.clvleaf_s: | -lstdc++ |
621.miniswp_s: | -lstdc++ |
628.pot3d_s: | -lstdc++ |
632.sph_exa_s: | -lstdc++ |
634.hpgmgfv_s: | -lstdc++ |
635.weather_s: | -lstdc++ |
C base optimization flags: | -Ofast -ipo -mprefer-vector-width=512 -march=common-avx512 -ansi-alias |
C++ base optimization flags: | -Ofast -ipo -mprefer-vector-width=512 -march=common-avx512 -ansi-alias |
Fortran base optimization flags: | -Ofast -ipo -mprefer-vector-width=512 -march=common-avx512 -nostandard-realloc-lhs -align array64byte |