CPU2017 Flag Description
Dell Inc. PowerEdge R6625 (AMD EPYC 9684X 96-Core Processor)

Compilers: AMD Optimizing C/C++ Compiler Suite


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks


Peak Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks


Base Portability Flags

600.perlbench_s

602.gcc_s

605.mcf_s

620.omnetpp_s

623.xalancbmk_s

625.x264_s

631.deepsjeng_s

641.leela_s

648.exchange2_s

657.xz_s


Peak Portability Flags

600.perlbench_s

602.gcc_s

605.mcf_s

620.omnetpp_s

623.xalancbmk_s

625.x264_s

631.deepsjeng_s

641.leela_s

648.exchange2_s

657.xz_s


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks


Peak Optimization Flags

C benchmarks

600.perlbench_s

602.gcc_s

605.mcf_s

625.x264_s

657.xz_s

C++ benchmarks

620.omnetpp_s

623.xalancbmk_s

631.deepsjeng_s

641.leela_s

Fortran benchmarks


Base Other Flags

C benchmarks

C++ benchmarks

Fortran benchmarks


Peak Other Flags

C benchmarks

C++ benchmarks

Fortran benchmarks


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

Using numactl to bind processes and memory to cores

For multi-copy runs, or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move the process from one core to another, which can affect performance. To help, SPEC allows the use of a "submit" command, with which users can specify a utility for binding processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies the core(s) to which the process is bound. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies the node(s) on which to place a process's memory. For full details on using numactl, please refer to your Linux documentation ('man numactl').

Note that some older versions of numactl incorrectly interpret application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, we put the command to be run in a shell script and then run the shell script using numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
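
As an illustrative sketch only (the exact binding used for this result is recorded in the accompanying config file, not here), a SPEC config-file submit line using this wrapper approach might look like:

  # Illustrative: bind each benchmark copy to the core matching its copy number.
  # $SPECCOPYNUM is supplied by the SPEC tools; --localalloc keeps memory on the local node.
  submit = echo "$command" > run.sh ; numactl --localalloc --physcpubind=$SPECCOPYNUM bash run.sh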


Shell, Environment, and Other Software Settings

numactl --interleave=all runcpu

numactl --interleave=all runcpu executes the SPEC CPU command runcpu so that memory is consumed across NUMA nodes rather than from a single node. This helps prevent local-node out-of-memory conditions, which can occur when runcpu is executed without interleaving. For full details on using numactl, please refer to your Linux documentation ('man numactl').

Transparent Huge Pages (THP)

THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. It is designed to hide much of the complexity of using huge pages from system administrators and developers. Huge pages increase the memory page size from 4 kilobytes to 2 megabytes, which provides significant performance advantages on systems with highly contended resources and large-memory workloads. If memory utilization is too high, or memory is too badly fragmented for huge pages to be allocated, the kernel falls back to smaller 4k pages. Most recent Linux OS releases have THP enabled by default.

THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled. Possible values:

- always: use transparent huge pages for any eligible memory region
- madvise: use transparent huge pages only for regions marked with madvise(MADV_HUGEPAGE)
- never: do not use transparent huge pages

The SPEC CPU benchmark codes themselves never explicitly request huge pages, as the mechanism for doing so is OS-specific and can change over time. Libraries such as amdalloc, which are used by the benchmarks, may explicitly request huge pages, and use of such libraries can make the "madvise" setting relevant and useful.

When no huge pages are immediately available and one is requested, how the system handles the request for THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag. Possible values:

- always: stall the allocation and directly reclaim/compact memory to assemble a huge page
- defer: wake the background reclaim and compaction threads, and use 4k pages in the meantime
- defer+madvise: behave as "always" for madvise(MADV_HUGEPAGE) regions and as "defer" otherwise
- madvise: stall only for regions marked with madvise(MADV_HUGEPAGE)
- never: never stall; fall back to 4k pages

An application that always requests THP can often benefit from waiting for an allocation until those huge pages can be assembled.
For more information see the Linux transparent hugepage documentation.
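
For reference, the current settings can be inspected and changed from a root shell; the values below are illustrative, not necessarily those used for this result:

  cat /sys/kernel/mm/transparent_hugepage/enabled    # prints e.g. "[always] madvise never"
  cat /sys/kernel/mm/transparent_hugepage/defrag
  echo madvise > /sys/kernel/mm/transparent_hugepage/enabled    # run as root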

ulimit -s <n>

Sets the stack size to n kbytes, or to "unlimited" to allow the stack to grow without limit.

ulimit -l <n>

Sets the maximum size of memory that may be locked into physical memory.
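
For example (illustrative values, not necessarily those used for this result):

  ulimit -s unlimited    # let the stack grow without limit
  ulimit -l 2097152      # allow up to 2 GB (value in kbytes) to be locked in physical memory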

powersave -f (on SuSE)

Makes the powersave daemon set the CPUs to the highest supported frequency.

/etc/init.d/cpuspeed stop (on Red Hat)

Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.

LD_LIBRARY_PATH

An environment variable that indicates the location in the filesystem of bundled libraries to use when running the benchmark binaries.

LIBOMP_NUM_HIDDEN_HELPER_THREADS

The OpenMP "target nowait" construct is supported via hidden helper tasks, which are tasks not bound to any parallel region. A hidden helper team with a configurable number of threads is created when the first hidden helper task is encountered.

The number of threads can be configured via the environment variable LIBOMP_NUM_HIDDEN_HELPER_THREADS. The default is 8. If LIBOMP_NUM_HIDDEN_HELPER_THREADS is 0, the hidden helper task is disabled and support falls back to a regular OpenMP task. The hidden helper task can also be disabled by setting the environment variable LIBOMP_USE_HIDDEN_HELPER_TASK=OFF.
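
For example (illustrative settings):

  export LIBOMP_NUM_HIDDEN_HELPER_THREADS=8    # default hidden helper team size
  export LIBOMP_NUM_HIDDEN_HELPER_THREADS=0    # disable the hidden helper task entirely
  export LIBOMP_USE_HIDDEN_HELPER_TASK=OFF     # alternative way to disable it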

sysctl -w vm.dirty_ratio=8

Limits dirty cache to 8% of memory.

sysctl -w vm.swappiness=1

Limits swap usage to the minimum necessary.

sysctl -w vm.zone_reclaim_mode=1

Reclaims memory on the local node first to avoid allocating from remote nodes.

kernel/numa_balancing

This OS setting controls automatic NUMA balancing on memory mapping and process placement. NUMA balancing incurs overhead for no benefit on workloads that are already bound to NUMA nodes.

Possible settings:

- 0: disables automatic NUMA balancing
- 1: enables automatic NUMA balancing

For more information see the numa_balancing entry in the Linux sysctl documentation.
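
For example, automatic NUMA balancing can be disabled at run-time as follows (run as root; illustrative):

  sysctl -w kernel.numa_balancing=0
  echo 0 > /proc/sys/kernel/numa_balancing    # equivalent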

kernel/randomize_va_space (ASLR)

This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used.

Possible settings:

- 0: turn process address space randomization off
- 1: randomize the addresses of the mmap base, stack, and VDSO page
- 2: additionally randomize the heap (brk) base; this is the usual kernel default

Disabling ASLR can make process execution more deterministic and runtimes more consistent. For more information see the randomize_va_space entry in the Linux sysctl documentation.
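
For example (run as root; illustrative):

  sysctl -w kernel.randomize_va_space=0    # disable ASLR
  sysctl -w kernel.randomize_va_space=2    # restore the usual kernel default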

vm/drop_caches

The following two commands are equivalent (both must be run as root): 'echo 3 > /proc/sys/vm/drop_caches' and 'sysctl -w vm.drop_caches=3'. They are used to free up the filesystem page cache, dentries, and inodes.

Possible settings:

- 1: free the page cache
- 2: free dentries and inodes
- 3: free the page cache, dentries, and inodes

MALLOC_CONF

The amdalloc library is a variant of the jemalloc library. The amdalloc library has tunable parameters, many of which may be changed at run-time via several mechanisms, one of which is the MALLOC_CONF environment variable. Other methods, as well as the order in which they're referenced, are detailed in the jemalloc documentation's TUNING section.

The options that can be tuned at run-time are everything in the jemalloc documentation's MALLCTL NAMESPACE section that begins with "opt.".

The options that may be encountered in SPEC CPU 2017 results are detailed in the flags files linked at the end of this report.
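
As an illustrative sketch only (these option values are examples of documented jemalloc "opt." tunables, not the settings used for this result, and the binary name is hypothetical):

  export MALLOC_CONF="thp:always,retain:true"    # example "opt.thp" and "opt.retain" values
  ./benchmark_binary                             # amdalloc reads MALLOC_CONF at startup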

PGHPF_ZMEM

An environment variable used to initialize the allocated memory. Setting PGHPF_ZMEM to "Yes" has the effect of initializing all allocated memory to zero.
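
For example:

  export PGHPF_ZMEM=Yes    # zero-initialize all allocated memory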

GOMP_CPU_AFFINITY

This environment variable is used to set the thread affinity for threads spawned by OpenMP.
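
For example (an illustrative binding, not necessarily the one used here):

  export GOMP_CPU_AFFINITY="0-95"      # bind successive threads to cores 0 through 95
  export GOMP_CPU_AFFINITY="0 2 4 6"   # or list specific cores in binding order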

OMP_DYNAMIC

This environment variable is defined as part of the OpenMP standard. Setting it to "false" prevents the OpenMP runtime from dynamically adjusting the number of threads to use for parallel execution.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_SCHEDULE

This environment variable is defined as part of the OpenMP standard. Setting it to "static" causes loop iterations to be assigned to threads in round-robin fashion in the order of the thread number.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_STACKSIZE

This environment variable is defined as part of the OpenMP standard and controls the size of the stack for threads created by OpenMP.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.

OMP_THREAD_LIMIT

This environment variable is defined as part of the OpenMP standard and limits the maximum number of OpenMP threads that can be created.

For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
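
Taken together, a typical (illustrative) environment setup for these OpenMP variables might be:

  export OMP_DYNAMIC=false     # keep the thread count fixed
  export OMP_SCHEDULE=static   # round-robin assignment of loop iterations
  export OMP_STACKSIZE=192M    # per-thread stack size (example value)
  export OMP_THREAD_LIMIT=96   # cap the total number of OpenMP threads (example value)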


Operating System Tuning Parameters


drop_caches
sysctl is used to change kernel parameters at run-time; 'sysctl -w vm.drop_caches=3' clears the filesystem caches.

tuned-adm
This command-line utility allows you to switch between user-definable tuning profiles. Several predefined profiles are included, and you can also create your own profile, either by copying and modifying an existing one or by writing a completely new one. The distribution-provided profiles are stored in subdirectories below /usr/lib/tuned, and user-defined profiles in subdirectories below /etc/tuned. If profiles with the same name exist in both places, the user-defined profile takes precedence.

Profile used:
- throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
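
Typical usage (illustrative):

  tuned-adm profile throughput-performance    # switch to the profile used for this result
  tuned-adm active                            # confirm which profile is active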

Firmware / BIOS / Microcode Settings


DRAM Refresh Delay
Default: Minimum

Memory Interleaving
Default: Auto

Memory interleaving is supported if a symmetric memory configuration is installed. When set to Disabled, the system supports Non-Uniform Memory Access (NUMA), i.e. asymmetric, memory configurations. Operating Systems that are NUMA-aware understand the distribution of memory in a particular system and can intelligently allocate memory in an optimal manner. Operating Systems that are not NUMA-aware could allocate memory to a processor that is not local, resulting in a loss of performance. Die and socket interleaving should only be enabled for Operating Systems that are not NUMA-aware.

DIMM Self Healing on Uncorrectable Memory Error
Default: Enabled

Post Package Repair (PPR) on Uncorrectable Memory Error
Disabling this feature may improve memory performance for some workloads.

Logical Processor
Default: Enabled

Each processor core supports up to two logical processors. When set to Enabled, the BIOS reports all logical processors. When set to Disabled, the BIOS reports only one logical processor per core. Generally, a higher logical processor count results in increased performance for most multi-threaded workloads, and the recommendation is to keep this option Enabled. However, there are some floating-point/scientific workloads, including HPC workloads, where disabling this feature may result in higher performance.

Virtualization Technology
Default: Enabled

When set to Enabled, the BIOS will enable processor virtualization features and provide the virtualization support to the Operating System (OS) through the DMAR table. In general, only virtualized environments such as VMware(r) ESX(tm), Microsoft Hyper-V(r), Red Hat(r) KVM, and other virtualized operating systems will take advantage of these features. Disabling this feature is not known to significantly alter the performance or power characteristics of the system, so leaving this option Enabled is advised for most cases.

L1 Stride Prefetcher
Default: Enabled

When set to Enabled, the processor uses the L1 stride prefetcher to fetch additional data for an individual instruction's access pattern, which can be used for performance tuning. Using the recommended setting allows for optimizing overall performance.

NUMA Nodes per Socket
Default: 1

Allows configuration of the memory NUMA domains per socket. The configuration can consist of one whole domain (NPS1), two domains (NPS2), or four domains (NPS4).
In the case of two-socket platforms, an additional NPS profile is available to have whole system memory be mapped as a single NUMA domain (NPS0).

L3 Cache as NUMA Domain
Default: Disabled

This field specifies whether each CCX within the processor is declared as its own NUMA domain.

System Profile
Default: Performance Per Watt (OS)

When set to a mode other than Custom, BIOS will set each option accordingly. When set to Custom, each option setting can be changed.

CPU Power Management
Default: OS DBPM

Allows selection of CPU power management methodology.

C-States
Default: Enabled

C-States allow the processor to enter lower power states when idle.
When set to Enabled (OS Controlled), or to Autonomous if hardware control is supported, the processor can operate in all available power states to save power, at the possible cost of increased memory latency and frequency jitter.

Memory Patrol Scrub
Default: Standard

Patrol Scrubbing searches the memory for errors and repairs correctable errors to prevent the accumulation of memory errors.

PCI ASPM L1 Link Power Management
Default: Enabled

When enabled, PCIe Active State Power Management (ASPM) can reduce overall system power a bit while slightly reducing system performance. NOTE: Some devices may not perform properly (they may hang or cause the system to hang) when ASPM is enabled. For this reason, L1 will only be enabled for validated, qualified cards.

Determinism Slider
Default: Performance Determinism

Controls whether the BIOS enables determinism to control performance.

CPU Interconnect Bus Link Power Management
Default: Enabled

When Enabled, CPU interconnect bus link power management can reduce overall system power a bit while slightly reducing system performance.

Algorithm Performance Boost Disable (ApbDis)
Default: Disabled

Fan Speed Offset
Default: Off

Configuring this option allows additional cooling for the server. If hardware is added (for example, new PCIe cards), it may require additional cooling. A fan speed offset causes fan speeds to increase (by the offset percentage) over the baseline fan speeds calculated by the Thermal Control algorithm.

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/aocc400-flags.html,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-AMD-EPYC-v1.1.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/aocc400-flags.xml,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-AMD-EPYC-v1.1.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2023 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.9.
Report generated on 2023-06-13 15:17:58 by SPEC CPU2017 flags formatter v5178.