CPU2017 Flag Description
Dell Inc. PowerEdge C6620 (Intel Xeon Platinum 8470)

Copyright © 2016 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using Fortran, C, and C++


Peak Compiler Invocation

C benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using Fortran, C, and C++


Base Portability Flags

603.bwaves_s

607.cactuBSSN_s

619.lbm_s

621.wrf_s

627.cam4_s

628.pop2_s

638.imagick_s

644.nab_s

649.fotonik3d_s

654.roms_s


Peak Portability Flags

603.bwaves_s

607.cactuBSSN_s

619.lbm_s

621.wrf_s

627.cam4_s

628.pop2_s

638.imagick_s

644.nab_s

649.fotonik3d_s

654.roms_s


Base Optimization Flags

C benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C

Benchmarks using Fortran, C, and C++


Peak Optimization Flags

C benchmarks

619.lbm_s

638.imagick_s

644.nab_s

Fortran benchmarks

603.bwaves_s

649.fotonik3d_s

654.roms_s

Benchmarks using both Fortran and C

621.wrf_s

627.cam4_s

628.pop2_s

Benchmarks using Fortran, C, and C++

607.cactuBSSN_s


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

submit= MYMASK=`printf '0x%x' $((1<<$SPECCOPYNUM))`; /usr/bin/taskset $MYMASK $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command, using taskset, is used for Linux64 systems without numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
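As an illustrative sketch of how these pieces fit together (the copy number 3 and the placeholder command ./benchmark are assumptions, not values from the actual config file):
    MYMASK=`printf '0x%x' $((1<<3))`      # 1<<3 = 8, so MYMASK becomes 0x8, a mask with only bit 3 set
    /usr/bin/taskset $MYMASK ./benchmark  # taskset restricts this copy's CPU affinity to logical CPU 3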
submit= numactl --localalloc --physcpubind=$SPECCOPYNUM $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux64 systems with support for numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
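A minimal sketch, assuming $SPECCOPYNUM expands to 5 and a placeholder command ./benchmark:
    # --physcpubind pins the copy to logical CPU 5; --localalloc allocates its memory on that CPU's local NUMA node
    numactl --localalloc --physcpubind=5 ./benchmark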

Shell, Environment, and Other Software Settings

numactl --interleave=all "runspec command"
Launching a process with numactl --interleave=all sets the memory interleave policy so that memory is allocated in a round-robin fashion across nodes. When memory cannot be allocated on the current interleave target, the allocation falls back to other nodes.
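A minimal sketch of this usage (the runcpu invocation and the config file name example.cfg are assumptions, not the command actually used for this result):
    numactl --interleave=all runcpu --config=example.cfg fpspeed  # memory pages are spread round robin across all NUMA nodes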
KMP_STACKSIZE
Specify stack size to be allocated for each thread.
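For example (the 192M value is purely illustrative, not the value used for this result):
    export KMP_STACKSIZE=192M  # allocate a 192 MB stack for each OpenMP thread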
KMP_AFFINITY
Syntax: KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
The value for the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
It applies to binaries built with -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows).
modifier:
    granularity=fine Causes each OpenMP thread to be bound to a single thread context.
type:
    compact Specifying compact assigns the OpenMP thread <n>+1 to a free thread context as close as possible to the thread context where the <n>th OpenMP thread was placed.
    scatter Specifying scatter distributes the threads as evenly as possible across the entire system.
permute: The permute specifier is an integer value that controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, and it inverts the order of significance.
offset: The offset specifier indicates the starting position for thread assignment.

Please see the Thread Affinity Interface article in the Intel Composer XE Documentation for more details.

Example: KMP_AFFINITY=granularity=fine,scatter
Specifying granularity=fine selects the finest granularity level and causes each OpenMP or auto-par thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence, a combination of these two options spreads the threads evenly across sockets, with one thread per physical core.

Example: KMP_AFFINITY=compact,1,0
Specifying compact will assign the n+1 thread to a free thread context as close as possible to thread n.
A default granularity=core is implied if no granularity is explicitly specified.
Specifying 1,0 sets the permute and offset values of the thread assignment to 1 and 0, respectively.
With a permute value of 1, thread n+1 is assigned to a consecutive core. With an offset of 0, the process's first thread 0 will be assigned to thread 0.
The same behavior is exhibited in a multisocket system.
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
Set stack size to unlimited
The command "ulimit -s unlimited" is used to set the stack size limit to unlimited.
Free the file system page cache
The command "echo 3 > /proc/sys/vm/drop_caches" is used to free up the filesystem page cache as well as reclaimable slab objects such as dentries and inodes.

Red Hat Specific features

Transparent Huge Pages
On Red Hat EL 6 and later, Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high or memory is badly fragmented, preventing hugepages from being allocated, the kernel will assign smaller 4 KB pages instead.
Hugepages are used by default unless the /sys/kernel/mm/redhat_transparent_hugepage/enabled field is changed from its Red Hat EL 6 default of 'always'.
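The current policy can be checked by reading the sysfs field named above (on newer kernels without the Red Hat prefix the path is /sys/kernel/mm/transparent_hugepage/enabled); the active setting is shown in brackets, e.g. [always] madvise never:
    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled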

Operating System Tuning Parameters

tuned-adm
This command line utility allows you to switch between user-definable tuning profiles. Several predefined profiles are already included. You can also create your own profile, either by copying one of the existing profiles or by making a completely new one. The distribution-provided profiles are stored in subdirectories below /usr/lib/tuned and the user-defined profiles in subdirectories below /etc/tuned. If there are profiles with the same name in both places, user-defined profiles take precedence.

Profile used:
- throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
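For example, the profile named above can be activated and then verified (a sketch; tuned-adm requires root privileges):
    tuned-adm profile throughput-performance  # switch to the throughput-performance profile
    tuned-adm active                          # report which profile is currently active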

Firmware / BIOS / Microcode Settings


ADDDC Setting
Default: Enabled

Adaptive Double DRAM Device Correction
When enabled, failing DRAMs are dynamically mapped out. This action can have some impact on system performance under certain workloads. This feature applies only to x4 DIMMs and only when Fault Resilient Mode (FRM) is disabled.

DIMM Self Healing on Uncorrectable Memory Error
Default: Enabled

Post Package Repair (PPR) on Uncorrectable Memory Error
Disabling this feature may improve memory performance for some workloads.

Logical Processor
Default: Enabled

Each processor core supports up to two logical processors. When set to Enabled, the BIOS reports all logical processors. When set to Disabled, the BIOS reports only one logical processor per core. Generally, a higher logical processor count results in increased performance for most multi-threaded workloads, and the recommendation is to keep this feature enabled. However, there are some floating point/scientific workloads, including HPC workloads, where disabling this feature may result in higher performance.

Virtualization Technology
Default: Enabled

When set to Enabled, the BIOS will enable processor Virtualization features and provide the virtualization support to the Operating System (OS) through the DMAR table. In general, only virtualized environments such as VMware(r) ESX(tm), Microsoft Hyper-V(r), Red Hat(r) KVM, and other virtualized operating systems will take advantage of these features. Disabling this feature is not known to significantly alter the performance or power characteristics of the system, so leaving this option Enabled is advised for most cases.

Sub NUMA Cluster
Default: Disabled

When set to Enabled / 2-way Clustering / 4-way Clustering:
Sub NUMA Clustering (SNC) is a feature for breaking up the LLC into disjoint clusters based on address range, with each cluster bound to a subset of the memory controllers in the system. It improves average latency to the LLC.

UPI Prefetch
Default: Enabled

A mechanism to start the memory read early on the DDR bus: the UPI Rx path spawns a MemSpecRd to the iMC directly.

System Profile
Default: Performance Per Watt (DAPC)

When set to Custom, other settings can be changed for Memory Patrol Scrub, CPU Power Management, C1E, C States, and Energy Efficiency Policy.

CPU Power Management
Default: System DBPM (DAPC)

Allows selection of CPU power management methodology.

C1E
Default: Enabled

When set to Enabled, the processor is allowed to switch to its minimum performance state when idle.

C States
Default: Enabled

C States allow the processor to enter lower power states when idle. When set to Enabled (OS controlled) or when set to Autonomous (if Hardware controlled is supported), the processor can operate in all available Power States to save power, but this may increase memory latency and frequency jitter.

Memory Patrol Scrub
Default: Standard

Patrol Scrubbing searches the memory for errors and repairs correctable errors to prevent the accumulation of memory errors. When set to Disabled, no patrol scrubbing will occur. When set to Standard Mode, the entire memory array will be scrubbed once in a 24 hour period. When set to Extended Mode, the entire memory array will be scrubbed more frequently to further increase system reliability.

Energy Efficiency Policy
Default: Balanced Performance

The CPU uses this setting to manipulate the internal behavior of the processor, balancing performance against energy efficiency.

CPU Interconnect Bus Link Power Management
Default: Enabled

When Enabled, CPU interconnect bus link power management can reduce overall system power a bit while slightly reducing system performance.

PCI ASPM L1 Link Power Management
Default: Enabled

When Enabled, PCIe Active State Power Management (ASPM) can reduce overall system power a bit while slightly reducing system performance.

DCU Streamer Prefetcher
Default: Enabled

The Data Cache Unit (DCU) streamer prefetcher can affect performance depending on the application running on the server. Keeping it enabled is recommended for High Performance Computing applications.

LLC Prefetch
Default: Enabled

Enables or disables LLC (Last Level Cache) prefetch on all threads.

Dead Line LLC Alloc
Default: Enabled
Enabled - opportunistically fill dead lines in the LLC (Last Level Cache).
Disabled - never fill dead lines in LLC.

Optimizer Mode
Default: Auto

Tunes for CPU performance. This option appears when the CPUs are liquid cooled.

Fan Speed Offset
Default: Off

Configuring this option allows additional cooling to the server. If hardware is added (for example, new PCIe cards), it may require additional cooling. A fan speed offset causes fan speeds to increase (by the offset % value) over the baseline fan speeds calculated by the Thermal Control algorithm.

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2017/flags/Intel-ic2023-official-linux64.html,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-Intel-Xeon-v1.5.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2017/flags/Intel-ic2023-official-linux64.xml,
http://www.spec.org/cpu2017/flags/Dell-Platform-Flags-PowerEdge-Intel-Xeon-v1.5.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2023 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.9.
Report generated on 2023-07-19 16:25:26 by SPEC CPU2017 flags formatter v5178.