CPU2006 Flag Description
Hewlett Packard Enterprise ProLiant DL560 Gen10 (2.60 GHz, Intel Xeon Gold 6132)

Test sponsored by HPE

Copyright © 2016 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Base Portability Flags

410.bwaves

416.gamess

433.milc

434.zeusmp

435.gromacs

436.cactusADM

437.leslie3d

444.namd

447.dealII

450.soplex

453.povray

454.calculix

459.GemsFDTD

465.tonto

470.lbm

481.wrf

482.sphinx3


Base Optimization Flags

C benchmarks

C++ benchmarks

Fortran benchmarks

Benchmarks using both Fortran and C


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

submit= MYMASK=`printf '0x%x' $((1<<$SPECCOPYNUM))`; /usr/bin/taskset $MYMASK $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command, using taskset, is used for Linux64 systems without numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
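As an illustration only, the pieces of this command can be unpacked as follows; the copy number and benchmark binary below are placeholders, not values from the actual run:
    # $SPECCOPYNUM is the copy number (0, 1, 2, ...) that the SPEC tools assign to each benchmark copy.
    # printf '0x%x' $((1<<$SPECCOPYNUM)) builds a single-bit CPU affinity mask for that copy,
    # e.g. copy 0 -> 0x1, copy 3 -> 0x8.
    # taskset then launches the benchmark ($command) bound to the CPU selected by that mask.
    MYMASK=`printf '0x%x' $((1<<3))`              # hypothetical copy number 3 -> mask 0x8
    /usr/bin/taskset $MYMASK ./benchmark_binary   # ./benchmark_binary stands in for $command
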
submit= numactl --localalloc --physcpubind=$SPECCOPYNUM $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux64 systems with support for numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
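As an illustration only, for one copy the SPEC tools would effectively run a command of the following form; the copy number and benchmark binary below are placeholders:
    # --localalloc       allocate memory on the NUMA node of the CPU the copy runs on
    # --physcpubind=N    bind the copy to physical CPU N, where N is the copy's $SPECCOPYNUM
    numactl --localalloc --physcpubind=5 ./benchmark_binary   # hypothetical copy number 5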

Shell, Environment, and Other Software Settings

numactl --interleave=all "runspec command"
Launching a process with numactl --interleave=all sets the memory interleave policy so that memory will be allocated using round robin on nodes. When memory cannot be allocated on the current interleave target, allocation falls back to other nodes.
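As an illustration only (the config file name and runspec arguments below are placeholders), the run could be launched under this policy as:
    numactl --interleave=all runspec --config=myconfig.cfg --noreportable int
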
KMP_STACKSIZE
Specify stack size to be allocated for each thread.
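For example, to request a larger per-thread stack (the value shown is illustrative, not necessarily the value used for this run):
    export KMP_STACKSIZE=192M    # 192 megabytes of stack per OpenMP thread
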
KMP_AFFINITY
Syntax: KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
The value for the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
It applies to binaries built with -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows).
modifier:
    granularity=fine: Causes each OpenMP thread to be bound to a single thread context.
type:
    compact: Specifying compact assigns the OpenMP thread <n>+1 to a free thread context as close as possible to the thread context where the <n> OpenMP thread was placed.
    scatter: Specifying scatter distributes the threads as evenly as possible across the entire system.
permute: The permute specifier is an integer value that controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, and it inverts the order of significance.
offset: The offset specifier indicates the starting position for thread assignment.

Please see the Thread Affinity Interface article in the Intel Composer XE Documentation for more details.

Example: KMP_AFFINITY=granularity=fine,scatter
Specifying granularity=fine selects the finest granularity level and causes each OpenMP or auto-par thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence a combination of these two options will spread the threads evenly across sockets, with one thread per physical core.

Example: KMP_AFFINITY=compact,1,0
Specifying compact will assign the n+1 thread to a free thread context as close as possible to thread n.
A default granularity=core is implied if no granularity is explicitly specified.
Specifying 1,0 sets the permute and offset values of the thread assignment.
With a permute value of 1, thread n+1 is assigned to a consecutive core. With an offset of 0, the process's first thread (thread 0) will be assigned to thread 0.
The same behavior is exhibited in a multisocket system.
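These affinity settings are ordinary environment variables; for instance, the second example above would be set in the shell before launching a run as:
    export KMP_AFFINITY=compact,1,0
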
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
Set stack size to unlimited
The command "ulimit -s unlimited" is used to set the stack size limit to unlimited.
Free the file system page cache
The command "echo 1> /proc/sys/vm/drop_caches" is used to free up the filesystem page cache.

Red Hat Specific features

Transparent Huge Pages
On RedHat EL 6 and later, Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or if memory is so badly fragmented that hugepages cannot be allocated, the kernel will assign smaller 4k pages instead.
Hugepages are used by default unless the /sys/kernel/mm/redhat_transparent_hugepage/enabled field is changed from its RedHat EL6 default of 'always'.
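As an illustration only, the current policy can be inspected, and the default restored if it had been changed, with commands such as:
    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled            # the bracketed value is the active policy
    echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled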

Operating System Tuning Parameters

OS Tuning

ulimit:

Used to set user limits of system-wide resources. Provides control over resources available to the shell and processes started by it. Some common ulimit commands may include:
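The limits actually applied for this run, if any, are recorded in the notes sections of the report; generic illustrations of common ulimit commands include:
    ulimit -s unlimited    # remove the per-process stack size limit
    ulimit -l unlimited    # remove the limit on locked-in-memory address space
    ulimit -n 65536        # raise the open file descriptor limit (value illustrative)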

Disabling Linux services:

Certain Linux services may be disabled to minimize tasks that may consume CPU cycles.

irqbalance:

Disabled through "service irqbalance stop". Depending on the workload involved, the irqbalance service reassigns various IRQs to system CPUs. Though this service might help in some situations, disabling it can also help environments which need to minimize or eliminate latency to more quickly respond to events.

Performance Governors (Linux):

In-kernel CPU frequency governors are pre-configured power schemes for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU utilization to allow for power savings while not sacrificing performance.
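On systems that expose the cpufreq subsystem, a governor can typically be selected and verified with the cpupower utility, for example:
    cpupower frequency-set -g performance    # select the static performance governor on all CPUs
    cpupower frequency-info                  # confirm the active governor and frequency limits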

Other options beside a generic performance governor can be set, such as the perf-bias:

--perf-bias, -b

On supported Intel processors, this option sets a register which allows the cpupower utility (or other software/firmware) to set a policy that controls the relative importance of performance versus energy savings to the processor. The range of valid numbers is 0-15, where 0 is maximum performance and 15 is maximum energy efficiency.

The processor uses this information in model-specific ways when it must select trade-offs between performance and energy efficiency. This policy hint does not supersede Processor Performance states (P-states) or CPU Idle power states (C-states), but allows software to have influence where it would otherwise be unable to express a preference.

On many Linux systems one can set the perf-bias for all CPUs through the cpupower utility with one of the following commands:
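The exact command used for this run, if any, is recorded in the report notes; typical forms are:
    cpupower set -b 0           # bias toward maximum performance
    cpupower -c all set -b 0    # equivalent form naming all CPUs explicitly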

Tuning Kernel parameters:

The following Linux Kernel parameters were tuned to better optimize performance of some areas of the system:
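The parameters and values actually used are listed in the notes sections of the report; as a generic illustration of how such parameters are set (the parameter name and value below are examples only, not necessarily those used for this run):
    sysctl -w vm.swappiness=10    # set a kernel parameter at runtime
    sysctl -p                     # reload persistent settings from /etc/sysctl.conf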

tuned-adm:

The tuned-adm tool is a command-line interface for switching between the different tuning profiles provided by the tuned tuning daemon available in supported Linux distros. The default configuration file is located in /etc/tuned.conf and the supported profiles can be found in /etc/tune-profiles.

Some profiles that may be available by default include: default, desktop-powersave, server-powersave, laptop-ac-powersave, laptop-battery-powersave, spindown-disk, throughput-performance, latency-performance, enterprise-storage

To set a profile, one can issue the command "tuned-adm profile (profile_name)". Here are details about relevant profiles.
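For example (the profile name is illustrative, chosen from the list above):
    tuned-adm profile throughput-performance    # switch to the throughput-oriented profile
    tuned-adm active                            # confirm which profile is active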

Transparent Huge Pages (THP):

THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity of using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or if memory is so badly fragmented that hugepages cannot be allocated, the kernel will assign smaller 4k pages instead. Most recent Linux OS releases have THP enabled by default.

Linux Huge Page settings:

If you need finer control and want to set the huge pages manually, you can follow the steps below:
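The following is a generic sketch of manual huge page setup, not the exact steps used for this result; the page count and mount point are placeholders:
    echo 2048 > /proc/sys/vm/nr_hugepages      # reserve 2048 huge pages (2 MB each on x86_64)
    sysctl -w vm.nr_hugepages=2048             # equivalent sysctl form
    mkdir -p /mnt/hugepages                    # create a mount point for hugetlbfs
    mount -t hugetlbfs nodev /mnt/hugepages    # allow applications to map huge pages explicitly
    grep Huge /proc/meminfo                    # verify the reservation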

Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt


Firmware / BIOS / Microcode Settings

Firmware Settings

One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so; and you can read below to find out more about what these settings mean.

Intel Hyper-Threading (Default = Enabled):

This feature allows enabling or disabling of logical processor cores on processors supporting Intel Hyper-Threading (HT). When enabled, each physical processor core operates as two logical processor cores. When disabled, each physical core operates as only one logical processor core. Enabling this option can improve overall performance for applications that benefit from a higher processor core count.

Thermal Configuration (Default = Optimal Cooling):

This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:

Last Level Cache (LLC) Dead Line Allocation (Default = Enabled):

In the Skylake cache scheme, mid-level cache (MLC) evictions are filled into the last level cache (LLC). If a line is evicted from the MLC to the LLC, the Skylake core can flag the evicted MLC lines as "dead", meaning that the lines are not likely to be read again. When this option is disabled, dead lines are dropped and never fill the LLC. Values for this BIOS option can be:

Stale A to S (Default = Disabled):

The in-memory directory has three states: invalid (I), snoopAll (A), and shared (S). Invalid (I) state means the data is clean and does not exist in any other socket's cache. The snoopAll (A) state means the data may exist in another socket in exclusive or modified state. Shared (S) state means the data is clean and may be shared across one or more sockets' caches. When doing a read to memory, if the directory line is in the A state, we must snoop all the other sockets because another socket may have the line in modified state. If this is the case, the snoop will return the modified data. However, it may be the case that a line is read in the A state and all the snoops come back as misses. This can happen if another socket read the line earlier and then silently dropped it from its cache without modifying it. Values for this BIOS option can be:

Stale A to S may be beneficial in a workload where there are many cross-socket reads.

Last Level Cache (LLC) Prefetch (Default = Disabled):

This option configures the processor's Last Level Cache (LLC) prefetch feature, which exists as a result of the non-inclusive cache architecture. The LLC prefetcher exists on top of other prefetchers that can prefetch data into the core data cache unit (DCU) and mid-level cache (MLC). In some cases, setting this option to disabled can improve performance. Typically, setting this option to enabled provides better performance. Values for this BIOS option can be:

NUMA Group Size Optimization (Default = Clustered):

This feature allows the user to configure how the BIOS reports the size of a NUMA node (number of logical processors), which assists the Operating System in grouping processors for application use (referred to as Kgroups). Values for this BIOS option can be:

Sub-NUMA Clustering (SNC) (Default = Enabled):

SNC breaks up the last level cache (LLC) into disjoint clusters based on address range, with each cluster bound to a subset of the memory controllers in the system. SNC improves average latency to the LLC and memory. SNC is a replacement for the cluster on die (COD) feature found in previous processor families. For a multi-socketed system, all SNC clusters are mapped to unique NUMA domains. (See also IMC interleaving.) Values for this BIOS option can be:

Xtended Prediction Table (XPT) Prefetch (Default = Enabled):

This option configures the processor Xtended Prediction Table (XPT) prefetch feature. The XPT prefetcher exists on top of other prefetchers that can prefetch data into the core DCU, MLC, and LLC. The XPT prefetcher will issue a speculative DRAM read request in parallel to an LLC lookup. This prefetch bypasses the LLC, saving latency. In some cases, setting this option to disabled can improve performance. Typically, setting this option to enabled provides better performance. This option must be enabled when Sub-NUMA Clustering is enabled. Values for this BIOS option can be:

Uncore Frequency Scaling (Default = Auto):

This option controls the frequency scaling of the processor's internal buses (the uncore). Values for this BIOS option can be:

Workload Profile (Default = General Power Efficient Compute):

This option allows a user to choose one workload profile that best fits the user's needs. The workload profiles control many power and performance settings that are relevant to general workload areas. Values for this BIOS option can be:

Minimum Processor Idle Power Core C-State (Default = C6 State):

This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle power state (C-state) that the operating system uses. The higher the C-state, the lower the power usage of that idle state (C6 is the lowest power idle state supported by the processor). Values for this setting can be:

Minimum Processor Idle Power Package C-State (Default = Package C6 (retention) State):

This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle package power state (C-state) that is enabled. The processor will automatically transition into the package C-states based on the Core C-states into which the cores on the processor have transitioned. The higher the package C-state, the lower the power usage of that idle package state (Package C6 (retention) is the lowest power idle package state supported by the processor). Values for this setting can be:

Energy/Performance Bias (Default = Balanced Performance):

This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This option configures several processor subsystems to optimize the processor's performance and power usage. Values for this BIOS setting can be:

AHS PCI Logging Level (Default = Verbose Logging):

This option allows the AHS PCI Logging size to be changed. This is a boot time option that should have no effect on run time performance. Values for this BIOS setting can be:

Memory Patrol Scrubbing (Default = Enabled):

This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:

Last updated January 9, 2018.


Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2006/flags/Intel-ic17.0-official-linux64-revF.html,
http://www.spec.org/cpu2006/flags/HPE-Platform-Flags-Intel-V1.2-SKX-revH.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2006/flags/Intel-ic17.0-official-linux64-revF.xml,
http://www.spec.org/cpu2006/flags/HPE-Platform-Flags-Intel-V1.2-SKX-revH.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2018 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.2.
Report generated on Tue Jan 16 12:09:22 2018 by SPEC CPU2006 flags formatter v6906.