CPU2006 Flag Description
Fujitsu PRIMERGY RX2530 M4, Intel Xeon Gold 5120, 2.20GHz

Copyright © 2016 Intel Corporation. All Rights Reserved.


Base Compiler Invocation

C benchmarks

C++ benchmarks


Base Portability Flags

400.perlbench

401.bzip2

403.gcc

429.mcf

445.gobmk

456.hmmer

458.sjeng

462.libquantum

464.h264ref

471.omnetpp

473.astar

483.xalancbmk


Base Optimization Flags

C benchmarks

C++ benchmarks


Base Other Flags

C benchmarks

403.gcc


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


Commands and Options Used to Submit Benchmark Runs

submit= MYMASK=`printf '0x%x' $((1<<$SPECCOPYNUM))`; /usr/bin/taskset $MYMASK $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command, using taskset, is used for Linux64 systems without numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
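The $((1<<$SPECCOPYNUM)) arithmetic computes a bitmask with only the bit for this copy's number set, printf formats it as hexadecimal, and taskset then restricts the copy to that one CPU. As an illustration (the copy number is arbitrary), for benchmark copy 3 the command expands to:
    MYMASK=0x8
    /usr/bin/taskset 0x8 $command
which binds copy 3 to CPU 3.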
submit= numactl --localalloc --physcpubind=$SPECCOPYNUM $command
When running multiple copies of benchmarks, the SPEC config file feature submit is used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux64 systems with support for numactl.
Here is a brief guide to understanding the specific command which will be found in the config file:
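The --physcpubind=$SPECCOPYNUM option binds the copy to the CPU whose number equals the copy number, and --localalloc forces its memory to be allocated on the NUMA node that CPU belongs to. As an illustration (the copy number is arbitrary), benchmark copy 3 is launched as:
    numactl --localalloc --physcpubind=3 $command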

Shell, Environment, and Other Software Settings

numactl --interleave=all "runspec command"
Launching a process with numactl --interleave=all sets the memory interleave policy so that memory is allocated round-robin across nodes. When memory cannot be allocated on the current interleave target, allocation falls back to other nodes.
KMP_STACKSIZE
Specify stack size to be allocated for each thread.
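Example of setting it from the shell (the value shown is illustrative, not necessarily the one used for this result):
    export KMP_STACKSIZE=192M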
KMP_AFFINITY
Syntax: KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]
The value for the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
It applies to binaries built with -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows).
modifier:
    granularity=fine Causes each OpenMP thread to be bound to a single thread context.
type:
    compact Specifying compact assigns OpenMP thread <n>+1 to a free thread context as close as possible to the thread context where OpenMP thread <n> was placed.
    scatter Specifying scatter distributes the threads as evenly as possible across the entire system.
permute: The permute specifier is an integer value that controls which levels are most significant when sorting the machine topology map. A value for permute forces the mappings to make the specified number of most significant levels of the sort the least significant, and it inverts the order of significance.
offset: The offset specifier indicates the starting position for thread assignment.

Please see the Thread Affinity Interface article in the Intel Composer XE Documentation for more details.

Example: KMP_AFFINITY=granularity=fine,scatter
Specifying granularity=fine selects the finest granularity level and causes each OpenMP or auto-parallelized thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence a combination of these two options spreads the threads evenly across sockets, with one thread per physical core.

Example: KMP_AFFINITY=compact,1,0
Specifying compact will assign the n+1 thread to a free thread context as close as possible to thread n.
A default granularity=core is implied if no granularity is explicitly specified.
Specifying 1,0 sets permute and offset values of the thread assignment.
With a permute value of 1, thread n+1 is assigned to a consecutive core. With an offset of 0, the process's first thread (thread 0) is assigned to thread context 0.
The same behavior is exhibited in a multisocket system.
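As an illustration only, on a hypothetical single-socket machine with 4 cores and 2 hardware threads per core, 8 OpenMP threads placed with KMP_AFFINITY=compact,1,0 would be expected to map as follows:
    OpenMP threads 0-3 -> cores 0-3, first thread context of each core
    OpenMP threads 4-7 -> cores 0-3, second thread context of each core
that is, consecutive software threads are spread across cores before a core's second hardware thread context is used.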
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -qopenmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
Set stack size to unlimited
The command "ulimit -s unlimited" is used to set the stack size limit to unlimited.
Free the file system page cache
The command "echo 1> /proc/sys/vm/drop_caches" is used to free up the filesystem page cache.

Red Hat Specific features

Transparent Huge Pages
On Red Hat EL 6 and later, Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is so badly fragmented that hugepages cannot be allocated, the kernel falls back to smaller 4 KB pages.
Hugepages are used by default unless the /sys/kernel/mm/redhat_transparent_hugepage/enabled field is changed from its Red Hat EL 6 default of 'always'.
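The current policy can be checked, and the default restored, with commands such as the following (the sysfs path is the Red Hat EL 6 one quoted above):
    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo always > /sys/kernel/mm/redhat_transparent_hugepage/enabled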

Firmware / BIOS / Microcode Settings

Enable CPU HWPM (HWPM Support):
This BIOS switch allows 4 options: "Native Mode", "Disabled", "Out of Band Mode" and "Native Mode with No legacy Support". The default is "Native Mode".
With Hardware Power Management (HWPM), the processor provides a flexible interface between hardware and platform for performance management and improved energy efficiency.
In Native Mode the HWPM operates cooperatively with the OS via a software interface to provide constraints and hints.
When disabled, the system does not use HWPM.
Utilization Profile:
This BIOS switch allows 2 options: "Even" and "Unbalanced". The default is "Even", which is the best choice for workloads that utilize the whole system. In cases where the utilization is highly concentrated on a few resources of the system, the performance of the application can be improved by setting this option to "Unbalanced".
Setting this option to "Unbalanced" may improve performance but also increase the power consumption of the system. Users should only select this option after performing application benchmarking to verify improved performance in their environment.
Energy Performance:
This BIOS switch allows 4 options: "Balanced Performance", "Performance", "Balanced Energy" and "Energy Efficient". The default is "Balanced Performance", which is optimized for maximum power savings with minimal impact on performance. "Performance" disables all power management options that have any impact on performance. "Balanced Energy" is optimized for power efficiency and "Energy Efficient" for power savings. This BIOS switch is only selectable if the BIOS switch "Power Technology" is set to "Custom".
The two options "Balanced Performance" and "Balanced Energy" should always be the first choice, as both optimize the efficiency of the system. In cases where the performance is not sufficient or the power consumption is too high, the options "Performance" or "Energy Efficient" can be an alternative.
Uncore Frequency Override:
This BIOS switch allows 3 options: "Disabled", "Maximum" and "Nominal". The default is "Disabled", which is optimized for energy efficiency. "Maximum" sets the uncore frequency to the fixed maximum uncore frequency available. "Nominal" reduces the uncore frequency to the nominal value.
Setting this option to "Maximum" may improve performance but also increase the power consumption of the system. Users should only select this option after performing application benchmarking to verify improved performance in their environment.
CPU C1E Support
Enabling this option, which is the default, allows the processor to transition to its minimum frequency when entering the power state C1. If the switch is disabled, the CPU stays at its maximum frequency in C1. Because of the resulting increase in power consumption, users should disable this option only after performing application benchmarking to verify improved performance in their environment.
Link Frequency Select
This switch allows the configuration of the Intel Ultra Path Interconnect (UPI) link speed. It can be set to "9.6 GT/s", "10.4 GT/s" or "Auto". The default is "Auto", which configures the optimal link speed automatically.
Patrol Scrub
This BIOS option enables or disables memory scrubbing, which cyclically accesses the main memory of the system in the background, independently of the operating system, in order to detect and correct memory errors preventively. The timing of this memory test cannot be influenced and can, under certain circumstances, result in performance losses. Disabling the Patrol Scrub option increases the probability that memory errors are only discovered during active accesses by the operating system. As long as these errors are correctable, the ECC technology of the memory modules ensures that the system continues to run in a stable way. However, too many correctable memory errors increase the risk of encountering non-correctable errors, which then result in a system standstill.
Intel Hyper-Threading Technology
This BIOS option enables or disables the additional hardware thread that shares the same physical core. Generally "Enabled" is recommended, but disabling it can make sense for applications that require the shortest possible response times. Default setting is "Enabled".
Intel Virtualization Technology
This BIOS option enables or disables additional virtualization functions of the CPU. If the server is not used for virtualization, this option should be set to "Disabled". This can result in energy savings. Default setting is "Enabled".
VT-d
This BIOS option enables or disables I/O virtualization functions of the CPU. If the server is not used for virtualization, this option should be set to "Disabled". Default setting is "Enabled".
Sub NUMA Cluster
Sub NUMA Cluster (SNC) breaks up the last-level cache (LLC) into two disjoint clusters based on address range, with each cluster bound to one memory controller. SNC improves average latency to the LLC/memory and is a replacement for the "Cluster On Die" (COD) feature found in previous processor families. For a multi-socketed system, all SNC clusters are mapped to unique NUMA domains. IMC Interleaving must be set to the correct value to correspond with SNC enable/disable. If SNC and IMC Interleave are both set to Auto, the result will be SNC disabled (only one cluster per socket) with 2-way IMC interleave. If SNC is set to Enable, IMC Interleave should be set to 1-way, which will result in two clusters per socket. The BIOS switch "Sub NUMA Clustering" allows 3 options: "auto", "enabled" and "disabled". The default setting is "enabled" (PRIMERGY servers), "auto" (PRIMEQUEST servers).
IMC Interleaving
This BIOS option controls the interleaving between the Integrated Memory Controllers (IMCs). There are two IMCs per socket in Skylake Server. If IMC Interleaving is set to 2-way, addresses will be interleaved between the two IMCs. If IMC Interleaving is set to 1-way, there will be no interleaving. If SNC is disabled, IMC Interleaving should be set to 2-way. If SNC is enabled, IMC Interleaving should be set to 1-way. Default setting is "Auto".
LLC Dead Line Alloc
This BIOS switch allows 2 options: "Enabled" and "Disabled". The default is "Enabled". In the Skylake non-inclusive cache scheme, the mid-level cache (MLC) evictions are filled into the last-level cache (LLC). When lines are evicted from the MLC, the core can flag them as "dead" (i.e., not likely to be read again). The LLC has the option to drop dead lines and not fill them in the LLC. If the Dead Line LLC Alloc feature is disabled, dead lines will always be dropped and will never fill into the LLC. This can help save space in the LLC and prevent the LLC from evicting useful data. However, if the Dead Line LLC Alloc feature is enabled, the LLC can opportunistically fill dead lines into the LLC if there is free space available.
Stale AtoS (Directory AtoS)
This BIOS switch allows 2 options: "Enabled" and "Disabled". The default is "Disabled".
The in-memory directory has three states: I, A, and S. I (invalid) state means the data is clean and does not exist in any other socket's cache. A (snoopAll) state means the data may exist in another socket in exclusive or modified state. S (Shared) state means the data is clean and may be shared across one or more socket's caches.
When doing a read to memory, if the directory line is in the A state we must snoop all the other sockets because another socket may have the line in modified state. If this is the case, the snoop will return the modified data. However, it may be the case that a line is read in A state and all the snoops come back as misses. This can happen if another socket read the line earlier and then silently dropped it from its cache without modifying it.
If the Stale AtoS feature is enabled, in the situation where a line in A state returns only snoop misses, the line will transition to S state. That way, subsequent reads to the line will encounter it in S state and not have to snoop, saving latency and snoop bandwidth. Stale AtoS may be beneficial in a workload where there are many cross-socket reads.
nohz_full
This kernel option sets adaptive tick mode (NOHZ_FULL) for the specified processors. Since the number of timer interrupts is reduced to one per second, latency-sensitive applications can take advantage of it.
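Example (the CPU list is illustrative): adding the following to the kernel boot command line enables adaptive tick mode on CPUs 1-27:
    nohz_full=1-27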
isolcpus
This kernel option excludes the specified processors from load balancing by the kernel scheduler. This prevents the scheduler from scheduling any user-space threads on these processors.
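Example (the CPU list is illustrative): adding the following to the kernel boot command line isolates CPUs 1-27 from scheduler load balancing:
    isolcpus=1-27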
sched_min_granularity_ns
This OS setting controls the minimal preemption granularity for CPU-bound tasks. As the number of runnable tasks increases, CFS (the Completely Fair Scheduler, the scheduler of the Linux kernel) decreases the timeslices of tasks. If the number of runnable tasks exceeds sched_latency_ns/sched_min_granularity_ns, the timeslice becomes number_of_running_tasks * sched_min_granularity_ns. The default value is 4000000 (ns).
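Example of changing the value at run time (the value shown is illustrative):
    sysctl -w kernel.sched_min_granularity_ns=10000000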
sched_wakeup_granularity_ns
This OS setting controls the wake-up preemption granularity. Increasing this variable reduces wake-up preemption, reducing disturbance of compute-bound tasks. Lowering it improves wake-up latency and throughput for latency-critical tasks, particularly when a short duty-cycle load component must compete with CPU-bound components. The default value is 2500000 (ns).
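Example of changing the value at run time (the value shown is illustrative):
    sysctl -w kernel.sched_wakeup_granularity_ns=15000000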
numa_balancing
This OS setting controls automatic NUMA balancing on memory mapping and process placement. Setting 0 disables this feature. It is enabled by default (1).
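Example of disabling automatic NUMA balancing at run time:
    sysctl -w kernel.numa_balancing=0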
cpupower frequency-set
The cpupower utility is a collection of tools for managing the power-related settings of the processor. The frequency-set sub-command controls the processor frequency settings. "-g [governor]" specifies the policy used to select the processor frequency. The performance governor statically sets the frequency of the processor cores specified with the "-c" option to the highest available, for maximum performance.
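Example (applies the performance governor to all cores):
    cpupower -c all frequency-set -g performance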
cpupower idle-set
The idle-set sub-command of the cpupower utility controls the processor idle states (C-states) of the kernel. The "-d [state_no]" option disables a specific processor idle state. Disabling idle states can reduce the idle-wakeup delay, but it results in substantially higher power consumption. By default, the processor idle states of all CPU cores are affected.
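Example (the state number is illustrative; the numbering of idle states is system-dependent):
    cpupower idle-set -d 3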

Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags files that were used to format this result can be browsed at
http://www.spec.org/cpu2006/flags/Intel-ic17.0-official-linux64-revF.html,
http://www.spec.org/cpu2006/flags/Fujitsu-Platform-Settings-V1.2-SKL-RevA.html.

You can also download the XML flags sources by saving the following links:
http://www.spec.org/cpu2006/flags/Intel-ic17.0-official-linux64-revF.xml,
http://www.spec.org/cpu2006/flags/Fujitsu-Platform-Settings-V1.2-SKL-RevA.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2017 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.2.
Report generated on Wed Sep 20 13:42:57 2017 by SPEC CPU2006 flags formatter v6906.