Copyright © 2016 Intel Corporation. All Rights Reserved.
Invoke the Intel oneAPI DPC++/C++ compiler for C.
Invoke the Intel oneAPI DPC++/C++ compiler for C++.
Invoke the Intel oneAPI Fortran compiler.
Invoke the Intel oneAPI DPC++/C++ compiler for C.
Invoke the Intel oneAPI DPC++/C++ compiler for C++.
Invoke the Intel oneAPI Fortran compiler.
This macro specifies that the target system uses the LP64 data model; specifically, that integers are 32 bits, while longs and pointers are 64 bits.
This macro indicates that the benchmark is being compiled on an AMD64-compatible system running the Linux operating system.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This flag can be set for SPEC compilation on Linux using the default compiler.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
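As an illustration of how such portability macros reach the compiler, they are normally passed as preprocessor definitions on the compile line. A minimal sketch, assuming the commonly used SPEC_LP64 spelling and a hypothetical source file (the exact macro names applied to each benchmark come from the SPEC config file and are not reproduced here):

    icx -DSPEC_LP64 -c foo.c -o foo.o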
This macro specifies that the target system uses the LP64 data model; specifically, that integers are 32 bits, while longs and pointers are 64 bits.
This macro indicates that the benchmark is being compiled on an AMD64-compatible system running the Linux operating system.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This flag can be set for SPEC compilation on Linux using the default compiler.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Compiles for a 64-bit (LP64) data model.
Sets the language dialect to conform to the indicated C standard.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
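To make the combination concrete, a hypothetical compile line pulling together the options described above might look as follows. This is a sketch only: the file name is a placeholder, -std=c11 stands in for whichever C standard was actually selected, and the option spellings are the usual ones matching these descriptions rather than the exact command used for this result:

    icx -m64 -std=c11 -xCORE-AVX512 -O3 -ffast-math -flto -funroll-loops -mfpmath=sse -qopt-mem-layout-trans=4 -DSPEC_OPENMP foo.c -o foo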
Compiles for a 64-bit (LP64) data model.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
Compiles for a 64-bit (LP64) data model.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Option standard-realloc-lhs (the default) tells the compiler that when the left-hand side of an assignment is an allocatable object, it should be reallocated to the shape of the right-hand side of the assignment before the assignment occurs. This is the current Fortran Standard definition. This feature may cause extra overhead at run time. This option has the same effect as option assume realloc_lhs.
If you specify nostandard-realloc-lhs, the compiler uses the old Fortran 2003 rules when interpreting assignment statements. The left-hand side is assumed to be allocated with the correct shape to hold the right-hand side. If it is not, incorrect behavior will occur. This option has the same effect as option assume norealloc_lhs.
The align toggle changes how data elements are aligned. Variables and arrays are analyzed and the memory layout can be altered. Specifying array32byte makes the compiler look for opportunities to transform and realign arrays to 32-byte boundaries.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
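A corresponding hypothetical Fortran compile line for the options described above (again a sketch: the file name is a placeholder and the option spellings are the usual ifx ones matching these descriptions, not the exact command used for this result):

    ifx -m64 -xCORE-AVX512 -O3 -ffast-math -flto -funroll-loops -align array32byte -nostandard-realloc-lhs foo.f90 -o foo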
Compiles for a 64-bit (LP64) data model.
Sets the language dialect to conform to the indicated C standard.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Instrument the program for profiling in the first phase of two-phase profile-guided optimization. This instrumentation gathers information about a program's execution paths and data values but does not gather information from hardware performance counters. The profile instrumentation also gathers data for optimizations which are unique to profile-feedback optimization.
Instructs the compiler to produce a profile-optimized executable and merges available dynamic information.
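The two descriptions above correspond to a two-pass profile-guided build: first build with instrumentation, run a training workload to collect profile data, then rebuild using that profile. A minimal sketch, assuming the classic -prof-gen/-prof-use spellings of the Intel compilers (the option names, file names, and training run are assumptions here; the SPEC tools drive the actual passes):

    icx -prof-gen -O3 foo.c -o foo_inst    # pass 1: instrumented build
    ./foo_inst training_input              # training run produces dynamic profile data
    icx -prof-use -O3 foo.c -o foo         # pass 2: profile-optimized build using the merged profile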
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Tells the compiler to remove the assumption that source code follows C99 signed-overflow rules.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
Compiles for a 64-bit (LP64) data model.
Sets the language dialect to conform to the indicated C standard.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Instrument the program for profiling in the first phase of two-phase profile-guided optimization. This instrumentation gathers information about a program's execution paths and data values but does not gather information from hardware performance counters. The profile instrumentation also gathers data for optimizations which are unique to profile-feedback optimization.
Instructs the compiler to produce a profile-optimized executable and merges available dynamic information.
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
Compiles for a 64-bit (LP64) data model.
Sets the language dialect to conform to the indicated C standard.
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present.
Code is optimized for Intel(R) processors with support for CORE-AVX512 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option on a non-compatible processor to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Enable O2 optimizations plus more aggressive optimizations, such as prefetching, scalar replacement, and loop and memory access transformations. Also enable optimizations for maximum speed.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Enable fast math mode. This option may yield faster code for programs that do not require the guarantees of exact implementation of IEEE or ISO rules/specifications for math functions.
Performs link-time optimization, which is also known as interprocedural optimization (IPO).
Generate floating-point arithmetic for the selected unit; here, scalar floating-point instructions present in the SSE instruction set are used.
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
Controls the level of memory layout transformations performed by the compiler. This option can improve cache reuse and cache locality.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
This option tells the compiler to assume no aliasing in the program.
Specifies the build-time link path for the 64-bit jemalloc build used to support the CPU 2017 build. See jemalloc.net for more information.
Linker toggle that selects the jemalloc library. See jemalloc.net for more information.
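The jemalloc entries and the "ignore multiple definitions" entry above are link-stage options. A hedged sketch of a link line using them (the jemalloc installation path is hypothetical, and -Wl,-z,muldefs is the usual way to ask the linker to tolerate multiple definitions, named here as an assumption):

    icx foo.o -Wl,-z,muldefs -L/opt/jemalloc-64/lib -ljemalloc -o foo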
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Enable optimizations for speed. This is the generally recommended optimization level. This option also enables:
- Inlining of intrinsics
- Intra-file interprocedural optimizations, which include:
  - inlining
  - constant propagation
  - forward substitution
  - routine attribute propagation
  - variable address-taken analysis
  - dead static function elimination
  - removal of unreferenced variables
- The following capabilities for performance gain:
  - constant propagation
  - copy propagation
  - dead-code elimination
  - global register allocation
  - global instruction scheduling and control speculation
  - loop unrolling
  - optimized code selection
  - partial redundancy elimination
  - strength reduction/induction variable simplification
  - variable renaming
  - exception handling optimizations
  - tail recursions
  - peephole optimizations
  - structure assignment lowering and optimizations
  - dead store elimination
Enable optimizations for speed and disable some optimizations that increase code size and affect speed.
To limit code size, this option:
The O1 option may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.
-O1 sets the following options:
Tells the compiler the maximum number of times to unroll loops. For example -funroll-loops0 would disable unrolling of loops.
-fno-builtin disables inline expansion for all intrinsic functions.
This option trades off floating-point precision for speed by removing the restriction to conform to the IEEE standard.
EBP is used as a general-purpose register in optimizations.
Places each function in its own COMDAT section.
Flushes denormal results to zero.
OS Tuning
ulimit:
Used to set user limits of system-wide resources. Provides control over resources available to the shell and processes started by it. Some common ulimit commands may include:
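Typical examples of such commands, shown for illustration only and not necessarily the exact values used for this result:

    ulimit -s unlimited   # remove the stack size limit for the shell and its child processes
    ulimit -l unlimited   # remove the locked-memory limit (where permitted)
    ulimit -n 65536       # raise the limit on open file descriptors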
Disabling Linux services:
Certain Linux services may be disabled to minimize tasks that may consume CPU cycles.
irqbalance:
Disabled through "service irqbalance stop". Depending on the workload involved, the irqbalance service reassigns various IRQs to system CPUs. Though this service might help in some situations, disabling it can also help environments that need to minimize or eliminate latency to more quickly respond to events.
Performance Governors (Linux):
In-kernel CPU frequency governors are pre-configured power schemes for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU utilization to allow for power savings while not sacrificing performance.
Other options besides a generic performance governor can be set, such as the perf-bias:
--perf-bias, -b
On supported Intel processors, this option sets a register which allows the cpupower utility (or other software/firmware) to set a policy that controls the relative importance of performance versus energy savings to the processor. The range of valid numbers is 0-15, where 0 is maximum performance and 15 is maximum energy efficiency.
The processor uses this information in model-specific ways when it must select trade-offs between performance and energy efficiency. This policy hint does not supersede Processor Performance states (P-states) or CPU Idle power states (C-states), but allows software to have influence where it would otherwise be unable to express a preference.
On many Linux systems one can set the perf-bias for all CPUs through the cpupower utility with one of the following commands:
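Illustrative commands (the values shown are examples only, not necessarily those used for this result):

    cpupower frequency-set -g performance   # select the performance governor on all CPUs
    cpupower set -b 0                       # set the energy/performance bias to maximum performance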
Tuning Kernel parameters:
The following Linux Kernel parameters were tuned to better optimize performance of some areas of the system:
tuned-adm:
tuned-adm is a tool that allows selecting and applying different tuning profiles (such as throughput-performance, latency-performance, server-powersave, etc.) supported in many Linux distributions.
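For example, the throughput-performance profile mentioned above can be applied and verified as follows (illustrative; the profile actually in effect for a result is listed in its platform notes):

    tuned-adm profile throughput-performance   # apply the profile
    tuned-adm active                           # show the currently active profile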
Transparent Huge Pages (THP):
Transparent Hugepages can be used to increase the memory page size from 4 kilobytes to 2 megabytes. THP can provide significant performance advantages on systems with highly contended resources and large-memory workloads. If memory utilization is too high, or memory is so badly fragmented that huge pages cannot be allocated, the kernel falls back to smaller 4 KB pages instead.
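On most Linux systems the THP policy can be inspected and changed through sysfs, for example (illustrative):

    cat /sys/kernel/mm/transparent_hugepage/enabled            # show the current policy, e.g. [always] madvise never
    echo always > /sys/kernel/mm/transparent_hugepage/enabled  # enable THP system-wide (run as root)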
Firmware Settings
One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so; and you can read below to find out more about what these settings mean.
Intel Hyper-Threading (Default = Enabled):
This feature allows enabling or disabling of logical processor cores on processors supporting Intel Hyper-Threading (HT). When enabled, each physical processor core operates as two logical processor cores. When disabled, each physical core operates as only one logical processor core. Enabling this option can improve overall performance for applications that benefit from a higher processor core count.
Intel Virtualization Technology (Intel VT, VT-x) (Default = Enabled):
When enabled, a hypervisor or operating system supporting this option can use hardware capabilities provided by Intel VT. Some hypervisors require that you enable Intel VT. You can leave this set to enabled even if you are not using a hypervisor or an operating system that uses this option. With default BIOS settings as shipped with most systems, the default state for this setting is Enabled. However, its default can change depending on the Workload Profile that is selected, or on which Workload Profile is the default for a given system.
VT-d (Intel VT-d) (Default = Enabled):
If enabled, a hypervisor or operating system supporting this option can use hardware capabilities provided by Intel VT for Directed I/O. You can leave this set to enabled even if you are not using a hypervisor or an operating system that uses this option. With default BIOS settings as shipped with most systems, the default state for this setting is Enabled. However, its default can change depending on the Workload Profile that is selected, or on which Workload Profile is the default for a given system.
Processor x2APIC Support (Default = Enabled):
If enabled, x2APIC support enables the operating system to run more efficiently on high core count configurations. It also optimizes interrupt distribution in virtualized environments. Setting this option to Enabled is recommended for most cases. When enabled, the operating system can optionally enable x2APIC support when it loads. Older hypervisors and operating systems might have issues with optional x2APIC support, so disabling x2APIC could be necessary to address those issues. Setting this option to Enabled also forces Intel VT-d to be enabled.
SR-IOV (Default = Enabled):
If enabled, SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. If enabled, the BIOS allocates additional resources to PCI-express devices. You can leave this option set to Enabled even if you are not using a hypervisor. With default BIOS settings as shipped with most systems, the default state for this setting is Enabled. However, its default can change depending on the Workload Profile that is selected, or on which Workload Profile is the default for a given system.
Thermal Configuration (Default = Optimal Cooling):
This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:
Last Level Cache (LLC) Dead Line Allocation (Default = Enabled):
In the Xeon Scalable processor cache scheme, mid-level cache (MLC) evictions are filled into the last level cache (LLC). If a line is evicted from the MLC to the LLC, the core can flag the evicted MLC lines as "dead". This means that the lines are not likely to be read again. This option allows dead lines to be dropped and never fill the LLC if the option is disabled. Values for this BIOS option can be:
Enhanced Processor Performance Profile (Default = Disabled):
Use this option to select a pre-defined enhanced processor performance profile. Based upon the selection, this feature will adjust the processor settings for improved performance, but may result in higher power consumption. Values for this BIOS option can be:
Stale A to S (Default = Disabled):
The in-memory directory has three states: invalid (I), snoopAll (A), and shared (S). Invalid (I) state means the data is clean and does not exist in any other socket's cache. The snoopAll (A) state means the data may exist in another socket in exclusive or modified state. Shared (S) state means the data is clean and may be shared across one or more sockets' caches. When doing a read to memory, if the directory line is in the A state we must snoop all the other sockets because another socket may have the line in modified state. If this is the case, the snoop will return the modified data. However, it may be the case that a line is read in A state and all the snoops come back as misses. This can happen if another socket read the line earlier and then silently dropped it from its cache without modifying it. Values for this BIOS option can be:
Stale A to S may be beneficial in a workload where there are many cross-socket reads.
Last Level Cache (LLC) Prefetch (Default = Disabled):
This option configures the processor Last Level Cache (LLC) prefetch feature as a result of the non-inclusive cache architecture. The LLC prefetcher exists on top of other prefetchers that can prefetch data into the core data cache unit (DCU) and mid-level cache (MLC). In some cases, setting this option to disabled can improve performance. Typically, setting this option to Enabled provides better performance. Values for this BIOS option can be:
NUMA Group Size Optimization (Default = Clustered):
This feature allows the user to configure how the BIOS reports the size of a NUMA node (number of logical processors), which assists the Operating System in grouping processors for application use (referred to as Kgroups). Values for this BIOS option can be:
Sub-NUMA Clustering (SNC) (Default = Enabled):
SNC breaks up the last level cache (LLC) into disjoint clusters based on address range, with each cluster bound to a subset of the memory controllers in the system. SNC improves average latency to the LLC and memory. SNC is a replacement for the cluster on die (COD) feature found in previous processor families. For a multi-socketed system, all SNC clusters are mapped to unique NUMA domains. Values for this BIOS option can be:
DCU Stream Prefetcher (Default = Enabled):
In most environments, leave the option enabled for optimal performance. With certain workloads, disabling it might provide a performance benefit. Do so only after performing application benchmarking to verify improved performance in a particular environment. Values for this BIOS option can be:
Uncore Frequency Scaling (Default = Auto):
This option controls the frequency scaling of the processor's internal buses (the uncore). Values for this BIOS option can be:
Workload Profile (Default = General Power Efficient Compute):
This option allows a user to choose one workload profile that best fits the user's needs. The workload profiles control many power and performance settings that are relevant to general workload areas. Values for this BIOS option can be:
Minimum Processor Idle Power Core C-State (Default = C6 State):
This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle power state (C-state) that the operating system uses. The higher the C-state, the lower the power usage of that idle state (C6 is the lowest power idle state supported by the processor). Values for this setting can be:
Minimum Processor Idle Power Package C-State (Default = Package C6 (retention) State):
This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle package power state (C-state) that is enabled. The processor will automatically transition into the package C-states based on the Core C-states into which the cores on the processor have transitioned. The higher the package C-state, the lower the power usage of that idle package state. Package C6 (retention) is the lowest power idle package state supported by the processor. Values for this setting can be:
Energy/Performance Bias (Default = Balanced Performance):
This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This option configures several processor subsystems to optimize the processor's performance and power usage. Values for this BIOS setting can be:
Energy Efficient Turbo (Default = Enabled):
This option controls whether the processor uses an energy efficiency based policy when engaging turbo range frequencies. This option is only applicable when Turbo Mode is enabled. Values for this BIOS setting can be: Enabled or Disabled.
Memory Patrol Scrubbing (Default = Enabled):
This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:
HW Prefetcher (Default = Enabled):
Use this option to disable the processor HW Prefetch feature. In some cases, setting this option to disabled can improve performance. Typically, setting this option to enabled provides better performance. Only disable this option after performing application benchmarking to verify improved performance in the environment. The HW Prefetcher fetches streams of data and instructions from memory into the second-level (L2) cache if it determines this data is likely to be required in the near future. The prefetcher is capable of handling multiple streams in either the forward or backward direction. The HW Prefetcher is triggered when successive cache misses occur in the last-level cache and a stride in the access pattern is detected, such as in the case of loop iterations that access array elements. The prefetching occurs up to a page boundary. This option can reduce the latency associated with memory reads. Values for this BIOS setting can be enabled or disabled.
Adjacent Sector Prefetch (Default = Enabled):
Use this option to disable the processor Adjacent Sector Prefetch feature. In some cases, setting this option to disabled can improve performance. Typically, setting this option to enabled provides better performance. Only disable this option after performing application benchmarking to verify improved performance in the environment. The Adjacent Sector Prefetch retrieves both sectors of a cache line when it requires data that isn't currently in the cache. When disabled, the processor will only fetch the sector of the cache line that includes the requested data. Values for this BIOS setting can be enabled or disabled.
Intel UPI Link Enablement (Default = Auto):
Use this option to configure the UPI topology to use fewer links between processors, when available. Changing from the default can reduce UPI bandwidth performance in exchange for less power consumption. Values for this BIOS setting can be: Auto and Single Link Operation.
Intel UPI Link Power Management (Default = Enabled):
Use this option to place the Ultra Path Interconnect (UPI) links into a low power state when the links are not being used. This lowers power usage with minimal effect on performance. You can only configure this option if two or more CPUs are present and the Workload Profile is set to Custom. Values for this BIOS setting can be: enabled and disabled.
Intel UPI Link Frequency (Default = Auto):
Use this option to set the UPI Link frequency to a lower speed. Running at a lower frequency can reduce power consumption, but can also affect system performance. You can only configure this option if two or more CPUs are present and the Workload Profile is set to Custom. Values for this BIOS setting can be: Auto and Min UPI Speed.
Direct to UPI (D2K) (Default = Auto):
Provides a performance benefit in multiprocessor systems that rely on the UPI bus for memory or I/O accesses, because the Last Level Cache (LLC) reduces the latency of cache misses. Values for this BIOS setting can be:
Advanced Memory Protection (Default = Advanced ECC Support):
Use this option to configure additional memory protection with ECC (Error Checking and Correcting). Options and support vary per system and configuration. Values for this BIOS setting can be:
Intel DMI Link Frequency (Default = Auto):
Use this option to set the DMI Link frequency to a lower frequency between the processor and PCH. Running at a lower frequency can reduce power consumption, but can also affect system performance. Values for this BIOS setting can be:
Dead Block Predictor (Default = Disabled):
This feature could benefit multi-threaded workloads based upon improved prediction of cache line evictions. Values for this BIOS setting can be Disabled or Enabled.
Last modified Dec 7, 2022.
Flag description origin markings:
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2024 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.9.
Report generated on 2024-01-29 17:27:40 by SPEC CPU2017 flags formatter v5178.