Copyright © 2016 Intel Corporation. All Rights Reserved.
Invoke the Intel C compiler 17.0 for Intel 64 applications
Invoke the Intel C++ compiler 17.0 for Intel 64 applications
This macro specifies that the target system uses the LP64 data model; specifically, that integers are 32 bits, while longs and pointers are 64 bits.
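As a minimal illustration of the LP64 data model described above (a sketch, not part of the original flags file), the following C program prints the type sizes that SPEC_CPU_LP64 asserts; on an LP64 Linux/x86-64 system it is expected to print 4, 8, and 8:

    #include <stdio.h>

    /* Minimal LP64 sanity check: under the LP64 data model, int is 32 bits
     * while long and pointers are 64 bits. */
    int main(void)
    {
        printf("sizeof(int)    = %zu\n", sizeof(int));    /* expected: 4 */
        printf("sizeof(long)   = %zu\n", sizeof(long));   /* expected: 8 */
        printf("sizeof(void *) = %zu\n", sizeof(void *)); /* expected: 8 */
        return 0;
    }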
This macro indicates that the benchmark is being compiled on an AMD64-compatible system running the Linux operating system.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Portability changes for Linux
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This flag can be set for SPEC compilation for Linux when using the default compiler.
Code is optimized for Intel(R) processors with support for AVX2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
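Because code built with this option may use AVX2 instructions unconditionally, it can be useful to verify AVX2 support before running such a binary. The sketch below assumes the GCC/Clang builtin __builtin_cpu_supports, which is an illustrative choice and not something this flags file mandates:

    #include <stdio.h>

    /* Sketch: query AVX2 support at run time before dispatching to code
     * that unconditionally uses AVX2 instructions. */
    int main(void)
    {
        if (__builtin_cpu_supports("avx2"))
            printf("AVX2 available: AVX2-optimized code is safe on this CPU.\n");
        else
            printf("No AVX2: a binary built for AVX2 may fail on this CPU.\n");
        return 0;
    }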
Multi-file interprocedural optimizations, which include:
- inline function expansion
- interprocedural constant propagation
- dead code elimination
- propagation of function characteristics
- passing arguments in registers
- loop-invariant code motion
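As a rough sketch of what the multi-file interprocedural optimizations listed above enable (all file and function names here are hypothetical), a helper defined in one source file can be inlined into a hot loop in another, after which the constant argument can be propagated and the loop body simplified:

    /* scale.c (hypothetical): helper defined in its own translation unit. */
    double scale(double x, double factor) { return x * factor; }

    /* main.c (hypothetical): caller in another translation unit. */
    #include <stdio.h>

    double scale(double x, double factor);

    int main(void)
    {
        double sum = 0.0;
        /* With multi-file IPO the call to scale() can be inlined across
         * files, the constant 2.0 propagated, and the loop simplified. */
        for (int i = 0; i < 1000; i++)
            sum += scale((double)i, 2.0);
        printf("sum = %f\n", sum);
        return 0;
    }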
Enables O2 optimizations plus more aggressive optimizations for maximum speed, such as prefetching, scalar replacement, and loop and memory access transformations.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
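One example of the loop and memory access transformations mentioned above is loop interchange. In the hypothetical C fragment below, the inner loop strides down a column of a row-major array; interchanging the two loops would make the inner loop walk contiguous memory instead:

    #include <stdio.h>

    #define N 512

    static double a[N][N];

    int main(void)
    {
        double sum = 0.0;
        /* Column-major traversal of a row-major array: each inner iteration
         * touches a different cache line.  A loop-interchange transformation
         * would swap the i and j loops so memory is accessed contiguously. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        printf("%f\n", sum);
        return 0;
    }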
-no-prec-div enables optimizations that give slightly less precise results than full IEEE division.
When you specify -no-prec-div along with some optimizations, such as -xN and -xB (Linux) or /QxN and /QxB (Windows), the compiler may change floating-point division computations into multiplication by the reciprocal of the denominator. For example, A/B is computed as A * (1/B) to improve the speed of the computation.
However, sometimes the value produced by this transformation is not as accurate as full IEEE division. When it is important to have fully precise IEEE division, do not use -no-prec-div. This will enable the default -prec-div and the result will be more accurate, with some loss of performance.
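A small, purely illustrative comparison of full IEEE division with the reciprocal-multiply form described above; because A * (1/B) involves two roundings instead of one, some quotients can differ in the last bit:

    #include <stdio.h>

    int main(void)
    {
        double b = 49.0;   /* arbitrary illustrative divisor */
        int diffs = 0;

        /* Count how many quotients change when division is replaced by
         * multiplication with the reciprocal; the count may be zero for
         * some divisors, but is often nonzero. */
        for (int i = 1; i <= 1000; i++) {
            double a = (double)i;
            if (a / b != a * (1.0 / b))
                diffs++;
        }
        printf("%d of 1000 quotients differ between a/b and a*(1/b)\n", diffs);
        return 0;
    }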
Tells the auto-parallelizer to generate multithreaded code for loops that can be safely executed in parallel. To use this option, you must also specify option O2 or O3. The default number of threads spawned is equal to the number of processors detected in the system where the binary is compiled. This can be changed by setting the environment variable OMP_NUM_THREADS.
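A hypothetical example of a loop the auto-parallelizer can safely multithread: every iteration writes a distinct element and no iteration depends on another, so splitting the iterations across threads cannot change the result. Setting OMP_NUM_THREADS before running the binary then controls how many threads are used:

    #include <stdio.h>

    #define N 1000000

    static double x[N], y[N];

    int main(void)
    {
        /* No iteration reads a value written by another iteration, so the
         * loop can be divided among threads without changing the result. */
        for (int i = 0; i < N; i++)
            y[i] = 2.0 * x[i] + 1.0;

        printf("y[0] = %f, y[%d] = %f\n", y[0], N - 1, y[N - 1]);
        return 0;
    }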
Enables or disables (the default) compiler generation of prefetch instructions to prefetch data.
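To make the idea of prefetch instructions concrete, the sketch below inserts prefetches by hand using the GCC/Clang builtin __builtin_prefetch (an assumption made only for illustration); the compiler option automates this kind of insertion:

    #include <stdio.h>

    #define N 100000
    #define AHEAD 16   /* prefetch distance in elements (illustrative) */

    static double data[N];

    int main(void)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            /* Request that data needed about AHEAD iterations from now be
             * brought into the cache before it is used. */
            if (i + AHEAD < N)
                __builtin_prefetch(&data[i + AHEAD], 0, 1);
            sum += data[i];
        }
        printf("%f\n", sum);
        return 0;
    }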
This option instructs the compiler to analyze and transform the program so that 64-bit pointers are shrunk to 32-bit pointers wherever it is legal and safe to do so. In order for this option to be effective, the compiler must optimize using the -ipo option and must be able to analyze all library/external calls the program makes. This option has no effect unless you specify SSE3 or higher for the -x option.
This option requires that the application not exceed a 32-bit address space; otherwise, unpredictable results can occur.
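The space saving behind pointer shrinking can be sketched as follows: if the program fits in a 32-bit address space, a pointer field only needs 4 bytes instead of 8. The 32-bit variant below merely imitates the idea with an explicit 32-bit index; the real transformation is performed internally by the compiler:

    #include <stdio.h>
    #include <stdint.h>

    /* A node with a full 64-bit pointer field on an LP64 system ... */
    struct node64 {
        int            value;
        struct node64 *next;   /* 8 bytes */
    };

    /* ... and an illustrative variant using a 32-bit index instead,
     * imitating what pointer shrinking achieves when the program fits in a
     * 32-bit address space. */
    struct node32 {
        int      value;
        uint32_t next;         /* 4 bytes */
    };

    int main(void)
    {
        printf("sizeof(struct node64) = %zu\n", sizeof(struct node64)); /* typically 16 */
        printf("sizeof(struct node32) = %zu\n", sizeof(struct node32)); /* typically 8  */
        return 0;
    }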
Code is optimized for Intel(R) processors with support for AVX2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing a program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the program will display a fatal run-time error if it is executed on an unsupported processor.
Multi-file interprocedural optimizations, which include:
- inline function expansion
- interprocedural constant propagation
- dead code elimination
- propagation of function characteristics
- passing arguments in registers
- loop-invariant code motion
Enables O2 optimizations plus more aggressive optimizations for maximum speed, such as prefetching, scalar replacement, and loop and memory access transformations.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not cause higher performance unless loop and memory access transformations take place. The optimizations may slow down code in some cases compared to O2 optimizations. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
-no-prec-div enables optimizations that give slightly less precise results than full IEEE division.
When you specify -no-prec-div along with some optimizations, such as -xN and -xB (Linux) or /QxN and /QxB (Windows), the compiler may change floating-point division computations into multiplication by the reciprocal of the denominator. For example, A/B is computed as A * (1/B) to improve the speed of the computation.
However, sometimes the value produced by this transformation is not as accurate as full IEEE division. When it is important to have fully precise IEEE division, do not use -no-prec-div. This will enable the default -prec-div and the result will be more accurate, with some loss of performance.
Enables or disables (the default) compiler generation of prefetch instructions to prefetch data.
This option instructs the compiler to analyze and transform the program so that 64-bit pointers are shrunk to 32-bit pointers wherever it is legal and safe to do so. In order for this option to be effective, the compiler must optimize using the -ipo option and must be able to analyze all library/external calls the program makes. This option has no effect unless you specify SSE3 or higher for the -x option.
This option requires that the application not exceed a 32-bit address space; otherwise, unpredictable results can occur.
Enables SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions, if present.
MicroQuill SmartHeap Library (64-bit) available from http://www.microquill.com/
This allows alloca to be set to the compiler's preferred alloca, as permitted by SPEC rules.
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Enables optimizations for speed. This is the generally recommended
optimization level. This option also enables:
- Inlining of intrinsics
- Intra-file interprocedural optimizations, which include:
- inlining
- constant propagation
- forward substitution
- routine attribute propagation
- variable address-taken analysis
- dead static function elimination
- removal of unreferenced variables
- The following capabilities for performance gain:
- constant propagation
- copy propagation
- dead-code elimination
- global register allocation
- global instruction scheduling and control speculation
- loop unrolling
- optimized code selection
- partial redundancy elimination
- strength reduction/induction variable simplification
- variable renaming
- exception handling optimizations
- tail recursions
- peephole optimizations
- structure assignment lowering and optimizations
- dead store elimination
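A compact, hypothetical C loop illustrating a few of the capabilities listed above; the comments note what loop-invariant code motion and strength reduction are entitled to do with it:

    #include <stdio.h>

    int main(void)
    {
        int n = 100, scale = 7;
        long sum = 0;

        for (int i = 0; i < n; i++) {
            int invariant = scale * 4;  /* loop-invariant code motion: computed once, hoisted out */
            int offset = i * 8;         /* strength reduction: the multiply can become offset += 8 */
            sum += invariant + offset;
        }
        printf("sum = %ld\n", sum);
        return 0;
    }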
Enables optimizations for speed and, to limit code size, disables some optimizations that increase code size and affect speed.
The O1 option may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.
-O1 sets the following options:
Tells the compiler the maximum number of times to unroll loops. For example, -funroll-loops0 would disable unrolling of loops.
-fno-builtin disables inline expansion for all intrinsic functions.
This option trades off floating-point precision for speed by removing the restriction to conform to the IEEE standard.
EBP is used as a general-purpose register in optimizations.
Places each function in its own COMDAT section.
Flushes denormal results to zero.
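To make the flush-to-zero item concrete, the sketch below produces a denormal (subnormal) double: under default IEEE behavior it prints a tiny nonzero value, while with flush-to-zero in effect the same computation yields exactly 0:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* DBL_MIN is the smallest normalized double; halving it produces a
         * denormal (subnormal) result.  With flush-to-zero enabled, such a
         * result is replaced by 0, trading accuracy for speed. */
        double tiny = DBL_MIN / 2.0;

        printf("DBL_MIN / 2 = %g (nonzero means denormals are preserved)\n", tiny);
        return 0;
    }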
Adjacent Cache Line Prefetch:
This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss. In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
C States:
Allows the processor to enter lower power states when idle. When set to Enabled (OS controlled) or when set to Autonomous (if hardware control is supported), the processor can operate in all available Power States to save power, but may increase memory latency and frequency jitter.
C1E:
When set to Enabled, the processor is allowed to switch to minimum performance state when idle. Otherwise, the performance state is at maximum when idle.
Collaborative CPU Performance Control:
Enables/disables the joint OS-System CPU power management control feature.
CPU Performance:
If supported by the CPU, Hardware P States is a performance-per-watt option that relies solely on the CPU to dynamically control individual core frequency.
CPU Power Management:
This BIOS setting allows configuration of various demand-based switching schemes. Maximum Performance maintains full voltage to processor internal components, even during periods of inactivity, eliminating the performance penalty associated with the phase transitions between high and low load.
Data Reuse:
Enabling this BIOS option reduces the frequency of L3 cache updates from L1. This may improve performance by reducing the internal bandwidth consumed by constantly updating L1 cache lines in L3. Since this results in more fetches to main memory, setting this option to Disabled may improve performance in some cases. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
Energy Efficient Policy:
The CPU uses this setting to manipulate the internal behavior of the processor and determine whether to target higher performance or better power savings. Options are Performance, Balanced Performance, Balanced Energy, and Energy Efficient.
Energy Efficient Turbo:
EET is a mode of operation where a processor's core frequency is adjusted within the turbo range based on workload.
Execute disable:
This is a security feature designed to prevent certain types of buffer overflow attacks by restricting the areas of memory in which applications can execute code. In general, it is best to leave this option Enabled for the security benefits, as no real performance advantage has been observed from disabling this feature in the BIOS.
Hardware Prefetcher:
This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern-recognition algorithm. In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
High Bandwidth:
Enabling this option allows the chipset to defer memory transactions and process them out of order for optimal performance.
Logical Processor:
This BIOS setting enables/disables Intel's Hyper-Threading (HT) Technology. With HT Technology, the operating system can execute two threads in parallel within each processor core.
Memory Frequency:
This BIOS setting allows the memory to be clocked to the highest supported frequency.
Memory Patrol Scrub:
Patrol Scrubbing is a custom System Profile option that scans the memory for bit errors and corrects them whenever possible. When set to Disabled, no patrol scrubbing occurs. When set to Standard Mode, the entire memory array is scrubbed once in a 24-hour period. When set to Extended Mode, the entire memory array is scrubbed every hour to further increase system reliability.
Memory Refresh Rate:
Selects the frequency at which the system memory controller performs the DRAM technology data refresh operation.
Monitor/Mwait:
Enables/disables use of the CPU opcodes defined to provide more efficient system software thread synchronization between multiple agents.
Node Interleaving:
This BIOS option allows the enabling/disabling of memory interleaving across CPU nodes. When disabled, each CPU chip can only access memory within its own node.
Snoop Mode:
Allows tuning of memory performance under different memory bandwidths. The optimal Snoop Mode setting is highly dependent on workload type.
Cluster on Die (COD) is best used for highly NUMA-optimized workloads. This setting offers the best-case local memory latency, but the worst-case remote latency.
Opportunistic Snoop Broadcast works well for workloads of mixed NUMA optimization. It offers a good balance of latency and bandwidth.
System Profile:
This BIOS option sets the performance and power management aggressiveness for the system. It is a collection of selections including a custom selection designed to allow customers to choose the ideal operating profile for their server system environment. It includes settings like CPU Power Management, Memory Frequency, Turbo Boost, C1E and C States.
Turbo Boost:
Intel Turbo Boost Technology is a processor feature which allows the processor to transition to a higher frequency than the processor's rated speed if the processor has available power headroom and is within temperature specifications. Disabling this feature will reduce power usage but will reduce the system's maximum achievable performance under some workloads.
Uncore Frequency:
Selects the running frequency of the CPU's internal uncore.
Dynamic mode allows the processor to optimize power resources across the cores and uncore during runtime. The optimization of the uncore frequency to either save power or improve performance is influenced by the setting of the Energy Efficiency Policy.
Virtualization technology:
When this option is set to ENABLED, the BIOS will enable processor Virtualization features and provide the virtualization support to the OS through the DMAR table. In general, only virtualized environments such as VMware ESX, Microsoft Hyper-V, Red Hat KVM, and other virtualized operating systems will take advantage of these features. Disabling this feature is not known to significantly alter the performance or power characteristics of the system, so leaving this option Enabled is advised for most cases.
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2017 Standard Performance Evaluation Corporation
Tested with SPEC CPU2006 v1.2.
Report generated on Tue Mar 7 16:14:42 2017 by SPEC CPU2006 flags formatter v6906.