Copyright © 2006 Intel Corporation. All Rights Reserved.
Enables optimizations for speed and disables some optimizations that increase code size and affect speed, thereby limiting code size.
The O1 option may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.
Enables optimizations for speed. This is the generally recommended optimization level. This option also enables:
- Inlining of intrinsics
- Intra-file interprocedural optimizations, which include:
  - inlining
  - constant propagation
  - forward substitution
  - routine attribute propagation
  - variable address-taken analysis
  - dead static function elimination
  - removal of unreferenced variables
- The following capabilities for performance gain:
  - constant propagation
  - copy propagation
  - dead-code elimination
  - global register allocation
  - global instruction scheduling and control speculation
  - loop unrolling
  - optimized code selection
  - partial redundancy elimination
  - strength reduction/induction variable simplification
  - variable renaming
  - exception handling optimizations
  - tail recursions
  - peephole optimizations
  - structure assignment lowering and optimizations
  - dead store elimination
Enables O2 optimizations plus more aggressive optimizations for maximum speed, such as prefetching, scalar replacement, and loop and memory access transformations.
On IA-32 and Intel EM64T processors, when O3 is used with options -ax or -x (Linux) or with options /Qax or /Qx (Windows), the compiler performs more aggressive data dependency analysis than for O2, which may result in longer compilation times. The O3 optimizations may not improve performance unless loop and memory access transformations take place; in some cases they may slow down code compared to O2. The O3 option is recommended for applications that have loops that heavily use floating-point calculations and process large data sets.
Tells the compiler the maximum number of times to unroll loops. For example, -funroll-loops0 disables loop unrolling.
-fno-builtin disables inline expansion for all intrinsic functions.
This option trades off floating-point precision for speed by removing the restriction to conform to the IEEE standard.
EBP is used as a general-purpose register in optimizations.
Places each function in its own COMDAT section.
Flushes denormal results to zero.
This option sets the maximum number of times a loop can be unrolled to n. For example, -unroll1 unrolls loops just once. To disable loop unrolling, use -unroll0.
The -par-schedule option lets you specify a scheduling algorithm or a tuning method for loop iterations: it determines how iterations are divided among the threads of the team. This option affects performance tuning and can provide better performance during auto-parallelization.
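As an illustrative sketch (the -par-schedule-<keyword> spelling and the source file name prog.c are assumptions, not taken from this report), dynamic scheduling of loop iterations during auto-parallelization could be requested as:
  icc -O3 -parallel -par-schedule-dynamic prog.c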
This option enables additional interprocedural optimizations for single file compilation. These optimizations are a subset of full intra-file interprocedural optimizations. One of these optimizations enables the compiler to perform inline function expansion for calls to functions defined within the current source file.
Enables multi-file interprocedural (IP) optimizations, which include:
- inline function expansion
- interprocedural constant propagation
- dead code elimination
- propagation of function characteristics
- passing arguments in registers
- loop-invariant code motion
This option instructs the compiler to analyze and transform the program so that 64-bit pointers are shrunk to 32-bit pointers, and 64-bit longs (on Linux) are shrunk into 32-bit longs wherever it is legal and safe to do so. In order for this option to be effective the compiler must be able to optimize using the -ipo/-Qipo option and must be able to analyze all library/external calls the program makes.
This option requires that the size of the program executable never exceeds 2^32 bytes and all data values can be represented within 32 bits. If the program can run correctly in a 32-bit system, these requirements are implicitly satisfied. If the program violates these size restrictions, unpredictable behavior might occur.
-scalar-rep enables scalar replacement performed during loop transformation. To use this option, you must also specify O3. -scalar-rep- disables this optimization.
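A minimal usage sketch, assuming the icc driver and a placeholder source file prog.c; the option requires O3:
  icc -O3 -scalar-rep prog.c     # enable scalar replacement during loop transformation
  icc -O3 -scalar-rep- prog.c    # explicitly disable it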
This option tells the compiler to assume no aliasing in the program.
The -fast option enhances execution speed across the entire program by including the following options that can improve run-time performance:
-O3 (maximum speed and high-level optimizations)
-ipo (enables interprocedural optimizations across files)
-xSSSE3 (generate code specialized for Intel(R) Core(TM)2 Duo processors, Intel(R) Core(TM)2 Quad processors and Intel(R) Xeon(R) processors with SSSE3)
-static (statically links in libraries at link time)
-no-prec-div (disable -prec-div) where -prec-div improves precision of FP divides (some speed impact)
To override one of the options set by -fast, specify that option after -fast on the command line. The exception is the -xT (Linux) or /QxT (Windows) option, which cannot be overridden. The options set by -fast may change from release to release.
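Based on the list above, the following two command lines would be roughly equivalent (the driver name icc and the file prog.c are placeholders; the exact expansion of -fast depends on the compiler release):
  icc -fast prog.c
  icc -O3 -ipo -xSSSE3 -static -no-prec-div prog.c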
Compiler option to statically link in libraries at link time
Code is optimized for Intel(R) processors with support for SSE 4.2 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the resulting executable will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSE 4.1 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the resulting executable will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) Atom(TM) processors. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the resulting executable will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel(R) processors with support for SSSE3 instructions. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the resulting executable will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium M and compatible Intel processors. The resulting code may contain unconditional use of features that are not supported on other processors. This option also enables new optimizations in addition to Intel processor-specific optimizations including advanced data layout and code restructuring optimizations to improve memory accesses for Intel processors.
Do not use this option if you are executing the program on a processor that is not an Intel processor. If you use this option to compile the main program (in Fortran) or the function main() in C/C++, the resulting executable will display a fatal run-time error if it is executed on an unsupported processor.
Code is optimized for Intel Pentium 4 and compatible Intel processors; this is the default for Intel(R) EM64T systems. The resulting code may contain unconditional use of features that are not supported on other processors.
Tells the auto-parallelizer to generate multithreaded code for loops that can be safely executed in parallel. To use this option, you must also specify option O2 or O3. The default number of threads spawned is equal to the number of processors detected in the system where the binary is run. This can be changed by setting the environment variable OMP_NUM_THREADS.
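A typical sequence, assuming the icc driver and placeholder file names, combines the option with O3 and sets the thread count at run time:
  icc -O3 -parallel prog.c -o prog
  export OMP_NUM_THREADS=4    # override the default of one thread per detected processor
  ./prog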
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide.lib on the link line statically links in libguide.lib, allowing auto-parallelized binaries to work on systems that do not have the dynamic version of this library installed.
The use of -Qparallel to generate auto-parallelized code requires support libraries that are dynamically linked by default. Specifying libguide40.lib on the link line statically links in libguide40.lib, allowing auto-parallelized binaries to work on systems that do not have the dynamic version of this library installed.
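A sketch of the Windows link step described above (the file name prog.c is a placeholder); naming the library on the command line makes the linker use the static copy:
  icl /Qparallel prog.c libguide40.lib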
Optimizes for Intel Pentium 4 and compatible processors with Streaming SIMD Extensions 2 (SSE2).
-no-prec-div enables optimizations that give slightly less precise results than full IEEE division.
When you specify -no-prec-div along with some optimizations, such as -xN and -xB (Linux) or /QxN and /QxB (Windows), the compiler may change floating-point division computations into multiplication by the reciprocal of the denominator. For example, A/B is computed as A * (1/B) to improve the speed of the computation.
However, sometimes the value produced by this transformation is not as accurate as full IEEE division. When it is important to have fully precise IEEE division, do not use -no-prec-div. This will enable the default -prec-div and the result will be more accurate, with some loss of performance.
Instrument program for profiling for the first phase of two-phase profile guided optimization. This instrumentation gathers information about a program's execution paths and data values but does not gather information from hardware performance counters. The profile instrumentation also gathers data for optimizations which are unique to profile-feedback optimization.
Instructs the compiler to produce a profile-optimized executable and merges available dynamic information (.dyn) files into a pgopti.dpi file. If you perform multiple executions of the instrumented program, -prof-use merges the dynamic information files again and overwrites the previous pgopti.dpi file. Without any other options, the current directory is searched for .dyn files.
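A sketch of the two-phase flow using these options (the driver icc and the file names are placeholders for whatever is being built):
  icc -prof-gen -o app app.c    # phase 1: build an instrumented executable
  ./app                         # run a representative workload; .dyn files are written
  icc -prof-use -o app app.c    # phase 2: rebuild using the merged pgopti.dpi profile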
Enable SmartHeap and/or other library usage by forcing the linker to ignore multiple definitions if present
MicroQuill SmartHeap Library V8.1 (64-bit) available from http://www.microquill.com/
MicroQuill SmartHeap Library V8.1 (32-bit) available from http://www.microquill.com/
Sets the stack reserve amount that is passed to the linker.
Enable use of ANSI aliasing rules in optimizations. This option tells the compiler to assume that the program adheres to ISO C Standard aliasability rules.
If your program adheres to these rules, then this option allows the compiler to optimize more aggressively.
If it does not adhere to these rules, this option can cause the compiler to generate incorrect code.
Enables or disables (disabled is the default) compiler generation of prefetch instructions to prefetch data.
Directs the compiler to inline calloc() calls as malloc()/memset()
The compiler adds setup code in the C/C++/Fortran main function to enable optimal malloc algorithms, using the two parameters M_MMAP_MAX and M_TRIM_THRESHOLD.
These parameters are set through the C library function int mallopt(int param, int value). When calling mallopt, the param argument specifies the parameter to be set, and value the new value. Possible choices for param, including M_MMAP_MAX and M_TRIM_THRESHOLD, are defined in malloc.h.
Enables cache/bandwidth optimization for stores under conditionals (within vector loops). This option tells the compiler to perform a conditional check in a vectorized loop. This checking avoids unnecessary stores and may improve performance by conserving bandwidth.
Enables the compiler to generate run-time control code for effective automatic parallelization. This option generates code to perform run-time checks for loops that have symbolic loop bounds. If the granularity of a loop is greater than the parallelization threshold, the loop will be executed in parallel. If you do not specify this option, the compiler may not parallelize loops with symbolic loop bounds if the compile-time granularity estimation of a loop cannot ensure it is beneficial to parallelize the loop.
Select the method that the register allocator uses to partition each routine into regions
Select the method that the register allocator uses to partition each routine into regions
Multi-versioning is used for generating different versions of the loop based on run time dependence testing, alignment and checking for short/long trip counts. If this option is turned on, it will trigger more versioning at the expense of creating more overhead to check for pointer aliasing and scalar replacement.
Make all local variables AUTOMATIC. Same as -automatic
Enables more aggressive unrolling heuristics
Specifies whether streaming stores are generated:
always - enables generation of streaming stores under the assumption that the application is memory bound
auto - compiler decides when streaming stores are used (DEFAULT)
never - disables generation of streaming stores
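As a usage sketch, assuming the Linux option spelling -opt-streaming-stores for this compiler generation (the option spelling and the file name are assumptions):
  icc -O3 -opt-streaming-stores always prog.c    # assume the application is memory bound
  icc -O3 -opt-streaming-stores never prog.c     # suppress streaming stores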
Disables inline expansion of all intrinsic functions.
Disables conformance to the ANSI C and IEEE 754 standards for floating-point arithmetic.
Allows use of EBP as a general-purpose register in optimizations.
This option enables most speed optimizations, but disables some that increase code size for a small speed benefit.
This option enables global optimizations.
Specifies the level of inline function expansion.
Ob0 - Disables inlining of user-defined functions. Note that statement functions are always inlined.
Ob1 - Enables inlining when an inline keyword or an inline attribute is specified. Also enables inlining according to the C++ language.
Ob2 - Enables inlining of any function at the compiler's discretion.
This option tells the compiler to separate functions into COMDATs for the linker.
This option enables read only string-pooling optimization.
This option enables read/write string-pooling optimization.
This option disables stack-checking for routines with 4096 bytes of local variables and compiler temporaries.
For mixed-language benchmarks, tell the compiler to convert routine names to lowercase for compatibility
For mixed-language benchmarks, tell the compiler to assume that routine names end with an underscore
Tell the compiler to treat source files as C++ regardless of the file extension
This option specifies that the main program is not written in Fortran. It is a link-time option that prevents the compiler from linking for_main.o into applications.
For example, if the main program is written in C and calls a Fortran subprogram, specify -nofor-main when compiling the program with the ifort command. If you omit this option, the main program must be a Fortran program.
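A minimal sketch of such a mixed-language build (file names are placeholders):
  icc -c cmain.c                      # C source containing main()
  ifort -c fsub.f90                   # Fortran subprogram called from C
  ifort -nofor-main cmain.o fsub.o    # link with ifort but without for_main.o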
Invoke the Intel C compiler 11.1 for Intel 64 applications
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel C++ compiler 11.1 for Intel 64 applications
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel Fortran compiler 11.1 for Intel 64 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel C compiler 11.1 for IA32 applications
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel C++ compiler 11.1 for IA32 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Invoke the Intel Fortran compiler 11.1 for IA32 applications.
You need binutils 2.16.91.0.7 or later with this compiler to support new instructions on Intel Core 2 processors
Platform settings
One or more of the following settings may have been set. If so, the "General Notes" section of the report will say so; and you can read below to find out more about what these settings mean.
KMP_STACKSIZE
Specify stack size to be allocated for each thread.
KMP_AFFINITY
KMP_AFFINITY = < physical | logical >, starting-core-id
specifies the static mapping of user threads to physical cores. For example,
if you have a system configured with 8 cores, OMP_NUM_THREADS=8, and
KMP_AFFINITY=physical,0, then thread 0 will be mapped to core 0, thread 1 will be mapped to core 1, and
so on in a round-robin fashion.
KMP_AFFINITY = granularity=fine,scatter
The value for the environment variable KMP_AFFINITY affects how the threads from an auto-parallelized program are scheduled across processors.
Specifying granularity=fine selects the finest granularity level, causing each OpenMP thread to be bound to a single thread context.
This ensures that there is only one thread per core on cores supporting Hyper-Threading Technology.
Specifying scatter distributes the threads as evenly as possible across the entire system.
Hence a combination of these two options will spread the threads evenly across sockets, with one thread per physical core.
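For example, the settings described above are applied at run time through environment variables (an 8-core system is assumed):
  export OMP_NUM_THREADS=8
  export KMP_AFFINITY=physical,0
or, for the second form:
  export KMP_AFFINITY=granularity=fine,scatter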
OMP_NUM_THREADS
Sets the maximum number of threads to use for OpenMP* parallel regions if no other value is specified in the application. This environment variable applies to both -openmp and -parallel (Linux and Mac OS X) or /Qopenmp and /Qparallel (Windows). Example syntax on a Linux system with 8 cores: export OMP_NUM_THREADS=8
Hardware Prefetch:
This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern-recognition algorithm.
In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
Adjacent Sector Prefetch:
This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss.
In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.
ulimit -s <n>
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
mkdir /dev/cpuset; mount -t cpuset none /dev/cpuset; echo 1 > /dev/cpuset/memory_spread_page
There are two boolean flag files per cpuset that control where the kernel allocates pages for the file system buffers and related in-kernel data structures: 'memory_spread_page' and 'memory_spread_slab'.
If the per-cpuset boolean flag file 'memory_spread_page' is set, then the kernel will spread the file system buffers (page cache) evenly over all the nodes that the faulting task is allowed to use, instead of preferring to put those pages on the node where the task is running.
If the per-cpuset boolean flag file 'memory_spread_slab' is set, then the kernel will spread some file-system-related slab caches, such as those for inodes and dentries, evenly over all the nodes that the faulting task is allowed to use, instead of preferring to put those pages on the node where the task is running. The setting of these flags does not affect the anonymous data segment or stack segment pages of a task.
By default, both kinds of memory spreading are off, and memory pages are allocated on the node local to where the task is running, except perhaps as modified by the task's NUMA mempolicy or cpuset configuration, so long as sufficient free memory pages are available. When new cpusets are created, they inherit the memory spread settings of their parent. Setting memory spreading causes allocations for the affected page or slab caches to ignore the task's NUMA mempolicy and be spread instead. Tasks using mbind() or set_mempolicy() calls to set NUMA mempolicies will not notice any change in these calls as a result of their containing task's memory spread settings. If memory spreading is turned off, then the currently specified NUMA mempolicy once again applies to memory page allocations.
Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag files. By default they contain "0", meaning that the feature is off for that cpuset. If a "1" is written to that file, then that turns the named feature on.
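Building on the commands shown above, slab-cache spreading can be enabled the same way, and the current settings can be inspected by reading the flag files:
  echo 1 > /dev/cpuset/memory_spread_slab
  cat /dev/cpuset/memory_spread_page /dev/cpuset/memory_spread_slab    # each prints 1 when enabled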
Using numactl to bind processes and memory to cores
For multi-copy runs or single copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process. "-l" instructs numactl to keep a process memory on the local node while "-m" specifies which node(s) to place a process memory. For full details on using numactl, please refer to your Linux documentation, 'man numactl'
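For example (the binary name ./benchmark and the core and node numbers are illustrative only):
  numactl --physcpubind=0 -l ./benchmark      # bind to core 0, keep memory on the local node
  numactl --physcpubind=4 -m 1 ./benchmark    # bind to core 4, place memory on node 1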