clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
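As an illustration of the LP64 data model this macro describes, the following small C program (a sketch for illustration only, not part of any benchmark source) prints the type widths one would expect on an LP64 system:

    #include <stdio.h>

    int main(void) {
        /* Under LP64: int is 32 bits; long and pointers are 64 bits. */
        printf("sizeof(int)   = %zu\n", sizeof(int));    /* expected: 4 */
        printf("sizeof(long)  = %zu\n", sizeof(long));   /* expected: 8 */
        printf("sizeof(void*) = %zu\n", sizeof(void *)); /* expected: 8 */
        return 0;
    }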
This macro indicates that Fortran functions called from C should have their names lower-cased.
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option instructs the compiler to treat the char type as unsigned.
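A minimal sketch (illustration only) of the difference this makes for plain char:

    #include <stdio.h>

    int main(void) {
        char c = (char)0xFF;
        /* With char treated as unsigned, c holds 255; with a signed
           plain char it would typically hold -1. */
        printf("%d\n", (int)c);
        return 0;
    }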
Some systems need to see alternate definitions for boolean types. This flag enables their use.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Fortran-to-C symbol naming: C symbol names are lower case with one underscore (_symbol).
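A minimal mixed-language sketch of what this naming convention means for a C caller; the exact placement of the underscore depends on the compiler's name-mangling scheme, so the prototype below is an assumption for illustration only:

    /* Fortran side (illustrative):
     *   SUBROUTINE ADDONE(N)
     *   INTEGER N
     *   N = N + 1
     *   END
     */

    /* C caller, assuming the lower-case-plus-underscore convention: */
    extern void addone_(int *n);   /* hypothetical mangled name */

    int call_from_c(void) {
        int n = 41;
        addone_(&n);   /* Fortran dummy arguments are passed by reference */
        return n;      /* 42 once linked against the Fortran object */
    }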
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This macro indicates that Fortran functions called from C should have their names lower-cased.
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option instructs the compiler to treat the char type as unsigned.
Some systems need to see alternate definitions for boolean types. This flag enables their use.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Fortran-to-C symbol naming: C symbol names are lower case with one underscore (_symbol).
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32 bits wide, and longs and pointers are 64 bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
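A rough source-level analogy of this specialization (hand-written here with hypothetical names; the compiler performs the equivalent on its intermediate representation):

    /* Original: 'scale' is frequently called with the constant 2. */
    static long scale(long x, long factor) { return x * factor; }

    /* Specialized clone the optimizer may create for factor == 2; the
       constant makes the body trivial and easy to inline. */
    static long scale_by_2(long x) { return x * 2; }

    long sum_doubled(const long *a, int n) {
        long s = 0;
        for (int i = 0; i < n; ++i)
            s += scale_by_2(a[i]);   /* call site redirected to the clone */
        return s;
    }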
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
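A hedged source-level sketch of the effect of loop fusion (the compiler performs this on its intermediate representation, not on the source):

    /* Before fusion: two adjacent loop nests walk the same arrays. */
    void before(float *restrict a, float *restrict b,
                const float *restrict c, int n) {
        for (int i = 0; i < n; ++i) a[i] = 2.0f * c[i];
        for (int i = 0; i < n; ++i) b[i] = a[i] + c[i];
    }

    /* After fusion: one loop body reuses a[i] and c[i] while they are
       still in registers or cache. */
    void after(float *restrict a, float *restrict b,
               const float *restrict c, int n) {
        for (int i = 0; i < n; ++i) {
            a[i] = 2.0f * c[i];
            b[i] = a[i] + c[i];
        }
    }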
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
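One source of such differences is that floating-point addition is not associative, so the reassociation these optimizations may perform can change rounding; a small demonstration (illustration only):

    #include <stdio.h>

    int main(void) {
        double a = 1e16, b = -1e16, c = 1.0;
        printf("%g\n", (a + b) + c);  /* prints 1 */
        printf("%g\n", a + (b + c));  /* prints 0: b + c rounds back to -1e16 */
        return 0;
    }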
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto, as whole-program analysis is required to perform this optimization. You can choose the level of aggressiveness with which this optimization is applied to your application, with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16 bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here the integer fields of the structures are always compressed from 64 bits to 32 bits without any safety guarantee.
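A rough, hand-written picture of what structure peeling and field compression aim for (hypothetical types for illustration; the real transformation is done by the compiler under -flto):

    /* Original layout: the cold payload is dragged through the cache even
       when only 'key' and 'next' are touched. */
    struct node {
        long         key;          /* hot  */
        struct node *next;         /* hot  */
        char         payload[64];  /* cold */
    };

    /* Peeled layout: hot fields packed together, cold fields in a parallel
       array.  Replacing the 64-bit pointer with a 32-bit (or 16-bit) index
       is the kind of compression the higher layout levels apply. */
    struct node_hot  { long key; unsigned int next_index; };
    struct node_cold { char payload[64]; };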
Sets the limit at which loops will be unrolled. For example, if the unroll threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
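For reference, unrolling replicates the loop body to cut branch and induction-variable overhead; a hand-unrolled sketch (factor 4, assuming n is a multiple of 4 for brevity):

    void saxpy_unrolled(float *restrict y, const float *restrict x,
                        float a, int n) {
        /* assumes n % 4 == 0; the compiler would also emit a remainder loop */
        for (int i = 0; i < n; i += 4) {
            y[i]     += a * x[i];
            y[i + 1] += a * x[i + 1];
            y[i + 2] += a * x[i + 2];
            y[i + 3] += a * x[i + 3];
        }
    }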
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables an optimization that generates and calls specialized function versions when the loops inside the function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
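Conceptually, the pass hoists a computation that appears on both sides of a branch so it is evaluated once; a hand-written analogy:

    /* Before: x * y is computed in both arms of the branch. */
    double before(double x, double y, int flag) {
        return flag ? (x * y + 1.0) : (x * y - 1.0);
    }

    /* After hoisting: the common computation is done once, ahead of the branch. */
    double after(double x, double y, int flag) {
        double t = x * y;
        return flag ? (t + 1.0) : (t - 1.0);
    }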
This option enables an optimization that performs SLP vectorization across basic blocks. The standard SLP vectorizer vectorizes instructions within a basic block; the global SLP vectorizer analyzes and vectorizes instructions across basic blocks.
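The kind of straight-line code SLP vectorization targets looks like the following sketch; whether the global variant also packs operations that span a branch depends on its analysis:

    void add4(float *restrict d, const float *restrict a,
              const float *restrict b) {
        /* Four independent, isomorphic scalar adds: a classic candidate
           for being packed into a single vector add. */
        d[0] = a[0] + b[0];
        d[1] = a[1] + b[1];
        d[2] = a[2] + b[2];
        d[3] = a[3] + b[3];
    }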
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
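For context, full unswitching hoists a loop-invariant condition out of the loop and duplicates the loop; partial unswitching (this option) leaves part of the branching inside. A hedged sketch of the full form:

    /* Before: 'use_bias' never changes inside the loop. */
    void before(float *y, const float *x, float bias, int use_bias, int n) {
        for (int i = 0; i < n; ++i)
            y[i] = use_bias ? x[i] + bias : x[i];
    }

    /* After unswitching: two specialized loops, no branch in the body. */
    void after(float *y, const float *x, float bias, int use_bias, int n) {
        if (use_bias)
            for (int i = 0; i < n; ++i) y[i] = x[i] + bias;
        else
            for (int i = 0; i < n; ++i) y[i] = x[i];
    }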
Sets the limit at which loops will be unrolled. For example, if the unroll threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
This option enables an optimization that generates and calls specialized function versions when the loops inside the function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the limit at which loops will be unswitched. For example, if the unswitch threshold is set to 100, then only loops with 100 or fewer instructions will be unswitched.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat. Code bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
Run cleanup optimization passes after vectorization.
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The standard SLP vectorizer vectorizes instructions within a basic block; the global SLP vectorizer analyzes and vectorizes instructions across basic blocks.
Converts a call to the floating-point-exponent version of pow to its integer-exponent version if the floating-point exponent can be converted to an integer. This option is set to true by default.
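For example, a call whose exponent is a whole-number constant can be rewritten; the sketch below shows the intent (hand-written, with the integer-exponent form expressed as plain multiplications):

    #include <math.h>

    /* Before: pow is called with a floating-point exponent. */
    double cube_before(double x) { return pow(x, 3.0); }

    /* After the conversion the option describes: an integer-exponent
       form, here expanded into multiplications. */
    double cube_after(double x) { return x * x * x; }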
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
This optimization enables generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
flang option to preserve array access information for linearized arrays.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Instructs the compiler to conform to the IEEE-754 specification. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Enables fusion of adjacent tiled loops as a part of loop tiling transformation. This option is set to false by default.
This option instructs the compiler to unroll loops wherever possible.
Run cleanup optimization passes after vectorization.
Enables loop strength reduction for nested loop structures. By default, the compiler will do loop strength reduction only for the innermost loop.
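Strength reduction replaces a multiplication derived from an induction variable with an addition carried across iterations; a hand-written sketch for a loop nest:

    /* Before: the index expression i * cols + j needs a multiply. */
    void zero_before(float *a, int rows, int cols) {
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < cols; ++j)
                a[i * cols + j] = 0.0f;
    }

    /* After reducing the outer loop as well: the multiply becomes an
       addition ('base += cols') carried between outer iterations. */
    void zero_after(float *a, int rows, int cols) {
        int base = 0;
        for (int i = 0; i < rows; ++i) {
            for (int j = 0; j < cols; ++j)
                a[base + j] = 0.0f;
            base += cols;
        }
    }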
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The standard SLP vectorizer vectorizes instructions within a basic block; the global SLP vectorizer analyzes and vectorizes instructions across basic blocks.
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
This option enables the classical loop interchange or loop permutation transformation on a loop nest.
This transformation reorders the loops in a multi-dimensional loop nest, checking for various legality criteria in the process. The goal is to find a reordering of the loops such that the number of loop invariant expressions that may be hoisted out from an inner loop to a loop at a higher level is maximized.
This transformation is off by default and may be enabled by using this option.
When loop interchange is enabled, this option enables the heuristic which determines the best reordering of the loops in a multi-dimensional loop nest such that the number of invariant expressions that may be hoisted out from an inner level loop to an outer one is maximized. This option is off by default.
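A source-level sketch of the interchange itself; here it both exposes the inner-loop-invariant load of b[i] that the heuristic tries to maximize and makes the array walk contiguous:

    /* Before: b[i] is reloaded in the inner loop and a is walked
       column by column. */
    void before(double a[100][100], const double *b) {
        for (int j = 0; j < 100; ++j)
            for (int i = 0; i < 100; ++i)
                a[i][j] += b[i];
    }

    /* After interchange: b[i] (and the row a[i]) are invariant in the
       inner loop, and memory is accessed contiguously. */
    void after(double a[100][100], const double *b) {
        for (int i = 0; i < 100; ++i)
            for (int j = 0; j < 100; ++j)
                a[i][j] += b[i];
    }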
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
This optimization enables generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto, as whole-program analysis is required to perform this optimization. You can choose the level of aggressiveness with which this optimization is applied to your application, with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16 bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here the integer fields of the structures are always compressed from 64 bits to 32 bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if the unroll threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables an optimization that generates and calls specialized function versions when the loops inside the function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The standard SLP vectorizer vectorizes instructions within a basic block; the global SLP vectorizer analyzes and vectorizes instructions across basic blocks.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
flang option to preserve array access information for linearized arrays.
Instructs the compiler to conform to the IEEE-754 specification. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Enables fusion of adjacent tiled loops as a part of loop tiling transformation. This option is set to false by default.
This option instructs the compiler to unroll loops wherever possible.
Run cleanup optimization passes after vectorization.
Enables loop strength reduction for nested loop structures. By default, the compiler will do loop strength reduction only for the innermost loop.
This option enables the classical loop interchange or loop permutation transformation on a loop nest.
This transformation reorders the loops in a multi-dimensional loop nest, checking for various legality criteria in the process. The goal is to find a reordering of the loops such that the number of loop invariant expressions that may be hoisted out from an inner loop to a loop at a higher level is maximized.
This transformation is off by default and may be enabled by using this option.
When loop interchange is enabled, this option enables the heuristic which determines the best reordering of the loops in a multi-dimensional loop nest such that the number of invariant expressions that may be hoisted out from an inner level loop to an outer one is maximized. This option is off by default.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto, as whole-program analysis is required to perform this optimization. You can choose the level of aggressiveness with which this optimization is applied to your application, with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16 bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here the integer fields of the structures are always compressed from 64 bits to 32 bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if the unroll threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables an optimization that generates and calls specialized function versions when the loops inside the function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The standard SLP vectorizer vectorizes instructions within a basic block; the global SLP vectorizer analyzes and vectorizes instructions across basic blocks.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
Sets the limit at which loops will be unrolled. For example, if the unroll threshold is set to 100, then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Sets the limit at which loops will be unswitched. For example, if the unswitch threshold is set to 100, then only loops with 100 or fewer instructions will be unswitched.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat. Code bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
Run cleanup optimization passes after vectorization.
Converts a call to the floating-point-exponent version of pow to its integer-exponent version if the floating-point exponent can be converted to an integer. This option is set to true by default.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits, and generates code for AMD's x86-64 architecture. The compiler targets the 64-bit ABI (AMD64, Intel 64, x86-64). The default on a 32-bit host is the 32-bit ABI. The default on a 64-bit host is the 64-bit ABI if the specified target platform is 64-bit; otherwise, the default is the 32-bit ABI.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., do not add nops that are executed). The value is given in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates array computations based on their usage: computations on unused array elements and computations on zero-valued array elements are eliminated by this optimization. This is effective only under -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
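For example, a hypothetical single-file build targeting AMD Zen could look like:
clang -O3 -march=znver1 foo.c -o foo
Binaries built this way may use instructions that are not available on pre-Zen processors.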
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto as whole program analysis is required to perform this optimization. You can choose different levels of aggressiveness with which this optimization can be applied to your application with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16-bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here, the integer fields of the structures are always compressed from 64-bits to 32-bits without any safety guarantee.
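A sketch of how the structure-layout optimization might be requested (this assumes the option is spelled -fstruct-layout=<n> on the driver command line; the file names and the level 5 are placeholders):
clang -O3 -flto -fstruct-layout=5 -c part1.c -o part1.o
clang -O3 -flto -fstruct-layout=5 -c part2.c -o part2.o
clang -O3 -flto -fstruct-layout=5 part1.o part2.o -o app
As noted above, the optimization only takes effect under -flto, since it needs whole program analysis.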
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
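A hedged sketch of tuning both thresholds, assuming they are exposed through LLVM's internal -unroll-threshold and -inline-threshold options passed via -mllvm (the values 150 and 1000 are arbitrary examples, not recommendations):
clang -O3 -mllvm -unroll-threshold=150 -mllvm -inline-threshold=1000 foo.c -o foo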
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
This option enables the classical loop fusion transformation where the bodies of multiple loop nests are fused into one loop nest. The transformation checks various legality criteria involving the bounds of the loop nests involved, the control flow nesting of the loop nests and so on.
Loop fusion enables reuse of memory access operations across the loop nests and is also beneficial for cache performance. As part of the profitability check for this transformation it uses code size thresholds which control the size of the fused loop body created.
This transformation is off by default and may be enabled by using this option.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Sets the limit at which loops will be unswitched. For example, if unswitch threshold is set to 100 then only loops with 100 or fewer instructions will be unswitched.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat; the bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
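Putting these together, a hypothetical invocation that enables the aggressive heuristic and relaxes both counts (the values are examples only) could be:
clang -O3 -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=2 -mllvm -unswitch-identical-branches-max-count=8 foo.c -o foo
Lowering the min count below its default of 3 and raising the max count above its default of 6 both allow more loops to be unswitched.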
Run cleanup optimization passes after vectorization.
Converts calls to the floating-point-exponent version of pow to its integer-exponent version when the floating-point exponent can be converted to an integer. This option is set to true by default.
flang option to preserve array access information for linearized arrays.
Instructs the compiler to conform to the IEEE 754 specification. The compiler performs floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Enables fusion of adjacent tiled loops as a part of loop tiling transformation. This option is set to false by default.
This option instructs the compiler to unroll loops wherever possible.
Enables loop strength reduction for nested loop structures. By default, the compiler will do loop strength reduction only for the innermost loop.
This option enables the classical loop interchange or loop permutation transformation on a loop nest.
This transformation reorders the loops in a multi-dimensional loop nest, checking for various legality criteria in the process. The goal is to find a reordering of the loops such that the number of loop invariant expressions that may be hoisted out from an inner loop to a loop at a higher level is maximized.
This transformation is off by default and may be enabled by using this option.
When loop interchange is enabled, this option enables the heuristic which determines the best reordering of the loops in a multi-dimensional loop nest such that the number of invariant expressions that may be hoisted out from an inner level loop to an outer one is maximized. This option is off by default.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto as whole program analysis is required to perform this optimization. You can choose different levels of aggressiveness with which this optimization can be applied to your application with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16-bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here, the integer fields of the structures are always compressed from 64-bits to 32-bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat; the bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
This optimization enables the generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
This optimization enables the generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Instructs the compiler to conform to the IEEE 754 specification. The compiler performs floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
This optimization enables the generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
flang option to preserve array access information for linearized arrays.
Enables fusion of adjacent tiled loops as a part of loop tiling transformation. This option is set to false by default.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
This optimization enables the generation of prefetch instructions for tightly coupled loops.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto as whole program analysis is required to perform this optimization. You can choose different levels of aggressiveness with which this optimization can be applied to your application with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16-bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here, the integer fields of the structures are always compressed from 64-bits to 32-bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto as whole program analysis is required to perform this optimization. You can choose different levels of aggressiveness with which this optimization can be applied to your application with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16-bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here, the integer fields of the structures are always compressed from 64-bits to 32-bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat; the bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Generate code for a 64-bit environment. The 64-bit environment sets int to 32 bits and long and pointer to 64 bits and generates code for AMD's x86-64 architecture. The compiler generates AMD64, INTEL64, x86-64 64-bit ABI. The default on a 32-bit host is 32-bit ABI. The default on a 64-bit host is 64-bit ABI if the target platform specified is 64-bit, otherwise the default is 32-bit.
Selects the C++ language dialect.
This option disables generation of the adx instruction.
This option disables the generation of SSE4a instructions.
This option controls whether AOCC emits (true) or does not emit (false) a vzeroupper instruction before a transfer of control flow. Not emitting the vzeroupper instruction can help minimize the AVX to SSE transition penalty.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Force the alignment of all blocks that have no fall-through predecessors (i.e., don't add nops that are executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
Use the given vector functions library.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Analyzes the whole program to determine if the structures in the code can be peeled and if pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. This, in turn, is expected to improve the scalability of programs executed on multiple cores.
This is effective only under -flto as whole program analysis is required to perform this optimization. You can choose different levels of aggressiveness with which this optimization can be applied to your application with 1 being the least aggressive and 7 being the most aggressive level.
Possible values:
Note:
fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of safe compression of integer fields in structures. Going from fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers can be compressed to 16-bits.
fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added feature of compression of integer fields in structures. These are similar to fstruct-layout=4 and fstruct-layout=5, but here, the integer fields of the structures are always compressed from 64-bits to 32-bits without any safety guarantee.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the compiler's inlining threshold level to the value passed as the argument. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
Enables estimation of the virtual register pressure before performing loop invariant code motion. This estimation is used to decide the invariants that will be hoisted during loop invariant code motion.
This option eliminates the array computations based on their usage. The computations on unused array elements and computations on zero valued array elements are eliminated with this optimization. -flto as whole program analysis is required to perform this optimization.
Possible values:
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
Sets the limit at which loops will be unswitched. For example, if unswitch threshold is set to 100 then only loops with 100 or fewer instructions will be unswitched.
Sets the compiler's inlining heuristics to an aggressive level by increasing the inline thresholds.
Run the loop rerolling pass.
This option enables an aggressive loop-unswitching heuristic (including -enable-partial-unswitch) based on the usage of the branch conditional values. Loop unswitching leads to code bloat; the bloat can be minimized if the hoisted condition is executed more often. This heuristic prioritizes the conditions based on the number of times they are used within the loop. The heuristic can be controlled with the following options:
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at least <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 3.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-min-count=<n> where n is a positive integer and lower value of <n> facilitates more unswitching.
Enables unswitching of a loop with respect to a branch conditional value (B), where B appears in at most <n> compares in the loop. This option is enabled with -aggressive-loop-unswitch. Default value is 6.
Usage: -mllvm -aggressive-loop-unswitch -mllvm -unswitch-identical-branches-max-count=<n> where n is a positive integer and higher value of <n> facilitates more unswitching.
Note: These options may facilitate more unswitching in some of the workloads. Since loop-unswitching inherently leads to code bloat, facilitating more unswitching may significantly increase the code size and hence may also lead to longer compilation times.
Run cleanup optimization passes after vectorization.
Converts calls to the floating-point-exponent version of pow to its integer-exponent version when the floating-point exponent can be converted to an integer. This option is set to true by default.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
Do not warn about unused command line arguments.
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Somewhere between -O0 and -O2.
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
Using numactl to bind processes and memory to cores
For multi-copy runs or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another, which can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').
Note that some older versions of numactl incorrectly interpret application arguments as their own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, we put the command to be run in a shell script and then run the shell script using numactl. For example:
"echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
Transparent Huge Pages (THP)
THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. It is designed to hide much of the complexity in using huge pages from system administrators and developers. Huge pages increase the memory page size from 4 kilobytes to 2 megabytes. This provides significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is so badly fragmented that huge pages cannot be allocated, the kernel will assign smaller 4 KB pages instead. Most recent Linux OS releases have THP enabled by default.
THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled.
Possible values:
always - use THP for any anonymous memory region where it is possible
madvise - use THP only for memory regions that explicitly request huge pages via madvise()
never - do not use THP
The SPEC CPU benchmark codes themselves never explicitly request huge pages, as the mechanism to do that is OS-specific and can change over time. Libraries such as jemalloc which are used by the benchmarks may explicitly request huge pages, and use of such libraries can make the "madvise" setting relevant and useful.
When no huge pages are immediately available and one is requested, how the system handles the request for THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag.
Possible values:
always - stall the requesting allocation while memory is reclaimed and compacted to make a huge page available
defer - satisfy the request with regular pages for now and let background kernel threads make huge pages available later
defer+madvise - behave like "always" for regions that requested huge pages via madvise() and like "defer" for all others
madvise - stall only for regions that requested huge pages via madvise()
never - never stall; fall back to regular pages when no huge page is immediately available
An application that "always" requests THP can often benefit from waiting for an allocation until those huge pages can be assembled.
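A minimal sketch of inspecting and changing these settings from a root shell; the values chosen here are examples only, not a recommendation:
   cat /sys/kernel/mm/transparent_hugepage/enabled        # show the current policy
   echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
   echo madvise > /sys/kernel/mm/transparent_hugepage/defrag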
For more information see the Linux transparent hugepage documentation.
ulimit -s <n>
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
ulimit -l <n>
Sets the maximum size of memory that may be locked into physical memory.
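For example, a pre-run shell might raise both limits like this (the values are illustrative; "unlimited" requires appropriate privileges or limits configuration):
   ulimit -s unlimited     # stack size
   ulimit -l unlimited     # maximum locked memory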
powersave -f
(on SuSE)
Makes the powersave daemon set the CPUs to the highest supported frequency.
/etc/init.d/cpuspeed stop
(on Red Hat)
Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.
LD_LIBRARY_PATH
An environment variable that indicates the location in the filesystem of bundled libraries to use when running the benchmark binaries.
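For example, assuming a hypothetical library directory /path/to/bundled/lib64:
   export LD_LIBRARY_PATH=/path/to/bundled/lib64:$LD_LIBRARY_PATH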
kernel/numa_balancing
This OS setting controls automatic NUMA balancing on memory mapping and process placement. NUMA balancing incurs overhead for no benefit on workloads that are already bound to NUMA nodes.
Possible settings:
0 - disable automatic NUMA balancing
1 - enable automatic NUMA balancing
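For example, to disable automatic NUMA balancing until the next reboot (requires root; either form has the same effect):
   sysctl -w kernel.numa_balancing=0
   echo 0 > /proc/sys/kernel/numa_balancing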
For more information see the numa_balancing entry in the Linux sysctl documentation.
kernel/randomize_va_space (ASLR)
This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used.
Possible settings:
0 - Turn process address space randomization off. This is the default for architectures that do not support ASLR, and when the kernel is booted with the "norandmaps" parameter.
1 - Randomize the positions of the stack, virtual dynamic shared object (VDSO) page, and shared memory regions. This is the default if the CONFIG_COMPAT_BRK option is enabled at kernel build time.
2 - Additionally randomize the base address of the data segment. This is the default if CONFIG_COMPAT_BRK is disabled.
Disabling ASLR can make process execution more deterministic and runtimes more consistent.
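For example, to disable ASLR until the next reboot (requires root):
   echo 0 > /proc/sys/kernel/randomize_va_space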
For more information see the randomize_va_space entry in the Linux sysctl documentation.
MALLOC_CONF
The jemalloc library has tunable parameters, many of which may be changed at run-time via several mechanisms, one of which is the MALLOC_CONF environment variable. Other methods, as well as the order in which they're referenced, are detailed in the jemalloc documentation's TUNING section.
The options that can be tuned at run-time are everything in the jemalloc documentation's MALLCTL NAMESPACE section that begins with "opt.".
The options that may be encountered in SPEC CPU 2017 results are detailed here:
retain:true - Causes unused virtual memory to be retained for later reuse rather than discarding it. This is the default for 64-bit Linux.
thp:never - Attempts to never utilize huge pages by using MADV_NOHUGEPAGE on all mappings. This option has no effect except when THP is set to "madvise".
"madvise".PGHPF_ZMEM
An environment variable used to initialize the allocated memory. Setting PGHPF_ZMEM to "Yes" has the effect of initializing all allocated memory to zero.
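For example:
   export PGHPF_ZMEM=Yes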
GOMP_CPU_AFFINITY
This environment variable is used to set the thread affinity for threads spawned by OpenMP.
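For example, to bind OpenMP threads to logical CPUs 0 through 63 in order (the CPU range is illustrative only):
   export GOMP_CPU_AFFINITY="0-63"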
OMP_DYNAMIC
This environment variable is defined as part of the OpenMP standard. Setting it to "false" prevents the OpenMP runtime from dynamically adjusting the number of threads to use for parallel execution.
For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
OMP_SCHEDULE
This environment variable is defined as part of the OpenMP standard. Setting it to "static" causes loop iterations to be assigned to threads in round-robin fashion in the order of the thread number.
For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
OMP_STACKSIZE
This environment variable is defined as part of the OpenMP standard and controls the size of the stack for threads created by OpenMP.
For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
OMP_THREAD_LIMIT
This environment variable is defined as part of the OpenMP standard and limits the maximum number of OpenMP threads that can be created.
For more information, see chapter 4 ("Environment Variables") in the OpenMP 4.5 Specification.
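An illustrative combination of the OpenMP environment variables described above; the specific values shown are examples, not the tested configuration:
   export OMP_DYNAMIC=false      # do not adjust the thread count dynamically
   export OMP_SCHEDULE=static    # round-robin assignment of loop iterations
   export OMP_STACKSIZE=128M     # per-thread stack size
   export OMP_THREAD_LIMIT=64    # upper bound on OpenMP threads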
LIBOMP_NUM_HIDDEN_HELPER_THREADS
The OpenMP target nowait construct is supported via the hidden helper task, which is a task not bound to any parallel region. A hidden helper team with a number of threads is created when the first hidden helper task is encountered. The number of threads can be configured via the environment variable LIBOMP_NUM_HIDDEN_HELPER_THREADS; the default is 8. If LIBOMP_NUM_HIDDEN_HELPER_THREADS is 0, the hidden helper task is disabled and support falls back to a regular OpenMP task. The hidden helper task can also be disabled by setting the environment variable LIBOMP_USE_HIDDEN_HELPER_TASK=OFF.
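For example, either of the following disables the hidden helper task, as described above:
   export LIBOMP_NUM_HIDDEN_HELPER_THREADS=0
   export LIBOMP_USE_HIDDEN_HELPER_TASK=OFF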
Model | Minimum TDP (W) | Maximum TDP (W) |
---|---|---|
EPYC 7763 | 225 | 280 |
EPYC 7713 | 225 | 240 |
EPYC 7713P | 225 | 240 |
EPYC 7663 | 225 | 240 |
EPYC 7643 | 225 | 240 |
EPYC 75F3 | 225 | 280 |
EPYC 7543 | 225 | 240 |
EPYC 7543P | 225 | 240 |
EPYC 7513 | 165 | 200 |
EPYC 7453 | 225 | 240 |
EPYC 74F3 | 225 | 240 |
EPYC 7443 | 165 | 200 |
EPYC 7443P | 165 | 200 |
EPYC 7413 | 165 | 200 |
EPYC 73F3 | 225 | 240 |
EPYC 7343 | 165 | 200 |
EPYC 7313 | 155 | 180 |
EPYC 7313P | 155 | 180 |
EPYC 72F3 | 165 | 200 |
Flag description origin markings:
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2023 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.1.9.
Report generated on 2023-10-11 12:33:42 by SPEC CPU2017 flags formatter v5178.