clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Clang will stop before doing a full link.
flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on which high-level mode setting is passed, Flang will stop before doing a full link.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
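For reference, here is a minimal C sketch (not taken from any benchmark source) of what the LP64 data model described above implies about type sizes:
    /* Hypothetical check of the LP64 data model: int is 32 bits, while
       long and pointers are 64 bits, so the sizes print as 4, 8, 8. */
    #include <stdio.h>

    int main(void) {
        printf("int:     %zu bytes\n", sizeof(int));    /* expected: 4 */
        printf("long:    %zu bytes\n", sizeof(long));   /* expected: 8 */
        printf("pointer: %zu bytes\n", sizeof(void *)); /* expected: 8 */
        return 0;
    }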
This macro indicates that Fortran functions called from C should have their names lower-cased.
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Fortran-to-C symbol naming: C symbol names are lower case with one underscore (_symbol).
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
For netcdf: if this macro is defined, a Fortran symbol name such as ABC is used as abc_.
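To illustrate the naming convention these macros describe, here is a hedged C-side sketch (the routine name add_one is invented for illustration): a Fortran SUBROUTINE ADD_ONE(N) is referenced from C as the lower-cased name with a trailing underscore, with arguments passed by reference.
    /* Illustrative only: C declaration matching Fortran's SUBROUTINE ADD_ONE(N). */
    extern void add_one_(int *n);   /* lower-cased name plus trailing underscore */

    int call_from_c(int value) {
        add_one_(&value);           /* Fortran dummy arguments are passed by reference */
        return value;
    }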
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This macro indicates that Fortran functions called from C should have their names lower-cased.
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Fortran-to-C symbol naming: C symbol names are lower case with one underscore (_symbol).
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
For netcdf: if this macro is defined, a Fortran symbol name such as ABC is used as abc_.
The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format. This option is necessary for those datasets to be read in correctly.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
This option is used to indicate that the host system's integers are 32-bits wide, and longs and pointers are 64-bits wide. Not all benchmarks recognize this macro, but the preferred practice for data model selection applies the flags to all benchmarks; this flag description is a placeholder for those benchmarks that do not recognize this macro.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
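As a rough illustration of the kind of code this optimization targets (the function names here are invented, not from the benchmarks): a call that passes a compile-time constant lets the compiler emit and call a specialized clone of the callee, which is then easier to inline.
    /* Illustrative only: "scale" called with the constant 2.0 is a candidate
       for a specialized version in which f is known to be 2.0. */
    void scale(double *v, int n, double f) {
        for (int i = 0; i < n; i++)
            v[i] *= f;
    }

    void caller(double *v, int n) {
        scale(v, n, 2.0);   /* constant argument enables specialization */
    }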
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
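A minimal sketch of the array-of-structures pattern that the structure-layout transformation above targets (the type and field names are hypothetical): hot and cold fields share each cache line, so a loop over one hot field drags cold bytes into the cache.
    /* Illustrative only: array-of-structures layout with mixed hot/cold fields. */
    struct particle {
        double x, y, z;     /* frequently accessed ("hot") fields */
        char   name[64];    /* rarely accessed ("cold") field     */
    };

    double sum_x(const struct particle *p, long n) {
        double s = 0.0;
        for (long i = 0; i < n; i++)
            s += p[i].x;    /* strided accesses: each load pulls in cold bytes too */
        return s;
    }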
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
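For illustration only (not benchmark code), this is the kind of straight-line pattern an SLP vectorizer packs into vector instructions; the global variant described above can do the same even when such statements are spread across basic blocks.
    /* Illustrative only: four independent operations on adjacent elements
       can be packed into a single vector multiply-add. */
    void axpy4(double *restrict y, const double *restrict x, double a) {
        y[0] += a * x[0];
        y[1] += a * x[1];
        y[2] += a * x[2];
        y[3] += a * x[3];
    }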
This option instructs the compiler to use a vector math library.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
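A hedged sketch of how such a macro is commonly used in benchmark sources (assuming the macro is spelled SPEC_OPENMP, which is not stated above): the OpenMP pragma is seen by the compiler only when the code is built for parallel operation, e.g. with -fopenmp.
    /* Illustrative only: the pragma is compiled in only when the macro is defined. */
    #include <math.h>

    void apply_sqrt(double *a, long n) {
    #ifdef SPEC_OPENMP
    #pragma omp parallel for
    #endif
        for (long i = 0; i < n; i++)
            a[i] = sqrt(a[i]);
    }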
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
This option instructs the compiler to use a vector math library.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option instructs the compiler to use a vector math library.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Selects the C++ language dialect.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Disable generation of FMA instructions when there is a chain of FMA instructions and the output of one FMA instruction is used as the input to another FMA instruction.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may not conform to the IEEE-754 specifications. When this option is specified, the __STDC_IEC_559__ macro is ignored even if set by the system headers.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
This option instructs the compiler to use a vector math library.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the limit at which loops will be unswitched. For example, if the unswitch threshold is set to 100, then only loops with 100 or fewer instructions will be unswitched.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
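As a rough illustration of what unswitching (and its partial form) operates on, consider a loop-invariant test inside a loop (hypothetical code, not from the benchmarks): unswitching clones the loop into one version per branch and hoists the test outside, subject to the size threshold above.
    /* Illustrative only: the test on "flag" does not change inside the loop,
       so the loop can be unswitched into two specialized copies. */
    void update(double *a, long n, int flag) {
        for (long i = 0; i < n; i++) {
            if (flag)
                a[i] *= 2.0;
            else
                a[i] += 1.0;
        }
    }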
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the linker to use the first definition encountered for a symbol, and ignore all others.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option disables the generation of SSE4a instructions.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
This option avoids runtime memory dependency checks to enable aggressive vectorization.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
This option instructs the compiler to use a vector math library.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option disables the generation of SSE4a instructions.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
This option avoids runtime memory dependency checks to enable aggressive vectorization.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Selects the C++ language dialect.
Generate output files in LLVM formats suitable for link time optimization. When used with -S this generates LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be passed to the linker depending on the stage selection options).
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This flag enables vectorization of loops with complex control flow that cannot be vectorized by the loop and SLP vectorizers.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
Enables all the optimizations from -O3 along with other aggressive optimizations that may violate strict compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed documentation of optimizations enabled under -Ofast.
Specify that Clang should generate code for a specific processor family member and later. For example, if you specify -march=znver1, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but which may not exist on earlier products.
This option disables the generation of SSE4a instructions.
This option enables transformation of the layout of arrays of structure types and their fields to improve the cache locality. Aggressive analysis and transformations are performed at higher levels. This option is effective only with -flto as whole program analysis is required to perform this optimization.
Possible values:
This option avoids runtime memory dependency checks to enable aggressive vectorization.
This option enables an optimization that generates and calls specialized function versions when they are called with constant arguments. This optimization helps in function inlining.
This option enables the GVN hoist pass, which is used to hoist computations from branches.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This option enables an optimization that transforms the data layout of a single dimensional array to provide better cache locality by analysing the access patterns.
This option instructs the compiler to use a vector math library.
This option eliminates array computations based on their usage. Computations on unused array elements and on zero-valued array elements are eliminated by this optimization. This option is effective only with -flto, as whole-program analysis is required to perform this optimization.
Possible values:
This option enables an optimization that performs SLP vectorization across basic blocks. The SLP vectorizer vectorizes instructions within basic blocks; the global SLP vectorizer analyzes instructions across basic blocks and vectorizes them.
Sets the compiler's inlining threshold level to the value passed. The inline threshold is used in the inliner heuristics to decide which functions should be inlined.
This option enables an optimization that generates and calls specialized function versions when the loops inside function are vectorizable and the arguments are not aliased with each other. This optimization helps in function inlining and vectorization.
Sets the limit at which loops will be unrolled. For example, if unroll threshold is set to 100 then only loops with 100 or fewer instructions will be unrolled.
This optimization does partial unswitching of loops where some part of the unswitched control flow remains in the loop.
Sets the limit at which loops will be unswitched. For example, if the unswitch threshold is set to 100, then only loops with 100 or fewer instructions will be unswitched.
Like -O2, except that it enables optimizations that take longer to perform or that may generate larger code (in an attempt to make the program run faster).
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
This option instructs the compiler to unroll loops wherever possible.
Allocate local variables on the stack, thus allowing recursion. SAVEd, data-initialized, or namelist members are always allocated statically, regardless of the setting of this switch.
Instructs the compiler to conform to the IEEE-754 specifications. The compiler will perform floating-point operations in strict conformance with the IEEE 754 standard. Some optimizations are disabled when this option is specified.
Do not allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs. This option instructs the compiler to follow the exact implementation of IEEE or ISO rules/specifications for math functions.
Definition of this macro indicates that compilation for parallel operation is enabled, and that any OpenMP directives or pragmas will be visible to the compiler. The behavior of this macro is overridden if -DSPEC_SUPPRESS_OPENMP also appears in the list of compilation flags.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Switch to enable OpenMP.
Enable handling of OpenMP directives and generate parallel code. The OpenMP library to be linked can be specified through the -fopenmp=library option.
Instructs the compiler to link with the OpenMP runtime libraries.
This option instructs the linker to link the executable with the pthread library.
This option instructs the linker to link the executable with libdl, the interface to the dynamic loader.
Instructs the compiler to link with system vector math libraries.
Instructs the compiler to link with AMD-supported optimized math library.
Use the jemalloc library, which is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.
Instructs the compiler to link with flang Fortran runtime libraries.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
Do not warn about functions defined with a return type that defaults to "int" or which return something other than what they were declared to.
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Somewhere between -O0 and -O2.
If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.
Using numactl to bind processes and memory to cores
For multi-copy runs or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move your process from one core to another. This can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind processes. We have found the utility 'numactl' to be the best choice.
numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies which core(s) to bind the process to. "-l" instructs numactl to keep a process's memory on the local node, while "-m" specifies which node(s) to place a process's memory on. For full details on using numactl, please refer to your Linux documentation ('man numactl').
Note that some older versions of numactl incorrectly interpret application arguments as their own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, we put the command to be run in a shell script and then run the shell script using numactl. For example:
"echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"
Transparent Huge Pages (THP)
THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity in using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Most recent Linux OS releases have THP enabled by default.
Linux Huge Page settings
If you need finer control, you can manually set huge pages using the following steps:
mkdir /mnt/hugepages
mount -t hugetlbfs nodev /mnt/hugepages
Set vm/nr_hugepages=N in /etc/sysctl.conf, where N is the maximum number of pages the system may allocate.
Note that further information about huge pages may be found in the Linux kernel documentation file hugetlbpage.txt.
ulimit -s <n>
Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
ulimit -l <n>
Sets the maximum size of memory that may be locked into physical memory.
powersave -f (on SuSE)
Makes the powersave daemon set the CPUs to the highest supported frequency.
/etc/init.d/cpuspeed stop (on Red Hat)
Disables the CPU frequency scaling program in order to set the CPUs to the highest supported frequency.
LD_LIBRARY_PATH
An environment variable that indicates the location in the filesystem of bundled libraries to use when running the benchmark binaries.
kernel/randomize_va_space
This option can be used to select the type of process address space randomization that is used in the system, for architectures that support this feature.
0 - Turn process address space randomization off. This is the default for architectures that do not support this feature, and for kernels booted with the "norandmaps" parameter.
1 - Randomize the addresses of the mmap base, stack, and VDSO page. This is the default if the CONFIG_COMPAT_BRK option is enabled.
2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.
MALLOC_CONF
An environment variable set to tune the jemalloc allocation strategy during the execution of the binaries. This environment variable setting is not needed when building the binaries on the system under test.
OS Tuning
ulimit:
Used to set user limits of system-wide resources. Provides control over resources available to the shell and processes started by it. Some common ulimit commands may include:
Performance Governors (Linux):
In-kernel CPU frequency governors are pre-configured power schemes for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU utilization to allow for power savings while not sacrificing performance.
To set the governor, use the following command: "cpupower frequency-set -r -g {desired_governor}"
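For example, to select the performance governor on all CPUs (which governors are available depends on the cpufreq driver in use):
cpupower frequency-set -r -g performance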
Disabling Linux services:
Certain Linux services may be disabled to minimize tasks that may consume CPU cycles.
irqbalance:
Disabled through "service irqbalance stop". Depending on the workload involved, the irqbalance service reassigns various IRQ's to system CPUs. Though this service might help in some situations, disabling it can also help environments which need to minimize or eliminate latency to more quickly respond to events.
Tuning Kernel parameters:
The following Linux Kernel parameters were tuned to better optimize performance of some areas of the system:
tuned-adm:
The tuned-adm tool is a command-line interface for switching between the different tuning profiles provided by the tuned tuning daemon available in supported Linux distros. The default configuration file is located in /etc/tuned.conf and the supported profiles can be found in /etc/tune-profiles.
Some profiles that may be available by default include: default, desktop-powersave, server-powersave, laptop-ac-powersave, laptop-battery-powersave, spindown-disk, throughput-performance, latency-performance, enterprise-storage
To set a profile, one can issue the command "tuned-adm profile (profile_name)". Here are details about relevant profiles.
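For example, to switch to the throughput-oriented profile named above (assuming it is installed on the system):
tuned-adm profile throughput-performance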
Transparent Huge Pages (THP):
THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity in using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is so badly fragmented that huge pages cannot be allocated, the kernel will fall back to assigning smaller 4k pages instead. Most recent Linux OS releases have THP enabled by default.
Linux Huge Page settings:
If you need finer control, you can set huge pages manually using the steps shown in the "Linux Huge Page settings" section above.
Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
Environment Variables:
GOMP_CPU_AFFINITY:
Used to bind threads to specific CPUs. The variable should contain a space-separated or comma-separated list of CPUs. This list may contain different kinds of entries: either single CPU numbers in any order, a range of CPUs (M-N) or a range with some stride (M-N:S). CPU numbers are zero based. For example, GOMP_CPU_AFFINITY="0 3 1-2 4-15:2" will bind the initial thread to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12, and 14 respectively and then start assigning back from the beginning of the list. GOMP_CPU_AFFINITY=0 binds all threads to CPU 0. There is no libgomp library routine to determine whether a CPU affinity specification is in effect. As a workaround, language-specific library functions, e.g., getenv in C or GET_ENVIRONMENT_VARIABLE in Fortran, may be used to query the setting of the GOMP_CPU_AFFINITY environment variable. A defined CPU affinity on startup cannot be changed or disabled during the runtime of the application. If both GOMP_CPU_AFFINITY and OMP_PROC_BIND are set, OMP_PROC_BIND has a higher precedence. If neither has been set and OMP_PROC_BIND is unset, or when OMP_PROC_BIND is set to FALSE, the host system will handle the assignment of threads to CPUs.
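As a simple illustration, the following would bind OpenMP threads one-to-one to CPUs 0 through 63 on a hypothetical 64-CPU host; real bindings must reflect the actual core numbering of the system under test:
export GOMP_CPU_AFFINITY="0-63"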
OMP_DYNAMIC:
Dynamic adjustment of threads. Enable or disable the dynamic adjustment of the number of threads within a team. The value of this environment variable shall be TRUE or FALSE. If undefined, dynamic adjustment is disabled by default.
OMP_SCHEDULE:
How threads are scheduled. Allows the schedule type and chunk size to be specified. The value of the variable shall have the form: type[,chunk] where type is one of static, dynamic or guided. The optional chunk size shall be a positive integer. If undefined, dynamic scheduling and a chunk size of 1 is used.
OMP_THREAD_LIMIT:
Set the maximum number of threads. Specifies the number of threads to use for the whole program. The value of this variable shall be a positive integer. If undefined, the number of threads is not limited.
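For instance, a run script might set these variables explicitly before launching the binaries (the values shown are illustrative only):
export OMP_DYNAMIC=FALSE
export OMP_SCHEDULE="static,64"
export OMP_THREAD_LIMIT=64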
MALLOC_CONF:
This environment variable affects the execution of the allocation functions. If the environment variable MALLOC_CONF is set, the characters it contains will be interpreted as options.
Firmware Settings
One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so; and you can read below to find out more about what these settings mean.
AMD SMT Option (Default = Enabled):
This feature allows enabling or disabling of logical processor cores on processors supporting AMD SMT. When enabled, each physical processor core operates as two logical processor cores. When disabled, each physical core operates as only one logical processor core. Enabling this option can improve overall performance for applications that benefit from a higher processor core count.
Thermal Configuration (Default = Optimal Cooling):
This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:
Determinism Control (Default = Auto):
This option allows the user to choose between an Auto and Manual mode for Determinism Control. Values for this BIOS option can be:
Performance Determinism (Default = Performance Deterministic):
This option allows the user to configure the AMD processor Determinism setting for AGESA ("AMD Generic Encapsulated Software Architecture", a bootstrap protocol by which system devices on AMD64-architecture mainboards are initialized) control or BIOS control. Values for this BIOS option can be:
Package Power Limit Control Mode (Default = Auto):
This is a per Processor Power Limit value applicable for all populated processors in the system. This can be set to limit the processor power to a certain value. Values for this BIOS option can be:
Memory Patrol Scrubbing (Default = Enabled):
This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:
Processor Power and Utilization Monitoring (Default = Enabled):
This BIOS option allows the enabling/disabling of iLO Processor State Mode Switching and Insight Power Management Processor Utilization Monitoring.
When set to disabled, the system will also set the Power Regulator mode to Static High Performance mode and the HP Power Profile mode to Custom. This option may be useful in some environments that require absolute minimum latency.
Workload Profile (Default = General Power Efficient Compute):
This option allows a user to choose a workload profile that best fits the user's needs. The workload profiles control many power and performance settings that are relevant to general workload areas. Values for this BIOS option can be:
Minimum Processor Idle Power Core C-State (Default = C6 State):
This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle power state (C-state) that the operating system uses. The higher the C-state, the lower the power usage of that idle state (C6 is the lowest power idle state supported by the processor). Values for this setting can be:
Memory Patrol Scrubbing (Default = Enabled):
This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:
NUMA memory domains per socket (Default = Auto):
This option allows the user to divide the memory domains that each socket has into a certain number of NUMA memory domains for better memory bandwidth. Values for this BIOS setting can be:
Last-Level Cache (LLC) as NUMA Node (Default = Disabled):
When enabled, this option allows the user to divide the processor's cores into additional NUMA nodes based on the L3 cache. Enabling this feature can increase performance for workloads that are NUMA-aware and optimized. Values for this BIOS setting can be:
Last updated July 23, 2019.
Flag description origin markings:
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2017-2019 Standard Performance Evaluation Corporation
Tested with SPEC CPU2017 v1.0.5.
Report generated on 2019-11-13 10:03:33 by SPEC CPU2017 flags formatter v5178.