The OpenMPI C driver configured for use with the NVIDIA HPC C compiler (nvc).
The OpenMPI C++ driver configured for use with the NVIDIA HPC C++ compiler (nvc++).
The OpenMPI Fortran driver configured for use with the NVIDIA HPC Fortran compiler (nvfortran).
The OpenMPI C driver configured for use with the NVIDIA HPC C compiler (nvc).
The OpenMPI C++ driver configured for use with the NVIDIA HPC C++ compiler (nvc++).
The OpenMPI Fortran driver configured for use with the NVIDIA HPC Fortran compiler (nvfortran).
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
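A minimal sketch of what "residual iterations" means here, assuming a vector width of 4: the main loop processes full groups of 4 elements, and the leftover n mod 4 iterations are handled by a separate remainder loop that may use a different (scalar) numerical method, hence the possible difference in rounding. The function and the hand-written split are illustrative only, not taken from this report.

    /* Illustrative only: how a compiler might split a vectorized sum.
     * The main loop processes full vectors of 4 elements; the residual
     * loop handles the n % 4 leftover iterations with plain scalar
     * accumulation, i.e. a different association of the additions. */
    float vector_style_sum(const float *a, int n)
    {
        float partial[4] = {0.f, 0.f, 0.f, 0.f};
        int i;

        /* "vectorized" part: 4 independent partial sums per trip */
        for (i = 0; i + 4 <= n; i += 4) {
            partial[0] += a[i];
            partial[1] += a[i + 1];
            partial[2] += a[i + 2];
            partial[3] += a[i + 3];
        }
        float sum = (partial[0] + partial[1]) + (partial[2] + partial[3]);

        /* residual iterations: scalar accumulation */
        for (; i < n; ++i)
            sum += a[i];
        return sum;
    }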
Place automatic arrays on the stack.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
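A hedged sketch of how such a macro is typically used together with the OpenACC flag above: when the macro is defined, the code hands the device address of a buffer directly to MPI so the library can perform a device-to-device transfer; otherwise the buffer is staged through host memory. The macro name SPEC_ACCEL_AWARE_MPI, the routine name, and the exchange pattern are assumptions for illustration, not taken from this report.

    #include <mpi.h>

    /* Exchange a buffer with a neighbor rank. Assumes buf is already
     * present on the device (e.g. inside an enclosing acc data region).
     * SPEC_ACCEL_AWARE_MPI is an assumed name standing in for the
     * device-to-device-transfer macro described above. */
    void exchange(double *buf, int n, int peer, MPI_Comm comm)
    {
    #ifdef SPEC_ACCEL_AWARE_MPI
        /* MPI can read/write GPU memory: pass the device pointer. */
        #pragma acc host_data use_device(buf)
        MPI_Sendrecv_replace(buf, n, MPI_DOUBLE, peer, 0, peer, 0,
                             comm, MPI_STATUS_IGNORE);
    #else
        /* Fall back to staging through host memory. */
        #pragma acc update self(buf[0:n])
        MPI_Sendrecv_replace(buf, n, MPI_DOUBLE, peer, 0, peer, 0,
                             comm, MPI_STATUS_IGNORE);
        #pragma acc update device(buf[0:n])
    #endif
    }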
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
Place automatic arrays on the stack.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
Place automatic arrays on the stack.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the C/C++ compiler to override data dependencies between pointers of a given storage class.
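A small sketch of the pointer-aliasing question this flag answers. In the first function below the compiler cannot normally assume that dst and src do not overlap, which limits reordering and vectorization; an option like the one above asserts, on the programmer's word, that such pointers never alias. The second function shows the equivalent source-level promise using C99 restrict. The code is illustrative only.

    /* Without an aliasing guarantee, the compiler must allow for the
     * possibility that dst and src overlap, so it cannot freely
     * reorder or vectorize the loads and stores. */
    void scale_copy(float *dst, const float *src, float s, int n)
    {
        for (int i = 0; i < n; ++i)
            dst[i] = s * src[i];
    }

    /* Source-level equivalent of the same "no aliasing" promise: */
    void scale_copy_restrict(float *restrict dst,
                             const float *restrict src, float s, int n)
    {
        for (int i = 0; i < n; ++i)
            dst[i] = s * src[i];
    }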
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Allocate host data directly in CPU physical (pinned) memory rather than staging transfers through pinned memory buffers. Allocation cost may be higher, but data transfers using pinned memory are often faster. Useful for programs with few allocations but many data transfers between the host and device.
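A rough sketch of the access pattern this option targets, assuming an OpenACC code: one buffer allocated at start-up but copied between host and device every time step. Under the flag, the host allocation itself resides in pinned memory, so each transfer can DMA directly instead of staging through an intermediate pinned buffer. The loop structure and names are illustrative only.

    #include <stdlib.h>

    /* Illustrative pattern: one allocation, many host<->device transfers. */
    void time_loop(int n, int nsteps)
    {
        double *field = calloc(n, sizeof *field);

        #pragma acc enter data create(field[0:n])
        for (int step = 0; step < nsteps; ++step) {
            /* ... host-side work that modifies field ... */
            #pragma acc update device(field[0:n])   /* host -> device */

            #pragma acc parallel loop present(field[0:n])
            for (int i = 0; i < n; ++i)
                field[i] *= 0.5;

            #pragma acc update self(field[0:n])     /* device -> host */
        }
        #pragma acc exit data delete(field[0:n])
        free(field);
    }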
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Statically link with the NVIDIA runtime libraries. System libraries may still be dynamically linked.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
Place automatic arrays on the stack.
Statically link with the NVIDIA runtime libraries. System libraries may still be dynamically linked.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Definition of this macro indicates that the MPI implementation supports accelerator device-to-device transfers. Used in conjunction with OpenACC or OpenMP target offload.
Chooses generally optimal flags for the target platform.
Enable OpenACC directives targeting NVIDIA GPUs.
Instructs the compiler to use relaxed precision in the calculation of some intrinsic functions. Can result in improved performance at the expense of numerical accuracy.
The numerical method used when computing the residual iterations of a vectorized (SIMD) loop may differ from the method used in the vectorized loop. Using this option may lead to faster but less numerically consistent results.
Place automatic arrays on the stack.
Statically link with the NVIDIA runtime libraries. System libraries may still be dynamically linked.
Disable warning messages.
Disable warning messages.
Disable warning messages.
Disable warning messages.
Disable warning messages.
Disable warning messages.
This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.
Level-two optimization (-O2 or -O) specifies global optimization. The -fast option generally will specify global optimization; however, the -fast switch will vary from release to release depending on a reasonable selection of switches for any one particular release. The -O or -O2 level performs all level-one local optimizations as well as global optimizations. Control flow analysis is applied and global registers are allocated for all functions and subroutines. Loop regions are given special consideration. This optimization level is a good choice when the program contains loops, the loops are short, and the structure of the code is regular.
The NVHPC compilers perform many different types of global optimizations, including but not limited to:
Level-one optimization specifies local optimization (-O1). The compiler performs scheduling of basic blocks as well as register allocation. This optimization level is a good choice when the code is very irregular; that is, it contains many short statements with IF statements and the program does not contain loops (DO or DO WHILE statements). For certain types of code, this optimization level may perform better than level-two (-O2), although this case rarely occurs.
The NVHPC compilers perform many different types of local optimizations, including but not limited to:
Instructs the compiler to completely unroll loops with a constant loop count of less than or equal to n, where n is a supplied constant value. If no constant value is given, then a default of 4 is used. A value of 1 inhibits the complete unrolling of loops with constant loop counts.
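A worked example of complete unrolling at the default threshold of 4 mentioned above: the trip-count-4 loop below can be replaced by straight-line code, removing the loop overhead entirely. The function names are illustrative only.

    /* Original: constant trip count of 4, eligible for complete unrolling. */
    void axpy4(float *y, const float *x, float a)
    {
        for (int i = 0; i < 4; ++i)
            y[i] += a * x[i];
    }

    /* What the fully unrolled version is equivalent to: */
    void axpy4_unrolled(float *y, const float *x, float a)
    {
        y[0] += a * x[0];
        y[1] += a * x[1];
        y[2] += a * x[2];
        y[3] += a * x[3];
    }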
Invokes the loop unroller.
Inline functions declared with the inline keyword.
Enables loop-carried redundancy elimination, an optimization that can reduce the number of arithmetic operations and memory references in loops.
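A small sketch of the redundancy this optimization removes: in the stencil loop below, the a[i+1] loaded in one iteration is the same value as a[i] in the next, so the compiler can keep it in a register across iterations instead of reloading it. The second function shows the effect of the transformation by hand; the names are illustrative only.

    /* Original: a[i+1] in iteration i is reloaded as a[i] in iteration i+1. */
    void smooth(float *b, const float *a, int n)
    {
        for (int i = 0; i < n - 1; ++i)
            b[i] = 0.5f * (a[i] + a[i + 1]);
    }

    /* Loop-carried redundancy eliminated: carry the value across
     * iterations in a register, halving the number of loads. */
    void smooth_lre(float *b, const float *a, int n)
    {
        if (n < 2) return;
        float next = a[0];
        for (int i = 0; i < n - 1; ++i) {
            float cur = next;
            next = a[i + 1];
            b[i] = 0.5f * (cur + next);
        }
    }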
Instructs the vectorizer to search for vectorizable loops and, where possible, make use of SSE, SSE2, and prefetch instructions.
Enable automatic vector pipelining.
Instructs the vectorizer to enable certain associativity conversions that can change the results of a computation due to roundoff error. A typical optimization is to change an arithmetic operation to an arithmetic operation that is mathematically equivalent but can be computationally different due to round-off error.
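A concrete instance of the kind of conversion described above: reassociating a floating-point sum is exact in real arithmetic but not bit-exact in floating point. With the deliberately chosen values below, (a + b) + c evaluates to 1 while a + (b + c) evaluates to 0, because b + c rounds back to -1.0e8f in single precision.

    #include <stdio.h>

    int main(void)
    {
        float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

        float left  = (a + b) + c;  /* = 0.0f + 1.0f = 1.0f              */
        float right = a + (b + c);  /* b + c rounds to -1.0e8f, so 0.0f  */

        /* Same mathematical expression, different rounded results. */
        printf("(a+b)+c = %g,  a+(b+c) = %g\n", left, right);
        return 0;
    }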
Instructs the vectorizer to generate alternate code (altcode) for vectorized loops when appropriate. For each vectorized loop the compiler decides whether to generate altcode and which of the available altcode variants to generate.
The compiler also determines suitable loop count and array alignment conditions for executing the altcode.
Align "unconstrained" data objects of size greater than or equal to 16 bytes on cache-line boundaries. An "unconstrained" object is a variable or array that is not a member of an aggregate structure or common block, is not allocatable, and is not an automatic array. On by default on 64-bit Linux systems.
Set SSE to flush-to-zero mode; if a floating-point underflow occurs, the value is set to zero.
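A small illustration of what flush-to-zero changes: the product below underflows into the subnormal range, which is preserved under the default gradual-underflow behavior but becomes exactly zero when SSE flush-to-zero mode is enabled. The specific constants are chosen only to force the underflow; the volatile qualifier keeps the multiply from being folded at compile time.

    #include <stdio.h>

    int main(void)
    {
        volatile float tiny = 2.0e-38f;  /* a small but still normal float   */
        float prod = tiny * 1.0e-3f;     /* underflows into the subnormals   */

        /* Without flush-to-zero: prod is a small non-zero subnormal.
         * With flush-to-zero enabled by the flag above: prod is 0.0f. */
        printf("prod = %g (zero? %s)\n", prod, prod == 0.0f ? "yes" : "no");
        return 0;
    }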
Instructs the compiler to use relaxed precision in the calculation of floating-point reciprocal square root (1/sqrt). Can result in improved performance at the expense of numerical accuracy.
Instructs the compiler to use relaxed precision in the calculation of floating-point square root. Can result in improved performance at the expense of numerical accuracy.
Instructs the compiler to use relaxed precision in the calculation of floating-point division. Can result in improved performance at the expense of numerical accuracy.
Instructs the compiler to allow floating-point expression reordering, including factoring. Can result in improved performance at the expense of numerical accuracy.
Flag description origin markings:
For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact info@spec.org
Copyright 2021-2023 Standard Performance Evaluation Corporation
Tested with SPEC hpc2021 v1.0.3.
Report generated on 2023-03-27 12:20:51 by SPEC hpc2021 flags formatter v1.0.3.