SPEC MPI2007 Flag Description for the Intel(R) C++ Compiler 14.0 for IA32 and Intel 64 applications and Intel(R) Fortran Compiler 14.0 for IA32 and Intel 64 applications

Copyright © 2013 Intel Corporation. All Rights Reserved.

Sections

Selecting one of the following will take you directly to that section:

Optimization Flags

Portability Flags

Compiler Flags

System and Other Tuning Information

MPI options and environment variables

Job startup command flags

-n <# of processes> or -np <# of processes>

Use this option to set the number of MPI processes to run with the current argument set.
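For example, a 256-rank launch might look like the following (the launcher name mpirun and the application name ./my_app are illustrative only):

  mpirun -np 256 ./my_app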

-perhost <# of processes> or -ppn <# of processes>

Use this option to place the indicated number of consecutive MPI processes on every host in the group, in round-robin fashion. The total number of processes to start is controlled by the -n option as usual.
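As a sketch, assuming four hosts and a hypothetical binary ./my_app, the following places 16 consecutive ranks on each host for a 64-rank run:

  mpirun -np 64 -ppn 16 ./my_app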

-genv <ENVVAR> <value>

Use this option to set the <ENVVAR> environment variable to the specified <value> for all MPI processes.
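For instance, the fabric selection variable described below could be passed to all ranks at launch time (launcher and binary names are illustrative):

  mpirun -np 64 -genv I_MPI_FABRICS shm:dapl ./my_app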

Environment variables

I_MPI_FABRICS=<fabric>|<intra-node fabric>:<inter-node fabric>

Select the particular network fabric to be used.

tmi - Tag Matching Interface (TMI)-capable network fabrics, such as Intel True Scale Fabric and Myrinet* (through TMI)

shm - Shared memory only

dapl - Direct Access Programming Library* (DAPL)-capable network fabrics, such as InfiniBand* and iWARP* (through DAPL)

ofi - OpenFabrics Interfaces* (OFI)-capable network fabrics, such as Intel True Scale Fabric and Ethernet (through OFI API)
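A typical setting, shown only as a sketch in bash-style syntax, selects shared memory within a node and DAPL between nodes:

  export I_MPI_FABRICS=shm:dapl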

I_MPI_COMPATIBILITY=<value>

Available values:

3 - The Intel MPI Library 3.x compatible mode

4 - The Intel MPI Library 4.0.x compatible mode

Set this environment variable to choose the Intel MPI Library runtime compatible mode. By default, the library complies with the MPI-3.1 standard.
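For example, to request the Intel MPI Library 4.0.x compatible mode (bash-style syntax, shown for illustration):

  export I_MPI_COMPATIBILITY=4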

I_MPI_HYDRA_PMI_CONNECT=<value>

Available values:

nocache - Do not cache PMI messages.

cache - Cache PMI messages on the local pmi_proxy management processes to minimize the number of PMI requests. Cached information is automatically propagated to child management processes.

lazy-cache - Cache mode with on-demand propagation. This is the default value.

alltoall - Information is automatically exchanged between all pmi_proxy processes before any get request can be done.

Define the processing method for PMI messages.
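As an illustration, PMI message caching could be disabled for a run as follows (bash-style syntax; the choice of value is only an example):

  export I_MPI_HYDRA_PMI_CONNECT=nocache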

FI_PSM2_INJECT_SIZE=<value>

Maximum message size, in bytes, allowed for fi_inject and fi_tinject calls (default: 64).
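For example, to raise the inject threshold above the default (the value 512 is purely illustrative):

  export FI_PSM2_INJECT_SIZE=512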

FI_PSM2_LAZY_CONN=0|1

Control when connections are established between the PSM2 endpoints on which OFI endpoints are built. When set to 0, connections are established when addresses are inserted into the address vector; this is the eager connection mode. When set to 1, connections are established when an address is first used in communication; this is the lazy connection mode.
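For example, to force the eager connection mode (bash-style syntax, shown only as a sketch):

  export FI_PSM2_LAZY_CONN=0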

I_MPI_PIN_DOMAIN=<mc-shape>

Control process pinning for MPI applications. This environment variable defines a number of non-overlapping subsets (domains) of logical processors on a node, together with a set of rules for how MPI processes are bound to these domains: one MPI process per domain. The core value means that each domain consists of the logical processors that share a particular core, so the number of domains on a node is equal to the number of cores on the node.
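For example, to pin one MPI process per core as described above (bash-style syntax; other domain shapes are possible):

  export I_MPI_PIN_DOMAIN=core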

I_MPI_PIN_ORDER=<value>

This environment variable defines the order in which MPI processes are mapped to the domains specified by the I_MPI_PIN_DOMAIN environment variable. The bunch value means that processes are mapped proportionally to sockets and the domains are ordered as closely as possible on the sockets.
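For example, to select the bunch ordering described above (bash-style syntax):

  export I_MPI_PIN_ORDER=bunch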

FI_PSM2_DELAY=<value>

Time (in seconds) to sleep before closing PSM endpoints. This is a workaround for a bug in some versions of the PSM library. The default setting is 1.
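For example, to extend the delay before endpoints are closed (the value 2 is illustrative only):

  export FI_PSM2_DELAY=2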