PGI Server Complete Version 7.2 and PathScale Compiler Suite Version 3.2 compilers: Optimization, Compiler, and Other flags for use by SPEC CPU2006

Sections

This document covers the following sections:


Optimization Flags


Portability Flags


Compiler Flags


Other Flags


System and Other Tuning Information

Platform settings

One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so, and you can read below to find out more about what these settings mean.

Power Regulator for ProLiant support (Default=HP Dynamic Power Savings Mode)

Values for this BIOS setting can be:

Node Interleaving Enabled (Default = Disabled):

This BIOS option enables or disables memory interleaving across CPU nodes. When disabled, memory is not interleaved across nodes: each CPU's locally attached memory forms its own NUMA node, and accesses to memory attached to another CPU incur additional latency.

submit = echo "$command" > run.sh ; $BIND bash run.sh

When running multiple copies of benchmarks, the SPEC config file feature submit is sometimes used to cause individual jobs to be bound to specific processors. This specific submit command is used for Linux. The elements of the command are:

echo "$command" > run.sh : SPEC substitutes the actual benchmark invocation for $command; writing it to a temporary shell script avoids passing the benchmark's own arguments directly to the binding utility (see the numactl notes below).

$BIND : the processor- and memory-binding command used for this copy, typically numactl with appropriate options.

bash run.sh : runs the script under the binding policy set up by $BIND.
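As an illustration only (the per-copy bind commands and core numbers below are hypothetical choices, not taken from this result), a config file fragment using this submit line might look like:

   # Hypothetical SPEC CPU2006 config fragment: bind each benchmark copy
   # to its own core with numactl, keeping memory on the local node.
   bind0  = numactl --physcpubind=0 -l
   bind1  = numactl --physcpubind=1 -l
   submit = echo "$command" > run.sh ; $BIND bash run.sh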

Linux Huge Page settings

In order to take full advantage of PGI's huge page runtime library, your system must be configured to use huge pages. It is safe to run binaries compiled with "-Msmartalloc=huge" on systems not configured to use huge pages; however, you will not benefit from the performance improvements huge pages offer. To configure your system for huge pages, consult your Linux documentation; an illustrative example follows the note below.

Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
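As a rough sketch only (the page count and mount point are arbitrary examples, and the exact procedure may differ between distributions and kernel versions), configuring a pool of huge pages on a typical Linux system as root looks like:

   # Reserve a pool of huge pages (128 is an arbitrary example; size the
   # pool to the memory the benchmark binaries will allocate)
   echo 128 > /proc/sys/vm/nr_hugepages
   # Mount the hugetlbfs filesystem if the runtime library requires it
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   # Verify the reservation
   grep -i huge /proc/meminfo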

PGI_HUGE_PAGES

The maximum number of huge pages an application is allowed to use can be set at run time via the environment variable PGI_HUGE_PAGES. If it is not set, the process may use all available huge pages when compiled with "-Msmartalloc=huge", or a maximum of n pages when the value of n is set via the compile-time flag "-Msmartalloc=huge:n".
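For example (the value 64 is an arbitrary choice), to cap an application at 64 huge pages from a bash shell:

   # Limit the huge page pool available to this run
   export PGI_HUGE_PAGES=64
   ./a.out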

Using numactl to bind processes and memory to cores

For multi-copy runs or single-copy runs on systems with multiple sockets, it is advantageous to bind a process to a particular core. Otherwise, the OS may arbitrarily move the process from one core to another, which can affect performance. To help, SPEC allows the use of a "submit" command where users can specify a utility with which to bind processes. We have found the utility 'numactl' to be the best choice.

numactl runs processes with a specific NUMA scheduling or memory placement policy. The policy is set for a command and inherited by all of its children. The numactl flag "--physcpubind" specifies the core(s) to which the process is bound. "-l" instructs numactl to keep the process's memory on the local node, while "-m" specifies the node(s) on which to place the process's memory. For full details on using numactl, please refer to your Linux documentation, 'man numactl'.
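For instance (the core number is an arbitrary example), binding a program to core 2 while keeping its memory allocations on the local node would look like:

   # Run a.out on core 2 with local memory allocation
   numactl --physcpubind=2 -l ./a.out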

Note that with some versions of numactl, particularly the version found on SLES 10, we have found that the utility incorrectly interprets application arguments as its own. For example, with the command "numactl --physcpubind=0 -l a.out -m a", numactl will interpret a.out's "-m" option as its own "-m" option. To work around this problem, put the command to be run in a shell script and then run the shell script under numactl. For example: "echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh"

ulimit -s <n>

Sets the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
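For example (whether to use a fixed size or unlimited depends on the benchmarks' stack requirements):

   ulimit -s unlimited
   # or a fixed size in kbytes, e.g. a 512 MB stack:
   ulimit -s 524288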

ulimit -l <n>

Sets the maximum amount of memory, in kbytes, that a process may lock into physical memory, or unlimited for no limit.
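For example (the fixed value is arbitrary):

   ulimit -l unlimited
   # or a fixed limit in kbytes, e.g. 4 GB:
   ulimit -l 4194304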

NCPUS

Sets the maximum number of OpenMP parallel threads that auto-parallelized (-Mconcur) applications may use.
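For example, to allow up to 8 threads (an arbitrary count) from a bash shell:

   export NCPUS=8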