Using SPEChpc™ 2021: The 'runhpc' Command


1. Basics

1.1 Defaults

1.2 Syntax

1.3 Benchmarks and suites

1.4 Run order

1.5 Disk Usage

1.5.1 Directory tree

1.5.2 Hey! Where did all my disk space go?

1.6 Multi-user support and limitations
expid (partial solution) output_root (recommended)

1.7 Actions: build buildsetup report run runsetup setup validate
cleaning: clean clobber onlyrun realclean scrub trash
(alternative: Clean by hand)

2. Commonly used options

--action --check_version --config --flagsurl --help --ignore_errors --iterations --loose --output_format --pmodel (New) --ranks --rawformat --rebuild --reportable --threads --tune

3. Less common options

--baseonly --basepeak --nobuild --comment --define --delay --deletework --expid --fake --fakereport --fakereportable --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --keeptmp --label (New) --log_timestamp --make_no_clobber --notes_wrap_column --output_root --preenv --reportonly --review --[no]setprocgroup --size --[no]table --test --undef --update --use_submit_for_compare --use_submit_for_speed --username --verbose --version

4 Removed/unsupported options

4.1 Not used with SPEChpc: --rate --speed --[no]feedback --parallel_setup --copies --[no]power --parallel_test --parallel_test_workloads

4.2 Feature removed: --machine --maxcompares

4.3 Unsupported: --make_bundle --unpack_bundle --use_bundle

5 Quick reference

1. Basics

What is runhpc?   runhpc is the primary tool for SPEChpc 2021. You use it from a Linux shell command line to build and run benchmarks, with commands such as these:

runhpc --config=eniac.cfg    --action=build 605.lbm_s
runhpc --config=colossus.cfg --threads=16   528.pot3d_t
runhpc --config=z3.cfg       --ranks=64    tiny 

The first command compiles the benchmark named 605.lbm_s. The second runs the benchmark 528.pot3d_t using 16 threads. The third runs all of the SPEChpc Tiny benchmarks using 64 ranks.

New with SPEChpc: The former runspec utility is renamed runhpc in SPEChpc.

Before reading this document: If you have not already done so, please install and test your SPEChpc 2021 distribution; this document assumes that you have already done both.

If you have not done the above, please see the brief instructions in the Quick Start guide, or the more detailed section "Testing Your Installation" in the installation guide for Linux.

1.1 Defaults

The SPEChpc default settings described in this document may be adjusted by config files.

The order of precedence for settings is:

Highest precedence: runhpc command
Middle: config file
Lowest: the tools as shipped by SPEC

Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so.

1.2 Syntax

The syntax for the runhpc command is:

runhpc [options] [list of benchmarks to run]

Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:

runhpc --config=dianne_july25a --debug=99 small
runhpc --config dianne_july25a --debug 99 small
runhpc --conf dianne_july25a   --deb 99   small
runhpc -c dianne_july25a       -v 99      small

1.3 Benchmarks and Suites

In the list of benchmarks to run, you can use one or more individual benchmarks, such as 605.lbm_s, or you can run entire suites, using one of the Short Tags below.

Short Tag  Suite / Contents / Metrics

Tiny       SPEChpc 2021 Tiny Workload: 9 benchmarks
           Metrics: SPEChpc 2021_tny_base, SPEChpc 2021_tny_peak
           The Tiny workloads use up to 60 GB of memory and are intended
           for use on a single node using between 1 and 256 ranks.

Small      SPEChpc 2021 Small Workload: 9 benchmarks
           Metrics: SPEChpc 2021_sml_base, SPEChpc 2021_sml_peak
           The Small workloads use up to 480 GB of memory and are intended
           for use on one or more nodes using between 64 and 1024 ranks.

Medium     SPEChpc 2021 Medium Workload: 6 benchmarks
           Metrics: SPEChpc 2021_med_base, SPEChpc 2021_med_peak
           The Medium workloads use up to 4 TB of memory and are intended
           for use on a mid-size cluster using between 256 and 4096 ranks.

Large      SPEChpc 2021 Large Workload: 6 benchmarks
           Metrics: SPEChpc 2021_lrg_base, SPEChpc 2021_lrg_peak
           The Large workloads use up to 14.5 TB of memory and are intended
           for use on a larger cluster using between 2048 and 32,768 ranks.

For every suite, more nodes and ranks may be used; however, higher rank counts may see lower scaling as MPI communication becomes more dominant. For every suite, higher scores indicate that less time is needed.

The "Short Tag" is the canonical abbreviation for use with runhpc, where context is defined by the tools. In a published document, context may not be clear.
To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above.

Benchmark names: Individual benchmarks can be named, numbered, or both.
Separate them with a space.
Names can be abbreviated, as long as you enter enough characters for uniqueness.
Each of the following commands does the same thing:

runhpc -c jason_july09d --noreportable 605.lbm_s 618.tealeaf_s 635.weather_s
runhpc -c jason_july09d --noreportable 605 618 635

To exclude a benchmark: Use a hat (^, also known as a caret, typically found as shift-6). Note that if the hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes.

bash-n.n.n$ runhpc --noreportable -c cathy_sep14c small ^619
pickyShell% runhpc --noreportable -c cathy_sep14c small '^605' 
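As a sanity check of the quoting advice above, here is a small shell-only sketch (it does not invoke runhpc): in bash and other POSIX shells, an unquoted ^ passes through unchanged, so the quoted and unquoted forms arrive at the command identically; it is shells such as csh/tcsh where quoting may be required.

```shell
# Shell-only demonstration of the quoting advice above (no runhpc needed).
# In bash/POSIX sh, '^' is not special, so both forms below reach the
# command unchanged; in shells like csh/tcsh, quoting may be required.
set -- small ^605 '^618'
echo "excluded: $2 $3"    # prints: excluded: ^605 ^618
```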

Turning off reportable: If your config file sets reportable=yes, then you cannot run a subset unless you turn that option off.

[/usr/cathy/spechpc2021]$ runhpc --config cathy_apr21b --noreportable small ^lbm 

1.4 Run order

A reportable run does these steps:

  1. Ref: Run the tiny, small, medium, or large workload.

  2. Report: Generate the reports.

Summarizing reportable run order: The order can be summarized as:

  setup for ref
          ref1, ref2 [, ref3] (*)

 (*) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one tuning is present: If you run both base and peak tuning, base is always run first.

 setup for ref
          base ref1, base ref2 [, base ref3] (*)
          peak ref1, peak ref2 [, peak ref3] (*)

 (*) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one suite is present: If you start a reportable using more than one suite, all the work is done for one suite before proceeding to the next.

1.5 Disk Usage

1.5.1 Directory tree

The structure of the SPEChpc 2021 directory tree is:

$SPEC - the root directory
   benchspec    - Some suite-wide files
   HPC          - The benchmarks
   bin          - Tools to run and report on the suite
   config       - Config files
   Docs         - HTML documentation
   result       - Log files and reports
   tmp          - Temporary files
   tools        - Sources for the SPEChpc tools

Within each of the individual benchmarks, the structure is:

nnn.benchmark - root for this benchmark
   build      - Benchmark binaries are built here
      all     - Data used by all runs (if needed by the benchmark)
      ref     - The timed data set
      test    - Data for a simple test that an executable is functional
      train   - Data for feedback-directed optimization (not used in SPEChpc. Contains same workload as test)
   Docs       - Documentation for this benchmark
   exe        - Compiled versions of the benchmark
   run        - Benchmarks are run here
   Spec       - SPEC metadata about the benchmark
   src        - The sources for the benchmark

Tiny, Medium, and Large benchmarks (5nn.benchmark_t, 7nn.benchmark_m, 8nn.benchmark_l) share content that is located under a corresponding Small benchmark (6nn.benchmark_s).

Look for the output of your runhpc command in the directory $SPEC/result. There, you will find log files and result files. More information about log files can be found in the Config Files document.

The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.

1.5.2 Hey! Where did all my disk space go?

When you find yourself wondering "Where did all my disk space go?", the answer is usually "The run directories." Most activity takes place in automatically created subdirectories of $SPEC/benchspec/HPC/*/run/. Other consumers of disk space underneath individual nnn.benchmark directories include the build/ and exe/ directories.

At the top of the directory tree, space is used by your config/ and result/ directories, and by temporary directories.


Usually, the largest amount of space is in the run directories.

If you use the config file label feature, then directories are named to try to make it easy for you to hunt them down. For example, suppose Holger has a config file that he is using to test some new memory optimizations using the Small workload with OpenMP. He has set


in his config file. In that case, the tools would create directories such as these:

$ pwd
$ ls -d */*Holger*

To get your disk space back, see the documentation of the various cleaning options, below.

1.6 Multi-user support

SPEChpc 2021 supports multiple users sharing an installation; however, you must choose carefully regarding file protections. This section describes the multi-user features and protection options.

Features that are always enabled:

Limitations: The default methods impose two key limitations, which will not be safe in some environments:

  1. The directory tree must be writable by each of the users, which means that they have to trust each other not to modify or delete each other's files.
  2. Directories such as result/ and nnn.benchmark/exe/ and nnn.benchmark/run/ are not segregated by user. Therefore you can have only one version of (for example) 605.lbm_s/exe/lbm_base.Ofast, and different users will have their result logs intermixed with each other's in the result/ directory.

Partial solution(?) expid+conventions:
You can deal with limitation #2 if users adopt certain habits. For example, Swen could name all his config files swen-something.cfg. He could use runhpc --expid=swen or the corresponding config file expid=swen to cause his results to be placed under $SPEC/result/swen and binaries under nnn.benchmark/exe/swen. Unfortunately, this alleged solution still requires that the tree be writeable by all users, and will not help Swen at all when Chris comes along and blithely does one of the alternate cleaning methods.

Solution(?) Give up:
You could just choose to spend the disk space to give each person their own tree. For SPEChpc 2021, this may increase disk space requirement by about 8 GB per user.

Recommended Solution: output_root. The recommended method uses 4 steps:

  (1) Protect most of the SPEC tree read-only:
        chmod -R ugo-w $SPEC
  (2) Allow shared access to the config directory:
        chmod 1777 $SPEC/config
        chmod u+w $SPEC/config/*cfg
  (3) Keep your own config files:
        cp config/assignment1.cfg config/sunita.cfg
  (4) Use the --output_root switch, or add an output_root to your config file:
        runhpc --output_root=~/hpc2021
        output_root = /home/${username}/hpc2021

More detail:

  1. Most of the SPEChpc 2021 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:

    chmod -R ugo-w $SPEC
  2. The one exception is the config directory, $SPEC/config/, which needs to be a read/write directory shared by all the users, and config files must be writeable. On most Linux systems, chmod 1777 is very useful: it lets anyone create files, which they own, control, and protect. (1777 is commonly used for /tmp for this very reason.)

    chmod 1777 $SPEC/config
    chmod u+w $SPEC/config/*cfg
  3. New in SPEChpc 2021: The SPEC installation tree may now be fully read-only, including the config directory. However, in this case, you must provide the full path to the config file to runhpc, since it can no longer be in the default location.
    Example: runhpc --config /path/to/config.cfg.

  4. Config files usually would not be shared between users. For example, students might create their own copies of a config file:

    Kevin enters:

    cd /cs403/hpc2021
    . ./shrc
    cd config
    cp assignment1.cfg kevin1.cfg
    chmod u+w kevin1.cfg
    runhpc --config=kevin1 --action=build 605.lbm_s 

    Bert enters:

    cd /cs403/hpc2021
    . ./shrc
    cd config
    cp assignment1.cfg bert1.cfg
    chmod u+w bert1.cfg
    runhpc --config=bert1 --action=build 605.lbm_s 
  5. Set output_root in the config files to change the destinations of the outputs. For example, if config files include (near the top):


    then these directories will be used for the above runhpc command:

    Kevin's directories:
    build: /home/kevin/hpc2021/benchspec/HPC/605.lbm_s/build/build_base_feb27a.0001
    Logs:  /home/kevin/hpc2021/result

    Bert's directories:
    build: /home/bert/hpc2021/benchspec/HPC/605.lbm_s/build/build_base_feb27a.0001
    Logs:  /home/bert/hpc2021/result
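The protection steps above can be rehearsed on a scratch directory before applying them to a real $SPEC tree. The sketch below uses a temporary directory and hypothetical file names (assignment1.cfg, sunita.cfg); only the chmod/cp pattern matches the steps above.

```shell
# Rehearsal of steps 1-3 on a throwaway tree (hypothetical file names;
# a real run would use $SPEC instead of the mktemp directory).
DEMO=$(mktemp -d)
mkdir -p "$DEMO/config"
: > "$DEMO/config/assignment1.cfg"      # a starter config file

chmod -R ugo-w "$DEMO"                  # step 1: protect the tree read-only
chmod 1777 "$DEMO/config"               # step 2: sticky, shared config dir
cp "$DEMO/config/assignment1.cfg" "$DEMO/config/sunita.cfg"   # step 3
chmod u+w "$DEMO/config/sunita.cfg"

ls -ld "$DEMO/config"                   # mode ends in 'rwt': sticky bit set
```

The sticky bit (the leading 1 in 1777) is what lets every user create files in config/ while preventing them from deleting or renaming files owned by others.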

Navigation: Unix users can easily navigate an output_root tree using the ogo utility.

1.7 Actions

Most runhpc commands perform an action on a set of benchmarks.

(Exceptions: runhpc --rawformat and runhpc --update.)

The default action is validate.
The actions are described in two tables below: first, actions that relate to building and running; and then actions regarding cleanup.

--action build Compile the benchmarks, using the config file specmake options.
--action buildsetup

Set up build directories for the benchmarks.
Copy the source files to the directory, and create the needed Makefiles.
Do not attempt to actually do the build.

This option may be useful when debugging a build: you can set up a directory and play with it as a private sandbox.

--action onlyrun

Run the benchmarks but do not verify that they got the correct answers.
You cannot use this option to report performance.

This option may be useful while applying SPEChpc 2021 for some other purpose, such as tracing instructions for a hardware simulator, or generating a system load while debugging an operating system feature.

--action report Synonym for --fakereport; see also --fakereportable.
--action run Synonym for --action validate.
--action runsetup

Set up the run directory (or directories).
If executables do not exist, build them.
Copy executables and data to the directory(ies).
Create the control file speccmds.cmd, but do not actually run any benchmarks.

This option may be useful when debugging a run.
See the runsetup sandbox example in the Utilities documentation.

--action setup Synonym for --action runsetup
--action validate Build (if needed), set up directories, run, check for correct answers, generate reports.
This is the default action.

Cleaning actions are listed in order from least thorough to most:

--action clean

Empty run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to Jeffrey and this command is entered:

runhpc --action clean --config may12a small

then the tools will remove build and run directories with username Jeffrey for Small benchmarks generated by config file may12a.cfg.

--action clobber Clean + remove the corresponding executables.
--action trash Remove run and build directories for all users and all labels for the specified benchmarks.
--action realclean A synonym for --action trash
--action scrub Trash + remove the corresponding executables.
Caution: Fake mode is not implemented for the cleaning actions.
For example, if you say runhpc --fake --action=clean, the cleaning really happens.

Clean by hand:
If you prefer, you can clean disk space by entering commands such as the following:

rm -Rf $SPEC/benchspec/HPC/*/run
rm -Rf $SPEC/benchspec/HPC/*/build
rm -Rf $SPEC/benchspec/HPC/*/exe 

The above commands not only empty the contents of the run, build, and exe directories; they also delete the directories themselves. That's fine; the tools will re-create those directories if they are needed again later on.
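A quick way to convince yourself this is safe is to rehearse the same pattern on a mock tree (purely hypothetical paths created with mktemp; no benchmarks involved):

```shell
# Mock-up of the hand-cleaning commands on a throwaway tree.
SPEC=$(mktemp -d)                      # stand-in for a real $SPEC
mkdir -p "$SPEC/benchspec/HPC/605.lbm_s/run" \
         "$SPEC/benchspec/HPC/605.lbm_s/build" \
         "$SPEC/benchspec/HPC/605.lbm_s/exe"

rm -Rf "$SPEC"/benchspec/HPC/*/run
rm -Rf "$SPEC"/benchspec/HPC/*/build
rm -Rf "$SPEC"/benchspec/HPC/*/exe

ls "$SPEC/benchspec/HPC/605.lbm_s"     # empty: the directories are gone
```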

result directories can be cleaned or renamed. Don't worry about creating a new directory; runhpc will do so automatically. You should be careful to ensure no surprises for any currently-running users. If you move result directories, it is a good idea to also clean temporary directories at the same time.

cd $SPEC
mv result old-result
rm -Rf tmp/
cd output_root     # (If you use an output_root)
rm -Rf tmp/
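The same rotation can be tried on a scratch tree first (the log file name below is hypothetical; a real tree would be $SPEC):

```shell
# Scratch-tree rehearsal of rotating result/ and cleaning tmp/.
SPEC=$(mktemp -d)
mkdir -p "$SPEC/result" "$SPEC/tmp"
: > "$SPEC/result/hpc2021.001.log"     # hypothetical old log file

cd "$SPEC"
mv result old-result                   # old reports kept, out of the way
rm -Rf tmp/                            # clean temporaries at the same time
```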

I have so much disk space, I'll never use all of it:

Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:


In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.

2 Commonly used options

Most users of runhpc will want to become familiar with the following options.

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.

--action action


--config name

--copies number

--flagsurl URL[,URL...]



--iterations number


--output_format format

all
    Implies all of the following except screen, check, and mail.

cfg|config
    The config file used for this run, written as a numbered file in the
    result directory, for example, $SPEC/result/hpc2021_sml.005.small.ref.cfg

      1. The config file is saved on every run, as a compressed portion of
         the rawfile. Therefore, you can regenerate it later, if desired,
         using rawformat.
      2. Results published by SPEC include your config file. Anyone can
         download it and try to reproduce your result.
      3. The config file printed by --output_format=config is not identical
         to the original:
         - The file name matches the other files for this result, not the
           name you had in your config/ directory.
         - It does not include protected comments.
         - It includes a copy of the runhpc line that invoked it.
         - It tells you whether output_root was defined.
         - It includes any result edits you make after the run (see
           utility.html).
         - It does not include the HASH section.

check
    Reportable syntax check (automatically enabled for reportable runs).
    - Causes the format of many fields to be checked, e.g. "Nov-2018", not
      "11/18" for hw_avail.
    - Consistent formats help readers, especially when searching.
    - check is included by default for reportable runs and when using
      --rawformat.
    - It can be disabled by adding nocheck to your list of formats.

csv
    Comma-separated values. If you populate spreadsheets from your runs, you
    probably should not cut/paste data from text files; you'll get more
    accurate data by using --output_format csv. The csv report includes all
    runs, more decimal places, system information, and even the compiler
    flags.

default
    Implies HTML and text.

flags
    Flag report. Will also be produced when formats that use it are
    requested (PDF, HTML).

html
    Web page.

mail
    All generated reports will be sent to an address specified in the config
    file.

pdf
    Portable Document Format. This format is the design center for SPEChpc
    2021 reporting. Other formats contain less information: text lacks
    graphs, PostScript lacks hyperlinks, and HTML is less structured. (PDF
    does not appear as part of "default" only because some systems may lack
    the ability to read it.)

raw
    The unformatted raw results, written to a numbered file in the result
    directory that ends with .rsf (e.g.
    /spec/hpc2021/result/hpc2021_sml.005.small.ref.rsf). Your raw result
    files are your most important files, because the other formats are
    generated from them.

screen
    ASCII text output to stdout.

text
    Plain ASCII text file.

--pmodel model (New in SPEChpc)

--ranks N

--rawformat rawfiles



--threads N

--tune tuning

3. Less common options

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.


--basepeak [bench,bench,...]


--comment "comment text"

--define SYMBOL[=VALUE]

--delay secs


--expid subdir





--graph_max N

--graph_min N

--http_proxy proxy[:port]

--http_timeout N

--info_wrap_columns N


--label name



--notes_wrap_columns N

--output_root directory

--preenv, --nopreenv

--review, --noreview

--setprocgroup, --nosetprocgroup

--size size[,size...]

--table, --notable


--undef SYMBOL


--use_submit_for_compare (Not tested with SPEChpc)


--username name

--verbose n


4 Removed/unsupported options

4.1 Options that are no longer needed

Rate and Speed

The CPU2006 feature --rate[link goes to CPU2006]
and the CPU2006 feature --speed[link goes to CPU2006]
are not needed in SPEChpc 2021.

Parallel setup

The SPEC CPU2006 feature --parallel_setup[link goes to CPU2006]
and the CPU2006 feature --parallel_setup_prefork[link goes to CPU2006]
and the CPU2006 feature --parallel_setup_type[link goes to CPU2006]
are not used in SPEChpc 2021.

--[no]feedback (Not used with SPEChpc)

--power --nopower (Not available with SPEChpc)

--train_with WORKLOAD (Not available with SPEChpc)

--parallel_test processes (Not available with SPEChpc)

--parallel_test_workloads workload,... (Not available with SPEChpc)

4.2 Features removed

The SPEC CPU2006 feature --machine[link goes to CPU2006]
was removed because it was rarely used, and the additional complexity and confusion that it caused was deemed not worthwhile.

The CPU2006 feature --maxcompares[link goes to CPU2006]
was removed due to complexity considerations when implementing the new parallel setup methods.

4.3 Unsupported

The SPEC CPU2006 feature --make_bundle[link goes to CPU2006]
and the CPU2006 feature --unpack_bundle[link goes to CPU2006]
and the CPU2006 feature --use_bundle[link goes to CPU2006]
have not been tested in the SPEChpc 2021 environment.
It is not known whether anyone uses the features, and they were deemed not a priority for V1.
It is possible that you might be able to get them to work by following the CPU2006 instructions linked above, but no promises are made.

5 Quick reference

(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").

-a Same as --action
--action action Do: build|buildsetup|clean|clobber|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate
--basepeak Copy base results to peak (use with --rawformat)
--nobuild Do not attempt to build binaries
-c Same as --config
-C Same as --copies
--check_version Check whether an updated version of SPEChpc 2021 is available
--comment "text" Add a comment to the log and the stored config file.
--config file Set config file for runhpc to use
--copies Set the number of copies for a SPECrate run
-D Same as --rebuild
-d Same as --deletework
--debug Same as --verbose
--define SYMBOL[=VALUE] Define a config preprocessor macro
--delay secs Add delay before and after benchmark invocation
--deletework Force work directories to be rebuilt
--dryrun Same as --fake
--dry-run Same as --fake
--expid=dir Experiment id, a subdirectory to use for results/runs/exe
-F Same as --flagsurl
--fake Show what commands would be executed.
--fakereport Generate a report without compiling codes or doing a run.
--fakereportable Generate a fake report as if "--reportable" were set.
--[no]feedback Control whether builds use feedback directed optimization
--flagsurl url Location (url or filespec) where to find your flags file
--graph_auto Let the tools pick minimum and maximum for the graph
--graph_min N Set the minimum for the graph
--graph_max N Set the maximum for the graph
-h Same as --help
--help Print usage message
--http_proxy Specify the proxy for internet access
--http_timeout Timeout when attempting http access
-I Same as --ignore_errors
-i Same as --size
--ignore_errors Continue with benchmark runs even if some fail
--ignoreerror Same as --ignore_errors
--info_wrap_column N Set wrap width for non-notes informational items
--infowrap Same as --info_wrap_column
--input Same as --size
--iterations N Run each benchmark N times
--keeptmp Keep temporary files
-L Same as --label
-l Same as --loose
--label label Set the label for executables, build directories, and run directories
--loose Do not produce a reportable result
--noloose Same as --reportable
-M Same as --make_no_clobber
--make_no_clobber Do not delete existing object files before building.
--mockup Same as --fakereportable
-n Same as --iterations
-N Same as --nobuild
--notes_wrap_column N Set wrap width for notes lines
--noteswrap Same as --notes_wrap_column
-o Same as --output_format
--output_format format[,format...] Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text
--output_root=dir Write all files here instead of under $SPEC
--parallel_test Number of test/train workloads to run in parallel
--pmodel Enable a node level parallel model
--[no]power Control power measurement during run
--preenv Allow environment settings in config file to be applied
-R Same as --rawformat
--ranks Set the number of MPI ranks to use.
--rawformat Format raw file
--rebuild Force a rebuild of binaries
--reportable Produce a reportable result
--noreportable Same as --loose
--reportonly Same as --fakereport
--[no]review Format results for review
-s Same as --reportable
-S SYMBOL[=VALUE] Same as --define
-S SYMBOL:VALUE Same as --define
--[no]setprocgroup [Don't] try to create all processes in one group.
--size size[,size...] Select data set(s): test|train|ref
--strict Same as --reportable
--nostrict Same as --loose
-T Same as --tune
--[no]table Do [not] include a detailed table of results
--threads=N Set number of host threads per MPI rank.
--test Run various perl validation tests on specperl
--train_with Change the training workload
--tune Set the tuning levels to one of: base|peak|all
--tuning Same as --tune
--undef SYMBOL Remove any definition of this config preprocessor macro
-U Same as --username
--update Check for updates to benchmark and example flag files, and config files
--username Name of user to tag as owner for run directories
--use_submit_for_compare If submit was used for the run, use it for comparisons too.
--use_submit_for_speed Use submit commands for SPECspeed (default is only for SPECrate).
-v Same as --verbose
--verbose Set verbosity level for messages to N
-V Same as --version
--version Output lots of version information
-? Same as --help

Using SPEChpc®2021: The 'runhpc' Command: Copyright © 2021 Standard Performance Evaluation Corporation (SPEC)