The 'runspec' Command

Last updated: 7 Feb 2017 by BrianWhitney

(To check for possible updates to this document, please see http://www.spec.org/accel/Docs/ )

Contents

1 Introduction

1.1 Who Needs runspec?

1.2 About Config Files

1.2.1 Finding a config file

1.2.2 Naming your config file

1.2.3 If you change only one thing...

1.3 About Defaults

1.4 About Disk Usage

1.4.1 Directory tree

1.4.2 Hey! Where did all my disk space go?

1.5 Multi-user support

1.5.1 Recommended sharing method: output_root

1.5.2 Alternative sharing methods

2 Before Using runspec

2.1 Install kit

2.2 Have a config file

2.3 Undefine SPEC

2.4 Set your path: Unix

2.5 Set your path: Windows

2.6 Check your disk space

3 Using runspec

3.1 Simplest usage

3.1.1 Reportable run

3.1.2 Running selected benchmarks

3.1.3 Output files

3.2 Syntax

3.2.1 Benchmark names in run lists

3.2.2 Run order for reportable runs

3.2.3 Run order when more than one tuning is present

3.3 Actions

3.4 Commonly used options

--action --check_version --config --device --flagsurl --help --ignore_errors --iterations --loose --output_format --platform --[no]power --rawformat --rebuild --reportable --tune

3.5 Less commonly used options

--basepeak --nobuild --comment --define --delay --deletework --extension --fake --fakereport --fakereportable --[no]feedback --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --[no]keeptmp --log_timestamp --machine --make_no_clobber --make_bundle --notes_wrap_column --[no]preenv --reportonly --[no]review --[no]setprocgroup --size --test --[no]table --undef --update --update_flags --unpack_bundle --use_bundle --username --verbose --version

4 Quick reference

Note: links to SPEC ACCEL documents on this web page assume that you are reading the page from a directory that also contains the other SPEC ACCEL documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents at http://www.spec.org/accel/Docs/ instead.

1 Introduction

1.1 Who Needs runspec?

Everyone who uses SPEC ACCEL needs runspec. It is the primary tool in the suite. It is used to build the benchmarks, run them, and report on their results. All users of ACCEL should read this document.

If you are a beginner, please start out by reading from the beginning through section 3.1 Simplest Usage. That will probably be enough to get you started.

1.2 About Config Files

In order to use runspec, you need a "config file", which contains detailed instructions about how to build and run the benchmarks. You may not have to learn much about config files in order to get started. Typically, you start off using a config file that someone else has previously written.

1.2.1 Finding a config file

Where can you find such a config file? There are various sources:

  1. Look in the directory $SPEC/config/ (Unix) or %SPEC%\config\ (Windows). You may find that there is already a config file there with a name that indicates that it is appropriate for your system. You may even find that default.cfg already contains settings that would be a good starting place for your system.

  2. Look at the SPEC web site (http://www.spec.org/accel/) for an ACCEL result submission that used your system, or a system similar to yours. You can download the config file from that submission.

  3. You can review the examples in
    $SPEC/config/Example*.cfg (Unix) or
    %SPEC%\config\Example*.cfg (Windows).
  4. Alternatively, you can write your own, using the instructions in config.html

1.2.2 Naming your config file

Once you have found a config file that you would like to use as a starting point, you will probably find it convenient to copy it, give the copy a name that is meaningful to you, and modify it according to your needs.

1.2.3 If you change only one thing...

At first, you may hesitate to change settings in config files, until you have a chance to read config.html. But there is one thing that you might want to change right away. Look for a line that says:

ext=

That line determines what extension will be added to your binaries. If there are comments next to that line giving instructions ("# set ext to A for this, or to B for that"), then set it accordingly. But if there are no such instructions, then usually you are free to set the extension to whatever you like, which can be very useful to ensure that your binaries are not accidentally overwritten. You might add your name to the extension if you are sharing a testbed with others. Or, you may find it convenient to keep binaries for a series of experiments, to facilitate later analysis; if you're naming your config files with names such as feb14a.cfg, you might choose to use "ext=feb14a" in the config file.
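The config-file-name-as-extension scheme just described can be scripted. Below is a hypothetical helper (the file name feb14a.cfg and the stand-in ext= line are illustrative, not part of the SPEC tools):

```shell
# Hypothetical helper: make the binary extension in a copied config file
# match the config file's own base name (e.g. feb14a.cfg -> ext=feb14a).
# The sample config line written here is a stand-in, not a real config.
cfg=feb14a.cfg
printf 'ext=default\n' > "$cfg"            # stand-in for a copied config file
ext="${cfg%.cfg}"                          # strip the .cfg suffix
sed "s/^ext=.*/ext=${ext}/" "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
grep '^ext=' "$cfg"                        # prints: ext=feb14a
rm -f "$cfg"
```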

1.3 About Defaults

The SPEC tools have followed two principles regarding defaults:

  1. There should always be a default for everything
  2. It should be easy to change the defaults

This means (the good news) that something sensible will usually happen, even when you are not explicit about what you want. But it also means (the bad news) that if something unexpected happens, you may have to look in several places in order to figure out why it behaves differently than you expect.

The order of precedence for settings is:

Highest precedence: runspec command
Middle: config file
Lowest: the tools as shipped by SPEC

Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so (perhaps in the comments to the config file).
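The precedence rule can be pictured as a simple three-way fallback. This sketch is purely illustrative (resolve is not a real SPEC function; the values are made up):

```shell
# Illustrative only: a command-line value beats the config file,
# which beats the shipped default.
resolve() {
  cli="$1"; cfg="$2"; def="$3"
  if [ -n "$cli" ]; then echo "$cli"     # highest precedence: runspec command
  elif [ -n "$cfg" ]; then echo "$cfg"   # middle: config file
  else echo "$def"; fi                   # lowest: tools as shipped by SPEC
}
resolve ""     "peak" "base"   # config file wins over default: prints peak
resolve "base" "peak" "base"   # command line wins over both:   prints base
```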

1.4 About Disk Usage

1.4.1 Directory Tree

The structure of the ACCEL directory tree is:

$SPEC or %SPEC% - the root directory
   Docs         - location for html documents concerning the benchmark suite
   Docs.txt     - location for text versions of the Docs tree
   benchspec    - some suite-wide files
      ACCEL     - the benchmarks
   bin          - tools to run and report on the suite
   config       - config files
   result       - log files and reports
   tools        - sources for the ACCEL tools

Within each of the individual benchmarks, the structure is:

nnn.benchmark - root for this benchmark
   Docs       - Documentation about the benchmark
   Spec       - SPEC metadata about the benchmark
   data        
      all     - data used by all runs (if needed by the benchmark)
      ref     - the real data set, required for all result reporting
      test    - data for a simple test that an executable is functional
      train   - data for feedback-directed optimization
   build      - all builds take place here
   exe        - compiled versions of the benchmark
   run        - all runs take place here
   src        - the sources for the benchmark

1.4.2 Hey! Where did all my disk space go?

When you find yourself wondering "Where did all my disk space go?", the answer is "The run directories." Most (*) activity takes place in automatically created subdirectories of $SPEC/benchspec/ACCEL/*/run/ (Unix) or %SPEC%\benchspec\ACCEL\*\run\ (Windows).

For example, suppose Bob has a config file that he is using to test some new memory optimizations, and has set

ext=BobMemoryOpt

in his config file. In that case, the tools would create directories such as these:

$ pwd
/Users/bob/accel/benchspec/ACCEL/101.tpacf/run
$ ls
list
run_base_test_BobMemoryOpt.0001
run_base_train_BobMemoryOpt.0001
run_base_ref_BobMemoryOpt.0001
run_peak_test_BobMemoryOpt.0001
run_peak_train_BobMemoryOpt.0001
run_peak_ref_BobMemoryOpt.0001
$ 

To get your disk space back, see the documentation of the various cleaning options, below; or issue a command such as the following (on Unix systems; Windows users can select the files with Explorer):

rm -Rf $SPEC/benchspec/ACCEL/*/run/run*BobMemory*

The effect of the above command would be to delete all the run directories whose extensions match *BobMemory*. Note that the command did not delete the directories where the benchmarks were built (...ACCEL/*/build/*); sometimes it can come in handy to keep the build directories, perhaps to assist with debugging.

(*) Other space: In addition to the run directories, other consumers of disk space include: (1) temporary files; for a listing of these, see the documentation of keeptmp; and (2) the build directories. For the example above, underneath:

/Users/bob/accel/benchspec/ACCEL/101.tpacf/build/

will be found:

$ ls
build_base_BobMemoryOpt.0001
build_peak_BobMemoryOpt.0001
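If you just want a quick measurement before cleaning anything, a command along these lines (Unix; assumes shrc has been sourced so that $SPEC is set) lists the biggest run directories first:

```shell
# Show disk usage (in KB) of each benchmark's run directory, largest
# first.  Assumes $SPEC is set; prints nothing if no run directories
# exist yet.
du -sk "$SPEC"/benchspec/ACCEL/*/run 2>/dev/null | sort -rn | head
```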

1.5 Multi-user support

(If you are not sharing a SPEC ACCEL installation with other users, you can skip ahead to section 2.)

The SPEC ACCEL toolset provides support for multiple users of a single installation, but the tools also rely upon users to make appropriate choices regarding setup of operating-system file protections. This section describes the multi-user features and ways of organizing protections.

If you have more than one user of SPEC ACCEL, you can use additional features and choose from several different ways to organize the on-disk layout to share usage of the product. The recommended way is described first.

1.5.1 Recommended sharing method: output_root

The recommended method for sharing a SPEC ACCEL installation among multiple users has 4 steps:

Step                                          Example (Unix)
Protect most of the SPEC tree read-only       chmod -R ugo-w $SPEC
Allow shared access to the config directory   chmod 1777 $SPEC/config
Keep your own config files                    cp config/assignment1.cfg config/alan1.cfg
Add an output_root to your config file        output_root=/home/${username}/spec

More detail about the steps is below.

  1. Most of the ACCEL tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:

    chmod -R ugo-w $SPEC
    
  2. The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users. It is written to by users when they create config files, and by the tools themselves: config files are updated after successful builds to associate them with their binaries.

    On Unix, the above protection command needs to be supplemented with:

    chmod 1777 $SPEC/config
    

    which will have the effect (on most Unix systems) of allowing users to create config files which they can choose to protect to allow access only by themselves.

  3. Config files usually would not be shared between users. For example, students might create their own copies of a config file.

    Alan enters:

    cd /cs403
    . ./shrc
    cp config/assignment1.cfg config/alan1.cfg
    chmod u+w config/alan1.cfg
    runspec --config alan1 --action build 101.tpacf
    

    Wendy enters:

    cd /cs403
    . ./shrc
    cp config/assignment1.cfg config/wendy1.cfg
    chmod u+w config/wendy1.cfg
    runspec --config wendy1 --action build 101.tpacf
    
  4. Set output_root in the config files to change the destinations of the outputs.

    To see the effect of output_root, consider an example with and without the feature. If $SPEC is set to /cs403 and if ext=feb27a, then normally the build directory for 101.tpacf with base tuning would be:

    /cs403/benchspec/ACCEL/101.tpacf/build/build_base_feb27a.0001

    But if the config files include (near the top, before any occurrence of a section marker):

    output_root=/home/${username}/spec
    ext=feb27a
    

    then Alan's build directory for 101.tpacf will be

    /home/alan/spec/benchspec/ACCEL/101.tpacf/build/build_base_feb27a.0001

    and Wendy's will be

    /home/wendy/spec/benchspec/ACCEL/101.tpacf/build/build_base_feb27a.0001

    With the above setting of output_root, log files and reports that would normally go to /cs403/result instead will go to /home/alan/spec/result and /home/wendy/spec/result. Alan will find tpacf executables underneath /home/alan/spec/benchspec/ACCEL/101.tpacf/exe. And so forth.

Summary: output_root is the recommended way to separate users. Set the protection on the original tree to read-only, except for the config directory, which should be set to allow users to write, and protect, their own config files.
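The effect of the two chmod commands can be tried safely in a scratch directory standing in for $SPEC (the demo path is created and removed on the spot; nothing here touches a real installation):

```shell
# Scratch-directory demo of the protection scheme summarized above.
demo=$(mktemp -d)
mkdir -p "$demo"/config
chmod -R ugo-w "$demo"               # whole tree read-only
chmod 1777 "$demo"/config            # config dir: world-writable + sticky bit
ls -ld "$demo"/config | cut -c1-10   # prints: drwxrwxrwt
chmod -R u+w "$demo" && rm -rf "$demo"
```

The trailing "t" in the mode is the sticky bit, which on most Unix systems prevents users from deleting or renaming files in the directory that they do not own.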

1.5.2 Alternative sharing methods

An alternative is to keep all the files in a single directory tree. In this case:

Name convention: Users sharing a tree can adopt conventions to make their files more readily identifiable. As mentioned above, you can set your config file name to match your own name, and do the same for the extension.

Expid convention: Another alternative is to tag directories with labels that help to identify them based on an "experiment ID", with the config file feature expid, as described in config.html.

Spend the disk space: A final alternative, of course, is to not share. You can simply give each user their own copy of the entire SPEC ACCEL directory tree. This may be the easiest way to ensure that there are no surprises (at the expense of extra disk space.)


2 Before Using runspec

Before using runspec, you need to:

2.1 Install ACCEL

The runspec tool uses perl version 5.12.3, which is installed as specperl when you install ACCEL. If you haven't already installed the suite, then please see system-requirements.html, followed by the installation guide for your operating system.

2.2 Find a config file

You won't get far unless you have a config file, but fortunately you can get started by using a pre-existing config file. See About Config Files, above.

2.3 Undefine SPEC

If the environment variable SPEC is already defined (e.g. from a run of some other SPEC benchmark suite), it may be wise to undefine it first, e.g. by logging out and logging in, or by using whatever commands your system uses for removing definitions (such as unset).

To check whether the variable is already defined, type

echo $SPEC (Unix) or
echo %SPEC% (Windows)

On Unix systems, the desired output is nothing at all; on Windows systems, the desired output is %SPEC%.

Similarly, if your PATH includes tools from some other SPEC suite, it may be wise to remove them from your path.
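On sh-compatible shells, the check and the cleanup can be combined; this is a generic sketch, not a SPEC-provided command:

```shell
# Warn if a stale SPEC definition is present, then remove it.
if [ -n "${SPEC:-}" ]; then
  echo "SPEC was set to: $SPEC -- unsetting it"
  unset SPEC
fi
echo "SPEC is now: '${SPEC:-}'"      # prints: SPEC is now: ''
```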

Next, you need to set your path appropriately for your system type.
See section 2.4 for Unix or section 2.5 for Windows.

2.4 Setting the path: Unix (and Mac OSX)

If you are using a Unix system, change your current directory to the top-level SPEC directory and source either shrc (for sh-compatible shells) or cshrc (for csh-compatible shells). For example, from a sh-compatible shell:

. ./shrc

Note that it is, in general, a good idea to ensure that you understand what is in your path, and that you have only what you truly need. If you have non-standard versions of commonly used utilities in your path, you may avoid unpleasant surprises by taking them out.

Q. Do you have to be root? Occasionally, users of Unix systems have asked whether it is necessary to elevate privileges, or to become 'root', prior to entering the above command. SPEC recommends (*) that you do not become root, because: (1) To the best of SPEC's knowledge, no component of SPEC ACCEL needs to modify system directories, nor does any component need to call privileged system interfaces. (2) Therefore, if you find that there appears to be some reason why you need to be root, the cause is likely to be outside the SPEC toolset - for example, disk protections, or quota limits. (3) For safe benchmarking, it is better to avoid being root, for the same reason that it is a good idea to wear seat belts in a car: accidents happen, and humans make mistakes. For example, if you accidentally type:

kill 1

when you meant to say:

kill %1

then you will be very grateful if you are not privileged at that moment.

(*) This is only a recommendation, not a requirement nor a rule.

2.5 Setting the path: Windows

If you are using a Microsoft Windows system, start a Command Prompt Window (previously known as an "MSDOS window"). Change to the directory where you have installed ACCEL, then edit shrc.bat, following the instructions contained therein. For example:

C:\> f:
F:\> cd diego\accel
F:\diego\accel\> copy shrc.bat shrc.bat.orig
F:\diego\accel\> notepad shrc.bat

and follow the instructions in shrc.bat to make the appropriate edits for your compiler paths.

Caution: you may find that the lines are not correctly formatted (the text appears to be all run together) when you edit this file. If so, see the section "Using Text Files on Windows" in the Windows installation guide.

You will have to uncomment one of two lines:

   rem set SHRC_COMPILER_PATH_SET=yes 

or

   rem set SHRC_PRECOMPILED=yes  

by removing "rem" from the beginning of the desired line.

If you uncomment the first line, you will have to follow the instructions a few lines further on, to set up the environment for your compiler.

If you uncomment the second line, you must already have pre-compiled binaries for the benchmarks.

Note that it is, in general, a good idea to ensure that you understand what is in your path, and that you have only what you truly need. If you have non-standard versions of commonly used utilities in your path, you may avoid unpleasant surprises by taking them out. In order to help you understand your path, shrc.bat will print it after it is done.

When you are done, set the path using your edited shrc.bat, for example:

F:\diego\accel> shrc

2.6 Make sure that you have enough disk space.

Presumably, you checked that you had enough space when you read system-requirements.html, but now might be a good time to double check that you still have enough. Typically, you will want at least 2 GB of free disk space at the start of a run. Windows users can say "dir" and will find the free space at the bottom of the directory listing. Unix users can say "df -k ." to get a measure of free space in KB.

If you have done some runs, and you are wondering where your space has gone, see section 1.4.2.
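On Unix, the comparison against the 2 GB minimum can be automated. This sketch parses the Available column of df -k; the column layout can differ slightly across systems, so treat it as a starting point rather than a portable tool:

```shell
# Compare free space in the current directory against the suggested
# 2 GB (2097152 KB) minimum.  Column 4 of 'df -k' is 'Available' on
# most systems, but verify the layout on yours.
free_kb=$(df -k . | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt 2097152 ]; then
  echo "Warning: only $free_kb KB free -- less than 2 GB"
else
  echo "OK: $free_kb KB free"
fi
```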


3 Using runspec

3.1 Simplest usage

3.1.1 Reportable run

It is easiest to use runspec when you already have a config file, and perhaps pre-compiled binaries, that are known to work on your system.

In this lucky circumstance, all that needs to be done is to name the config file, select the benchmark suite name opencl, and say --reportable to attempt a full run.

For example, suppose that Wilfried wants to give Ryan a config file and compiled binaries with some new optimizations for a Unix system. Wilfried might type something like this:

[/usr/wilfried]$ cd $SPEC
[/bigdisk/accel]$ spectar -cvf - be*/A*/*/exe/*newomp* config/newomp.cfg | specxz > newomp.tar.xz

and then Ryan might type something like this:

ryan% cd /usr/ryan/accel
accel% bash
bash-2.05$ . ./shrc
bash-2.05$ specxz -dc newomp.tar.xz | spectar -xf -
bash-2.05$ runspec --config newomp.cfg --nobuild --device 1 --platform intel --reportable opencl

In the example above, the --nobuild emphasizes that the tools should not attempt to build the binaries; instead, the prebuilt binaries should be used. If there is some reason why the tools don't like that idea (for example: the config file does not match the binaries), they will complain and refuse to run, but with --nobuild they won't go off and try to do a build.

As another example, suppose that Reinhold has given Kaivalya a Windows config file with changes from 12 August, and Kaivalya wants to run the suite. He might say something like this:

F:\kaivalya\accel\> shrc
F:\kaivalya\accel\> specxz -dc reinhold_aug12a.tar.xz | spectar -xf -
F:\kaivalya\accel\> runspec --config reinhold_aug12a --platform intel --device 1 --reportable opencl

3.1.2 Running selected benchmarks

If you want to run a subset of the benchmarks, rather than running the whole suite, you can name them. Since a reportable run uses an entire suite, you will need to turn off reportable:

[/usr/mat/accel]$ runspec --config mat_dec25j.cfg --noreportable --platform intel --device 1 101.tpacf

3.1.3 Output files

Look for the output of your runspec command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in config.html.

The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.

This concludes the section on simplest usage.
If simple commands such as the above are not enough to meet your needs, you can find out about commonly used options by continuing to read the next 3 sections (3.2, 3.3, and 3.4).

3.2 Syntax

The syntax for the runspec command is:

runspec [options] [list of benchmarks to run]

Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:

runspec --config=dianne_july25a --debug=99 opencl 
runspec --config dianne_july25a --debug 99 opencl 
runspec --conf dianne_july25a   --deb 99   opencl 
runspec -c dianne_july25a       -v 99      opencl 

The list of benchmarks to run can include individual benchmark names (see section 3.2.1) and/or one or more suite names: opencl, openmp, or openacc.

For a reportable run, you must specify opencl, openmp, or openacc.

3.2.1 Benchmark names in run lists

Individual benchmarks can be named, numbered, or both; and they can be abbreviated, as long as you enter enough characters for uniqueness. For example, each of the following commands does the same thing:

runspec -c jason_july09d --noreportable 101.tpacf 103.stencil
runspec -c jason_july09d --noreportable 101 103
runspec -c jason_july09d --noreportable tpacf stencil
runspec -c jason_july09d --noreportable tp st

It is also possible to exclude a benchmark, using a hat (^, also known as caret, typically found as shift-6). For example, suppose that while you are debugging a validation error in benchmark 101.tpacf you want to check the performance of all the other opencl benchmarks. You could run all of the benchmarks except 101.tpacf by entering a command such as this one:

runspec --noreportable -c kathy_sep14c opencl ^tpacf 

Note that if hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude, like this:

E:\accel> runspec --noreportable -c cathy_apr21b opencl  "^tpacf"

3.2.2 Run order for reportable runs

A reportable run runs all the benchmarks in a suite with the test and train data sets as an additional verification that the benchmark binaries get correct results. The test and train workloads are not timed. Then, the reference workloads are run three times, so that the median run time can be determined for each benchmark. For example, here are the runs for a reportable run of ACCEL:

$ grep runspec: *036.log
runspec: runspec -c mic -T base --device 1 --platform intel  --reportable opencl
$ grep Running *036.log
Running Benchmarks
  Running 001.systest test base cuda default [Tue Dec 10 19:40:32 2013]
  Running 101.tpacf test base cuda default [Tue Dec 10 19:40:33 2013]
  Running 103.stencil test base cuda default [Tue Dec 10 19:40:34 2013]
  Running 104.lbm test base cuda default [Tue Dec 10 19:40:53 2013]
  Running 110.fft test base cuda default [Tue Dec 10 19:42:01 2013]
  Running 112.spmv test base cuda default [Tue Dec 10 19:42:14 2013]
  Running 114.mriq test base cuda default [Tue Dec 10 19:42:20 2013]
  Running 116.histo test base cuda default [Tue Dec 10 19:42:22 2013]
  Running 117.bfs test base cuda default [Tue Dec 10 19:42:24 2013]
  Running 118.cutcp test base cuda default [Tue Dec 10 19:42:35 2013]
  Running 120.kmeans test base cuda default [Tue Dec 10 19:42:37 2013]
  Running 121.lavamd test base cuda default [Tue Dec 10 19:42:40 2013]
  Running 122.cfd test base cuda default [Tue Dec 10 19:42:41 2013]
  Running 123.nw test base cuda default [Tue Dec 10 19:42:50 2013]
  Running 124.hotspot test base cuda default [Tue Dec 10 19:42:51 2013]
  Running 125.lud test base cuda default [Tue Dec 10 19:42:57 2013]
  Running 126.ge test base cuda default [Tue Dec 10 19:42:58 2013]
  Running 127.srad test base cuda default [Tue Dec 10 19:43:00 2013]
  Running 128.heartwall test base cuda default [Tue Dec 10 19:43:01 2013]
  Running 140.bplustree test base cuda default [Tue Dec 10 19:43:04 2013]
Running Benchmarks
  Running 001.systest train base cuda default [Tue Dec 10 19:43:15 2013]
  Running 101.tpacf train base cuda default [Tue Dec 10 19:43:16 2013]
  Running 103.stencil train base cuda default [Tue Dec 10 19:43:17 2013]
  Running 104.lbm train base cuda default [Tue Dec 10 19:43:36 2013]
  Running 110.fft train base cuda default [Tue Dec 10 19:44:44 2013]
  Running 112.spmv train base cuda default [Tue Dec 10 19:44:58 2013]
  Running 114.mriq train base cuda default [Tue Dec 10 19:45:02 2013]
  Running 116.histo train base cuda default [Tue Dec 10 19:45:05 2013]
  Running 117.bfs train base cuda default [Tue Dec 10 19:45:06 2013]
  Running 118.cutcp train base cuda default [Tue Dec 10 19:45:20 2013]
  Running 120.kmeans train base cuda default [Tue Dec 10 19:45:21 2013]
  Running 121.lavamd train base cuda default [Tue Dec 10 19:45:25 2013]
  Running 122.cfd train base cuda default [Tue Dec 10 19:45:27 2013]
  Running 123.nw train base cuda default [Tue Dec 10 19:46:34 2013]
  Running 124.hotspot train base cuda default [Tue Dec 10 19:46:37 2013]
  Running 125.lud train base cuda default [Tue Dec 10 19:46:48 2013]
  Running 126.ge train base cuda default [Tue Dec 10 19:46:52 2013]
  Running 127.srad train base cuda default [Tue Dec 10 19:46:56 2013]
  Running 128.heartwall train base cuda default [Tue Dec 10 19:46:59 2013]
  Running 140.bplustree train base cuda default [Tue Dec 10 19:47:03 2013]
Running Benchmarks
  Running (#1) 001.systest ref base cuda default [Tue Dec 10 19:47:20 2013]
  Running (#1) 101.tpacf ref base cuda default [Tue Dec 10 19:47:21 2013]
  Running (#1) 103.stencil ref base cuda default [Tue Dec 10 19:48:36 2013]
  Running (#1) 104.lbm ref base cuda default [Tue Dec 10 19:50:14 2013]
  Running (#1) 110.fft ref base cuda default [Tue Dec 10 19:51:19 2013]
  Running (#1) 112.spmv ref base cuda default [Tue Dec 10 19:52:41 2013]
  Running (#1) 114.mriq ref base cuda default [Tue Dec 10 19:54:22 2013]
  Running (#1) 116.histo ref base cuda default [Tue Dec 10 19:55:16 2013]
  Running (#1) 117.bfs ref base cuda default [Tue Dec 10 19:56:51 2013]
  Running (#1) 118.cutcp ref base cuda default [Tue Dec 10 19:58:09 2013]
  Running (#1) 120.kmeans ref base cuda default [Tue Dec 10 19:58:48 2013]
  Running (#1) 121.lavamd ref base cuda default [Tue Dec 10 20:00:29 2013]
  Running (#1) 122.cfd ref base cuda default [Tue Dec 10 20:01:49 2013]
  Running (#1) 123.nw ref base cuda default [Tue Dec 10 20:03:17 2013]
  Running (#1) 124.hotspot ref base cuda default [Tue Dec 10 20:04:43 2013]
  Running (#1) 125.lud ref base cuda default [Tue Dec 10 20:06:03 2013]
  Running (#1) 126.ge ref base cuda default [Tue Dec 10 20:07:50 2013]
  Running (#1) 127.srad ref base cuda default [Tue Dec 10 20:08:56 2013]
  Running (#1) 128.heartwall ref base cuda default [Tue Dec 10 20:10:13 2013]
  Running (#1) 140.bplustree ref base cuda default [Tue Dec 10 20:12:52 2013]
  Running (#2) 001.systest ref base cuda default [Tue Dec 10 20:15:17 2013]
  Running (#2) 101.tpacf ref base cuda default [Tue Dec 10 20:15:18 2013]
  Running (#2) 103.stencil ref base cuda default [Tue Dec 10 20:16:32 2013]
  Running (#2) 104.lbm ref base cuda default [Tue Dec 10 20:18:10 2013]
  Running (#2) 110.fft ref base cuda default [Tue Dec 10 20:19:15 2013]
  Running (#2) 112.spmv ref base cuda default [Tue Dec 10 20:20:38 2013]
  Running (#2) 114.mriq ref base cuda default [Tue Dec 10 20:22:19 2013]
  Running (#2) 116.histo ref base cuda default [Tue Dec 10 20:23:13 2013]
  Running (#2) 117.bfs ref base cuda default [Tue Dec 10 20:24:48 2013]
  Running (#2) 118.cutcp ref base cuda default [Tue Dec 10 20:26:06 2013]
  Running (#2) 120.kmeans ref base cuda default [Tue Dec 10 20:26:45 2013]
  Running (#2) 121.lavamd ref base cuda default [Tue Dec 10 20:28:30 2013]
  Running (#2) 122.cfd ref base cuda default [Tue Dec 10 20:29:50 2013]
  Running (#2) 123.nw ref base cuda default [Tue Dec 10 20:31:19 2013]
  Running (#2) 124.hotspot ref base cuda default [Tue Dec 10 20:32:44 2013]
  Running (#2) 125.lud ref base cuda default [Tue Dec 10 20:34:04 2013]
  Running (#2) 126.ge ref base cuda default [Tue Dec 10 20:35:52 2013]
  Running (#2) 127.srad ref base cuda default [Tue Dec 10 20:36:58 2013]
  Running (#2) 128.heartwall ref base cuda default [Tue Dec 10 20:38:15 2013]
  Running (#2) 140.bplustree ref base cuda default [Tue Dec 10 20:40:54 2013]
  Running (#3) 001.systest ref base cuda default [Tue Dec 10 20:43:18 2013]
  Running (#3) 101.tpacf ref base cuda default [Tue Dec 10 20:43:20 2013]
  Running (#3) 103.stencil ref base cuda default [Tue Dec 10 20:44:34 2013]
  Running (#3) 104.lbm ref base cuda default [Tue Dec 10 20:46:12 2013]
  Running (#3) 110.fft ref base cuda default [Tue Dec 10 20:47:17 2013]
  Running (#3) 112.spmv ref base cuda default [Tue Dec 10 20:48:41 2013]
  Running (#3) 114.mriq ref base cuda default [Tue Dec 10 20:50:21 2013]
  Running (#3) 116.histo ref base cuda default [Tue Dec 10 20:51:15 2013]
  Running (#3) 117.bfs ref base cuda default [Tue Dec 10 20:52:50 2013]
  Running (#3) 118.cutcp ref base cuda default [Tue Dec 10 20:54:08 2013]
  Running (#3) 120.kmeans ref base cuda default [Tue Dec 10 20:54:47 2013]
  Running (#3) 121.lavamd ref base cuda default [Tue Dec 10 20:56:28 2013]
  Running (#3) 122.cfd ref base cuda default [Tue Dec 10 20:57:48 2013]
  Running (#3) 123.nw ref base cuda default [Tue Dec 10 20:59:17 2013]
  Running (#3) 124.hotspot ref base cuda default [Tue Dec 10 21:00:43 2013]
  Running (#3) 125.lud ref base cuda default [Tue Dec 10 21:02:02 2013]
  Running (#3) 126.ge ref base cuda default [Tue Dec 10 21:03:50 2013]
  Running (#3) 127.srad ref base cuda default [Tue Dec 10 21:04:56 2013]
  Running (#3) 128.heartwall ref base cuda default [Tue Dec 10 21:06:13 2013]
  Running (#3) 140.bplustree ref base cuda default [Tue Dec 10 21:08:51 2013]

The above order can be summarized as:

test
train 
ref1, ref2, ref3

Sometimes, it can be useful to understand when directory setup occurs. So, let's expand the list to include setup:

setup for test
test 
setup for train
train 
setup for ref
ref1, ref2, ref3
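The reason for the three ref runs is that the reported time for each benchmark is the median of the three. For illustration (the times below are made up, not from a real run):

```shell
# Median of three ref-run times: sort numerically, take the middle value.
printf '%s\n' 101.4 99.8 100.6 | sort -n | awk 'NR==2'   # prints: 100.6
```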

3.2.3 Run order when more than one tuning is present

If you run both base and peak tuning, base is always run first. If you do a reportable run with both base and peak, the order is:

setup for test
base test, peak test
setup for train
base train, peak train
setup for ref
base ref1, base ref2, base ref3
peak ref1, peak ref2, peak ref3

3.3 Actions

When runspec is used, it normally (*) takes some kind of action for the set of benchmarks specified at the end of the command line (or defaulted from the config file). The default action is validate, which means that the benchmarks will be built if necessary, the run directories will be set up, the benchmarks will be run, and reports will be generated.

(*) Exception: if you use the --rawformat switch, then --action is ignored.

If you want to cause a different action, then you can enter one of the following runspec options:

--action build
    Compile the benchmarks. More information about compiling may be found in config.html, including information about additional files that are output during a build.
--action buildsetup
    Set up build directories for the benchmarks, but do not attempt to compile them.
--action configpp
    Preprocess the config file and dump it to stdout.
--action onlyrun
    Run the benchmarks but do not bother to verify that they got the correct answers. Reports are always marked "invalid", since the correctness checks are skipped. Therefore, this option is rarely useful, but it can be selected if, for example, you are generating a performance trace and wish to avoid tracing some of the tools overhead.
--action report
    Synonym for --fakereport; see also --fakereportable.
--action run
    Synonym for --action validate.
--action runsetup
    Synonym for --action setup.
--action setup
    Set up the run directories. Copy executables and data to work directories.
--action validate
    Build (if needed), run, check for correct answers, and generate reports.
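A common pattern with these actions is to compile ahead of time and run later; a brief sketch, assuming a config file named mycfg.cfg (the config name and benchset are placeholders):

```shell
# Build the binaries only; nothing is run:
runspec --config=mycfg --action=build opencl

# Later, run and validate using the existing binaries (no rebuild attempted):
runspec --config=mycfg --action=validate --nobuild opencl
```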

In addition, the following cleanup actions are available (in order by level of vigor):

--action clean
    Empty all run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to jeff and this command is entered:

      D:\accel\> runspec --action clean --config may12a opencl

    then the tools will remove run directories with username jeff for benchmarks generated by config file may12a.cfg (in nnn.benchmark\run and nnn.benchmark\build).
--action clobber
    Clean + remove all executables of the current type for the specified benchmark set.
--action trash
    Same as clean, but do it for all users of this SPEC directory tree, and all types, regardless of what's in the config file.
--action realclean
    A synonym for --action trash.
--action scrub
    Remove everybody's run and build directories and all executables for the specified benchmark set.

Alternative cleaning method:

If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):

      rm -Rf $SPEC/benchspec/A*/*/run
      rm -Rf $SPEC/benchspec/A*/*/exe

Notes

  1. The above commands not only empty the contents of the run and exe directories; they also delete the directories themselves. That's fine; the tools will re-create the run and exe directories if they are needed again later on.

  2. The above commands do NOT clean the build directories (unless you've set build_in_build_dir=0). Often, it's useful to preserve the build directories for debugging purposes, but if you'd like to get rid of them too, just add $SPEC/benchspec/A*/*/build to your list of directories.
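The effect of the rm commands above can be sketched on a throwaway directory tree (the layout mimics $SPEC/benchspec; the benchmark name 101.tpacf is used only for illustration):

```shell
# Build a miniature stand-in for the $SPEC tree, then apply the same pattern.
SPEC=$(mktemp -d)
mkdir -p "$SPEC/benchspec/ACCEL/101.tpacf/run" \
         "$SPEC/benchspec/ACCEL/101.tpacf/exe" \
         "$SPEC/benchspec/ACCEL/101.tpacf/build"

# Same deletion pattern as above: the run and exe trees go away entirely.
rm -Rf "$SPEC"/benchspec/A*/*/run "$SPEC"/benchspec/A*/*/exe

ls "$SPEC/benchspec/ACCEL/101.tpacf"    # only 'build' remains
rm -rf "$SPEC"                          # discard the scratch tree
```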

Windows users:

Windows users can achieve a similar effect using Windows Explorer.

I have so much disk space, I'll never use all of it:

Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:

     SPEC_ACCEL_NO_RUNDIR_DEL

In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.
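On Unix systems with a Bourne-style shell, the variable might be set like this (the documentation only requires that it be set; the value 1 is an arbitrary choice, and csh users would use setenv instead):

```shell
# Tell the tools never to touch a used run directory:
export SPEC_ACCEL_NO_RUNDIR_DEL=1

# Later runspec invocations in this shell then leave old run directories
# in place, e.g.:
#   runspec --config=mycfg opencl
```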

3.4 Commonly used options

Most users of runspec will want to become familiar with the following options.

--action action

--check_version

--config name

--device n

--flagsurl URL[,URL...]

--help

--ignore_errors

--iterations number

--loose

--output_format format

all
    implies all of the following except screen, check, and mail

cfg|config|conffile|configfile|cfgfile
    config file used for this run (e.g. ACCEL_OCL.030.ref.cfg)

check|chk|sub|subcheck|subtest|test
    Submission syntax check (automatically enabled for reportable runs). Causes many fields to be checked for acceptable formatting - e.g. hardware available "Nov-2012", not "11/12"; memory size "4 GB", not "4Gb"; and so forth. SPEC enforces consistent syntax for results submitted to its website as an aid to readers of results, and to increase the likelihood that queries find the intended results. If you select --output_format subcheck on your local system, you can find out about most formatting problems before you submit your results to SPEC. Even if you don't plan to submit your results to SPEC, the Submission Check format can help you create reports that are more complete and readable.

csv|spreadsheet
    Comma-separated values (e.g. ACCEL_OCL.030.ref.csv). If you populate spreadsheets from your runs, you probably shouldn't be doing cut/paste of text files; you'll get more accurate data by using --output_format csv.
    CSV output includes much of the information in the other reports. All run times are included, and the selected run times are listed separately. The flags used are also included.

default
    implies HTML and text

flag|flags
    Flag report (e.g. ACCEL_OCL.030.flags.ref.html). Will also be produced when formats that use it are requested (PDF, HTML).

html|xhtml|www|web
    web page (e.g. ACCEL_OCL.030.ref.html)

mail|mailto|email
    All generated reports will be sent to an address specified in the config file.

pdf|adobe
    Portable Document Format (e.g. ACCEL_OCL.030.pdf). This format is the design center for SPEC ACCEL reporting. Other formats contain less information: text lacks graphs, postscript lacks hyperlinks, and HTML is less structured. (It does not appear as part of "default" only because some systems may lack the ability to read PDF.)

postscript|ps|printer|print
    PostScript (e.g. ACCEL_OCL.030.ref.ps)

raw|rsf
    raw results, e.g. ACCEL_OCL.030.ref.rsf. Note: you will automatically get an rsf file for commands that run a test or that update a result (such as rawformat --flagsurl).

screen|scr|disp|display|terminal|term
    ASCII text output to stdout.

text|txt|ASCII|asc
    ASCII text, e.g. ACCEL_OCL.030.ref.txt.
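Several formats may be requested at once as a comma-separated list; for example (the config file name "mycfg" is a placeholder):

```shell
# One run, two report formats: CSV for spreadsheets plus the web page:
runspec --config=mycfg --output_format=csv,html opencl
```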

--platform "name"

--power --nopower

--rawformat rawfiles

--rebuild

--reportable

--tune tuning

3.5 Less commonly used options

--basepeak [bench,bench,...]

--nobuild

--comment "comment text"

--define SYMBOL[=VALUE]
--define SYMBOL:VALUE

--delay secs

--deletework

--extension name[,name...]

--fake

--fakereport

--fakereportable

--[no]feedback

--[no]graph_auto

--graph_max N

--graph_min N

--http_proxy proxy[:port]

--http_timeout N

--info_wrap_column N

--[no]keeptmp

--[no]log_timestamp

--machine name[,name...]

--make_no_clobber

--make_bundle name

--notes_wrap_column N

--preenv, --nopreenv

--review, --noreview

--setprocgroup, --nosetprocgroup

--size size[,size...]

--test

--table, --notable

--train_with WORKLOAD

--undef SYMBOL

--update

--unpack_bundle name

--use_bundle name

--username name

--verbose n

--version


4 Quick reference

(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").

-a ACTION Same as --action ACTION
--action ACTION Do: build|buildsetup|clean|clobber|configpp|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate
--basepeak Copy base results to peak (use with --rawformat)
--nobuild Do not attempt to build binaries
-c FILE Same as --config FILE
--check_version Check whether an updated version of ACCEL is available
--comment "text" Add a comment to the log and the stored config file.
--config file Set config file for runspec to use
-D Same as --rebuild
-d Same as --deletework
--debug LEVEL Same as --verbose LEVEL
--define SYMBOL[=VALUE] Define a config preprocessor macro
--delay secs Add delay before and after benchmark invocation
--deletework Force work directories to be rebuilt
--device n Select the device number or type to run the test on.
--dryrun Same as --fake
--dry-run Same as --fake
-e EXT[,EXT...] Same as --extension EXT[,EXT...]
--ext=EXT[,EXT...] Same as --extension EXT[,EXT...]
--extension ext[,ext...] Set the extensions
-F URL Same as --flagsurl URL
--fake Show what commands would be executed.
--fakereport Generate a report without compiling codes or doing a run.
--fakereportable Generate a fake report as if "--reportable" were set.
--[no]feedback Control whether builds use feedback directed optimization
--flagsurl URL Use the file at URL as a flags description file.
--graph_auto Let the tools pick minimum and maximum for the graph
--graph_min N Set the minimum for the graph
--graph_max N Set the maximum for the graph
-h Same as --help
--help Print usage message
--http_proxy Specify the proxy for internet access
--http_timeout Timeout when attempting http access
-I Same as --ignore_errors
-i SET[,SET...] Same as --size SET[,SET...]
--ignore_errors Continue with benchmark runs even if some fail
--ignoreerror Same as --ignore_errors
--info_wrap_column N Set wrap width for non-notes informational items
--infowrap N Same as --info_wrap_column N
--input SET[,SET...] Same as --size SET[,SET...]
--iterations N Run each benchmark N times
--keeptmp Keep temporary files
-l Same as --loose
--loose Do not produce a reportable result
--noloose Same as --reportable
-m NAME[,NAME...] Same as --machine NAME[,NAME...]
-M Same as --make_no_clobber
--mach NAME[,NAME...] Same as --machine NAME[,NAME...]
--machine name[,name...] Set the machine types
--make_bundle Create a package of binaries and config file
--make_no_clobber Do not delete existing object files before building.
--mockup Same as --fakereportable
-n N Same as --iterations N
-N Same as --nobuild
--notes_wrap_column N Set wrap width for notes lines
--noteswrap N Same as --notes_wrap_column N
-o FORMAT[,...] Same as --output_format FORMAT[,...]
--output_format format[,format...] Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text
--platform "name" Select the platform to run on
--[no]power Control power measurement during run
--[no]preenv Allow environment settings in config file to be applied
-R Same as --rawformat
--rawformat Format raw file
--rebuild Force a rebuild of binaries
--reportonly Same as --fakereport
--noreportable Same as --loose
--reportable Produce a reportable result
--[no]review Format results for review
-s Same as --reportable
-S SYMBOL[=VALUE] Same as --define
-S SYMBOL:VALUE Same as --define
--[no]setprocgroup [Don't] try to create all processes in one group.
--size size[,size...] Select data set(s): test|train|ref
--strict Same as --reportable
--nostrict Same as --loose
-T TUNE[,TUNE] Same as --tune TUNE[,TUNE]
--[no]table Do [not] include a detailed table of results
--test Run various perl validation tests on specperl
--tune Set the tuning levels to one of: base|peak|all
--tuning Same as --tune
--undef SYMBOL Remove any definition of this config preprocessor macro
-U NAME Same as --username NAME
--unpack_bundle Unpack a package of binaries and config file
--use_bundle Use a package of binaries and config file
--update Check www.spec.org for updates to benchmark and example flag files, and config files
--update_flags Same as --update
--username Name of user to tag as owner for run directories
-v Same as --verbose
--verbose N Set verbosity level for messages to N
-V Same as --version
--version Output lots of version information
-? Same as --help

Copyright 2014-2017 Standard Performance Evaluation Corporation

All Rights Reserved