SPEC ACCEL Frequently Asked Questions (FAQ)

This document answers frequently asked technical questions. The latest version of this document may be found at http://www.spec.org/accel/Docs/faq.html.

If you are looking for the list of known problems with SPEC ACCEL, please see http://www.spec.org/accel/Docs/errata.html.

Contents

Requirements

Require.01 How much memory do I need?

Require.02 Does this work with Windows?

Require.03 What software do I need?

Installation

Install.01 ./install.sh: /bin/sh: bad interpreter: Permission denied

Install.02 The DVD drive is on system A, but I want to install on system B. What do I do?

Install.03 Do I need to be root?

runspec

runspec.01 Can't locate strict.pm

runspec.02 specperl: bad interpreter: No such file or directory

runspec.03 Do I need to be root?

Building benchmarks

Build.01 Why is it rebuilding the benchmarks?

Setting up

Setup.01 hash doesn't match after copy

Setup.02 Copying executable failed

Running benchmarks

Run.01 Why does this benchmark suite take so long to run?

Run.02 Why was there this cryptic message from the operating system?

Run.03 What happens with the compilation of accelerator code?

Run.04 Can I run on a 32-bit system?

Run.05 My runtimes vary quite a lot. Is there a way to fix it?

Run.06 The benchmarks 112.spmv and/or 120.kmeans fail on my system. Is my OpenCL device bad?

Run.07 How do I run on a particular device?

Run.08 How do I know what devices are available?

Run.09 Benchmark 370.bt keeps failing on me with a runtime error.

Miscompares

Miscompare.01 I got a message about a miscompare

Miscompare.02 The benchmark took less than 1 second

Miscompare.03 The .mis file says "short"

Miscompare.04 My compiler is generating bad code!

Miscompare.05 The code is bad even with low optimization!

Miscompare.06 The .mis file is just a bunch of numbers.

Results reporting

Results.01 It's hard to cut/paste into my spreadsheet

Results.02 What is a "flags file"? What does Unknown Flags mean?

Results.03 Submission Check -> FAILED

Results.04 Why does the report have an (*) that says ...

Power

Power.01 What is the power component of SPEC ACCEL?

Power.02 Am I required to run power?

Power.03 How do I measure power?

Power.04 What kind of power analyzer do I need?

Power.05 Is it possible to get all of the power sample data?

Power.06 What settings are required for the power analyzer?

Power.07 Can I use autoranging?

Power.08 The runspec command caused uncertainty errors, what can I do?

Power.09 Can I use more than one power analyzer?

Temperature

Temperature.01 I got an error about it being too cold, what can I do?

Requirements

Require.01 q. How much memory do I need?

a. The system requirements may be found in the document system-requirements.html. Currently, the minimum is 2 GB of accelerator memory and 4 GB of host memory.

Require.02 q. Does this work with Windows?

a. The SPEC ACCEL suite has been tested on a number of platforms, but Windows is not one of them. Because the suite shares components with the SPEC CPU benchmarks, it is possible that it might work on Windows; however, Windows is not a supported operating system, and SPEC will not be able to support you if you buy the suite expecting it to work there.

Require.03 q. What software do I need?

a. The system requirements may be found in the document system-requirements.html. If you want to test the OpenCL suite, you will need OpenCL on your system. If you want to test the OpenACC suite, you will need a compiler that accepts OpenACC. If you want to test the OpenMP suite, you will need a compiler that accepts OpenMP 4.5.

Installation

Install.01 q. Why am I getting a message such as "./install.sh: /bin/sh: bad interpreter: Permission denied"?

a. If you are installing from a DVD you created, check to be sure that your operating system allows programs to be executed from the DVD. For example, some Linux man pages for mount suggest setting the properties for the CD or DVD drive in /etc/fstab to "/dev/cdrom /cd iso9660 ro,user,noauto,unhide", which is notably missing the property "exec". Add exec to that list in /etc/fstab, or add it to your mount command. Notice that the sample Linux mount command in install-guide-unix.html does include exec.
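For example, the corresponding /etc/fstab entry with exec added might look like this (the device name and mount point are illustrative; adjust them for your system):

```
/dev/cdrom  /cd  iso9660  ro,user,noauto,unhide,exec
```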

Perhaps install.sh lacks permission to run because you tried to copy all the files from the DVD, in order to move them to another system. If so, please don't do that. There's an easier way. See the next question.

Install.02 q. The DVD drive is on system A, but I want to install on system B. What do I do?

a. The installation guides have an appendix just for you, which describes installing from the network or installing from a tarfile. See Appendix 1 in install-guide-unix.html or install-guide-windows.html.

Install.03 q. Do I need to be root?

Occasionally, users of Unix systems have asked whether it is necessary to elevate privileges, or to become 'root', prior to installing or running SPEC ACCEL.

a. SPEC recommends (*) that you do not become root, because: (1) To the best of SPEC's knowledge, no component of SPEC ACCEL needs to modify system directories, nor does any component need to call privileged system interfaces. (2) Therefore, if you find that it appears that there is some reason why you need to be root, the cause is likely to be outside the SPEC toolset - for example, disk protections, or quota limits. (3) For safe benchmarking, it is better to avoid being root, for the same reason that it is a good idea to wear seat belts in a car: accidents happen, humans make mistakes. For example, if you accidentally type:

kill 1

when you meant to say:

kill %1

then you will be very grateful if you are not privileged at that moment.

(*) This is only a recommendation, not a requirement nor a rule.

runspec

runspec.01 q. When I say runspec, why does it say Can't locate strict.pm? For example:

Can't locate strict.pm in @INC (@INC contains: .) at runspec line 28.
BEGIN failed--compilation aborted at runspec line 28.

a. You can't use runspec if its path is not set correctly. On Unix, Linux, or Mac OS X, you should source shrc or cshrc, as described in runspec.html section 2.4. For Windows, please edit shrc.bat and make the adjustments described in the comments. Then, execute that file, as described in runspec.html section 2.5.

runspec.02 q. Why am I getting messages about specperl: bad interpreter? For example:

bash: /specaccel.new/bin/runspec: /specaccel/bin/specperl: bad interpreter: No such file or directory

a. Did you move the directory where runspec was installed? If so, you can probably put everything to rights, just by going to the new top of the directory tree and typing "bin/relocate".

For example, the following unwise sequence of events is repaired after completion of the final line.

Top of SPEC benchmark tree is '/specaccel'
Everything looks okay.  cd to /specaccel, source the shrc file and have at it!
$ cd /specaccel
$ . ./shrc
$ cd ..
$ mv specaccel specaccel.new
$ runspec -h | head
bash: runspec: command not found
$ cd specaccel.new/
$ . ./shrc
$ runspec --help | head
bash: /specaccel.new/bin/runspec: /specaccel/specperl: bad interpreter: No such file or directory
$ bin/relocate

runspec.03 q. Do I need to be root?

Regarding the root account, the answer for runspec is the same as the answer for installation question #3, above.

Building benchmarks

Build.01 q. Why is it rebuilding the benchmarks?

a. You changed something, and the tools thought that it might affect the generated binaries. See the section about automatic rebuilds in the config.html document.

Setting up

Setup.01 q. What does "hash doesn't match after copy" mean?

I got this strange, difficult to reproduce message:
    hash doesn't match after copy ... in copy_file (1 try total)! Sleeping 2 seconds...
followed by several more tries and sleeps. Why?

a. During benchmark setup, certain files are checked against expected checksums. If a file does not match what is expected, you might see this message.

If the condition persists, try turning up the verbosity level. Look at the files with other tools: do they exist? Can you see differences? Try a different disk and controller. And check for the specific instance of this message described in the next question.
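For example, a higher verbosity level can be requested from the runspec command line (the config file name is illustrative; see runspec.html for the exact option syntax):

```
runspec --config=mycfg --verbose=99 --size test 350.md
```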

Setup.02 q. Why does it say ERROR: Copying executable failed?

I got this strange, difficult to reproduce message:
    ERROR: Copying executable to run directory FAILED
or
    ERROR: Copying executable from build dir to exe dir FAILED!
along with the bit about hashes not matching from the previous question. Why?

a. Perhaps you have attempted to build the same benchmark twice in two simultaneous jobs.

On most operating systems, the SPEC tools don't mind concurrent jobs. They use your operating system's locking facilities to write the correct outputs to the correct files, even if you fire off many runspec commands at the same time.

But there's one case of simultaneous building that is difficult for the tools to defend against: please don't try to build the very same executable from two different jobs at the same time. Notice that if you say something like this:

$ tail myconfig.cfg
350.md=peak:
basepeak=yes
$ runspec --config myconfig --size test --tune base 350.md &
$ runspec --config myconfig --size test --tune peak 350.md &

then you are trying to build the same benchmark twice in two different jobs, because of the presence of basepeak=yes. Please don't try to do that.

Running benchmarks

Run.01 q. Why does this benchmark suite take so long to run?

a. Please understand that the suite has been designed to do many things and to remain useful for at least several years. Benchmarks that seem slow today probably will not seem slow at the end of the suite's life. In addition, benchmarks may be compute-intensive or memory-intensive. For memory-intensive benchmarks in particular, please check with your compiler vendor whether any memory-policy-related flags need to be turned on to maximize performance.

Run.02 q. Why was there this cryptic message from the operating system?

a. If you are getting cryptic, hard-to-reproduce, unpredictable error messages from your system, one possible reason may be that the benchmarks consume substantial resources, of several types. If an OS runs out of some resource - for example, pagefile space, or process heap space - it might not give you a very clear message. Instead, you might see only a very brief message, or a dialog box with a hex error code in it. Please see the hints and suggestions in the section about resources in system-requirements.html.

Run.03 q. Is the compilation time of the accelerator code included in the total execution time?

a. Yes, the time needed for the compilation of accelerator code is included in the total execution time.

Run.04 q. Can I run on a 32-bit system?

a. The benchmarks have been tested extensively as 64-bit binaries on a range of systems, but not as 32-bit binaries. You are, however, welcome to run them as 32-bit binaries, subject to the restrictions in sections 2.2.3, 2.2.4 and 2.3.6 of the run rules. SPEC does not guarantee any particular memory size for the benchmarks, nor that they will necessarily fit on all systems that are described as 32-bit. The SPEC HPG committee is unlikely to accommodate any source-code changes enabling a benchmark to run as a 32-bit binary.

Run.05 q. My runtimes vary quite a lot. Is there a way to fix it?

a. This usually happens on multi-socket systems when the host process runs on a different socket from the accelerator. Try pinning runspec to the appropriate socket using the NUMA tool of your choice (for example, numactl on Linux).

Run.06 q. The benchmarks 112.spmv and/or 120.kmeans fail on my system. Is my OpenCL device bad?

a. These two benchmarks are known to require a large stack size. Check the stack limit in your environment, for instance with the ulimit -s command. If it is not unlimited, raise it with ulimit -s unlimited (or the largest allowed value) before running runspec.
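For example, from an sh-compatible shell (a sketch; whether "unlimited" is accepted depends on the hard limit set on your system):

```shell
# Show the current soft stack limit for this shell (a number of
# kilobytes, or "unlimited")
ulimit -s
# Try to raise it before invoking runspec; if "unlimited" is refused
# by the hard limit, leave the limit as it is and pick the largest
# value your administrator allows
ulimit -s unlimited 2>/dev/null || true
ulimit -s
```

The change applies only to the current shell and its children, so run runspec from the same shell session.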

Run.07 q. How do I run on a particular device?

a. You can select which device you want to run on either in your configuration file or from the runspec command line. The benchmark tools take this information and pass it on to the appropriate methods for OpenCL and OpenACC. Note that what the OpenCL or OpenACC implementation does with this information is implementation dependent.

To set this information from runspec, see the documentation for --device and --platform. To set this information in the configuration file, see the documentation for device and platform.
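For example (a sketch; the platform string and device number are illustrative and must match what your system actually reports, e.g. via the 001.systest benchmark):

```
# On the runspec command line:
runspec --config=mycfg --platform "NVIDIA CUDA" --device 0 ...

# Or equivalently in the config file:
platform = NVIDIA CUDA
device   = 0
```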

Run.08 q. How do I know what devices are available?

a. For OpenACC, check with the tools provided by your OpenACC compiler.

For OpenCL, first check what tools are provided with your OpenCL environment. You can also use the 001.systest benchmark, which uses OpenCL to enumerate the platforms and devices it can see. Below is an example showing how to check what devices the benchmark is able to discover; the devices it found are listed at the end.

$ runspec -I -i test -c Example-opencl-nvidia-simple.cfg -n 1 -T base 001

runspec v2380 - Copyright 1999-2014 Standard Performance Evaluation Corporation
Using 'linux-suse10-amd64' tools
Reading MANIFEST... 13900 files
Loading runspec modules................
Locating benchmarks...found 36 benchmarks in 11 benchsets.

=============================================================================
Warning:  You appear to be using one of the config files that is supplied
with the SPEC ACCEL distribution.  This can be a fine way to get started.

Each config file was developed for a specific combination of compiler / OS /
hardware.  If your platform uses different versions of the software or
hardware listed, or operates in a different mode (e.g. 32- vs. 64-bit mode),
there is the possibility that this configuration file may not work as-is. If
problems arise please see the technical support file at

  http://www.spec.org/accel/Docs/techsupport.html

A more recent config file for your platform may be among result submissions at

  http://www.spec.org/accel/ 

Generally, issues with compilation should be directed to the compiler vendor.
You can find hints about how to debug problems by looking at the section on
"Troubleshooting" in
  http://www.spec.org/accel/Docs/config.html

This warning will go away if you rename your config file to something other
than one of the names of the presupplied config files.

==================== The run will continue in 30 seconds ====================
Reading config file '/export/bmk/accel/config/Example-opencl-nvidia-simple.cfg'
Running "specperl /export/bmk/accel/Docs/sysinfo" to gather system information.
Benchmarks selected: 001.systest
Compiling Binaries
  Building 001.systest base compsys default: (build_base_compsys.0000) [Tue Jan 28 08:57:01 2014]

Build successes: 001.systest(base)

Setting Up Run Directories
  Setting up 001.systest test base compsys default: created (run_base_test_compsys.0000)
Running Benchmarks
  Running 001.systest test base compsys default [Tue Jan 28 08:57:01 2014]
Success: 1x001.systest
Producing Raw Reports
mach: default
  ext: compsys
    size: test
      set: opencl
      set: openacc

The log for this run is in /export/bmk/accel/result/ACCEL.002.log

runspec finished at Tue Jan 28 08:57:03 2014; 34 total seconds elapsed

$ cat /export/bmk/accel/benchspec/ACCEL/001.systest/run/run_base_test_compsys.0000/systest.out
********************************************************
DETECTED OPENCL PLATFORMS AND DEVICES:
--------------------------------------------------------
PLATFORM = NVIDIA CUDA, OpenCL 1.1 CUDA 6.0.1 (SELECTED)
  + 0: Quadro FX 5800, v 331.20 (SELECTED)
********************************************************
Kernel    : 0.000036
Copy      : 0.000263
Driver    : 0.000036
Compute   : 0.734623
CPU/Kernel Overlap: 0.000047
Timer Wall Time: 0.735231

Run.09 q. Benchmark 370.bt keeps failing on me with a runtime error.

a. On some Linux distributions, 370.bt may require more stack space than the default provides. Please increase the stack size or set it to unlimited (as described for Run.06 above) and try again.

Miscompares

Miscompare.01 I got a message about a miscompare. The tools said something like:

Running Benchmarks
  Running 350.md ref base 12.3 default 
/spec/accel/bin/specinvoke -d /spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000 
-e speccmds.err -o speccmds.stdout -f speccmds.cmd -C -q
/spec/accel/bin/specinvoke -E -d /spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000 
-c 1 -e compare.err -o compare.stdout -f compare.cmd -k

*** Miscompare of md.log.01228060000; for details see
    /spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000/md.log.01228060000.mis
Error: 1x350.md
Producing Raw Reports
mach: default
  ext: 12.3
    size: ref
      set: openacc

Why did it say that? What's the problem?

a. We don't know. Many things can cause a benchmark to miscompare, so we really can't tell you exactly what's wrong based only on the fact that a miscompare occurred.

But don't panic.

Please notice, if you read the message carefully, that there's a suggestion of a very specific file to look in. It may be a little hard to read if you have a narrow terminal window, as in the example above, but if you look carefully you'll see that it says:

*** Miscompare of md.log.01228060000; for details see
    /spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000/md.log.01228060000.mis

Now's the time to look inside that file. Simply doing so may provide a clue as to the nature of your problem.

On Unix systems, change your current directory to the run directory using the path mentioned in the message, for example:

cd /spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000

On Microsoft Windows systems, remember to reverse the slashes in your cd command.

Then, have a look at the file that was mentioned, using your favorite text editor. If the file does not exist, then check your paths, and check to see whether you have run out of disk space.
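For example, on a Unix system (the path is the one from the message above; substitute your own run directory and file names):

```shell
# Run-directory path taken from the miscompare message
RUNDIR=/spec/accel/benchspec/ACCEL/350.md/run/run_base_ref_12.3.0000
if [ -d "$RUNDIR" ]; then
    cd "$RUNDIR"
    ls *.mis *.err                  # comparison details and any runtime errors
    cat md.log.01228060000.mis      # the file named in the message
else
    echo "No such run directory: $RUNDIR -- check the path and your disk space"
fi
```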

Miscompare.02 The benchmark ran, but it took less than 1 second and there was a miscompare. Help!

a. If the benchmark took less than 1 second to execute, it didn't execute properly. There should be one or more .err files in the run directory which will contain some clues about why the benchmark failed to run. Common causes include libraries that were used for compilation but not available during the run, executables that crash with access violations or other exceptions, and permissions problems. See also the suggestions in the next question.

Miscompare.03 I looked in the .mis file and it said something like:

   'md.log.01228060000' short

What does "short" mean?

a. If a line like the above is the only line in the .mis file, it means that the benchmark failed to produce any output. The corresponding error file (look for files with .err extensions in the run directory) may have a clue; in the example that produced this message, it contained "Segmentation Fault - core dumped". For problems like this, the first things to examine are the portability flags used to build the benchmark.

Have a look at the sample config files in $SPEC/config or, on Windows, %SPEC%\config. If you constructed your own config file based on one of those, maybe you picked a starting point that was not really appropriate (e.g. you picked a 32-bit config file but are using 64-bit compilation options). Have a look at other samples in that directory. Check at www.spec.org/accel to see if there have been any result submissions using the platform that you are trying to test. If so, compare your portability flags to the ones in the config files for those results.

If the portability flags are okay, your compiler may be generating bad code.

Miscompare.04 My compiler is generating bad code! Help!

a. Try reducing the optimization that the compiler is doing. Instructions for doing this will vary from compiler to compiler, so it's best to ask your compiler vendor for advice if you can't figure out how to do it for yourself.

Miscompare.05 My compiler is generating bad code with low or no optimization! Help!

a. If you're using a beta compiler, try dropping down to the last released version, or get a newer copy of the beta. If you're using a version of GCC that shipped with your OS, you may want to try getting a "vanilla" (no patches) version and building it yourself.

Miscompare.06 I looked in the .mis file and it was just full of a bunch of numbers.

a. In this case, the benchmark is probably running, but it's not generating answers that are within the tolerances set. See the suggestions for how to deal with compilers that generate bad code in the previous two questions. In particular, you might see if there is a way to encourage your compiler to be careful about optimization of floating point expressions.

Results reporting

Results.01 q. It's hard to cut/paste into my spreadsheet

a. Please don't. With SPEC ACCEL, there's a handy .csv-format file right next to the other result formats on the index page. Alternatively, you can edit the URL at the top of your browser, changing .pdf (or whichever format) to .csv.

Results.02 q. What is a "flags file"? What does the message Unknown Flags mean in a report?

a. SPEC ACCEL provides benchmarks in source code form, which are compiled under control of SPEC's toolset. Compilation flags (such as -O5 or -unroll) are detected and reported by the tools with the help of flag description files. Therefore, to do a complete run, you need to (1) point to an existing flags file (easy), (2) modify an existing flags file (slightly harder), or (3) write one from scratch (definitely harder).

  1. Find an existing flags file by noticing the address of the .xml file at the bottom of any result published at www.spec.org/accel. You can use the --flagsurl switch to point your own runspec command at that file, or you can reference it from your config file with the flagsurl option. For example:
       runspec --config=myconfig --flagsurl=http://www.spec.org/accel/flags/sun-studio.xml int
  2. You can download the .xml flags file referenced at the bottom of any published result at www.spec.org/accel. Warning: clicking on the .xml link may just confuse your web browser; it's probably better to use whatever methods your browser provides to download a file without viewing it - for example, control-click in Safari, right click in Internet Explorer. Then, look at it with a text editor.
  3. You can write your own flags file by following the instructions in flag-description.html.

Notice that you do not need to re-run your tests if the only problem was Unknown Flags: you can just use runspec --rawformat --flagsurl.
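For example, to re-format an existing raw file against a flags file (the flags-file URL and raw-file name here are illustrative):

```
runspec --rawformat --flagsurl=http://www.spec.org/accel/flags/sun-studio.xml result/ACCEL.002.rsf
```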

Results.03 q. What's all this about Submission Check -> FAILED littering my log file and my screen?

At the end of my run, why did it print something like this?

format: Submission Check -> FAILED.  Found the following errors:
        - The "hw_memory" field is invalid.
            It must contain leading digits, followed by a space,
            and a standard unit abbreviation.  Acceptable
            abbreviations are KB, MB, GB, and TB.
           The current value is "20480 Megabytes".

a. A complete, reportable result has various information filled in for readers. These fields are listed in the table of contents for config.html. If you wish to submit a result to SPEC for publication at www.spec.org/accel, these fields not only have to be filled in; they also have to follow certain formats. Although you are not required to submit your result to SPEC, for convenience the tools try to tell you as much as they can about how the result should be improved if you were to submit it. In the above example, the tools would stop complaining if the field hw_memory said something like "20 GB" instead of "20480 Megabytes".
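For example, the offending field in the config file would become (the value is taken from the message above):

```
hw_memory = 20 GB
```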

Notice that you can repair minor formatting problems such as these without doing a re-run of your tests. You are allowed to edit the rawfile, as described in utility.html.

Results.04 q. Why does the report have an (*) that says ...

The report has a line that says

(*) Indicates compiler flags found in non-compiler variables

What does this mean, and how do I make it go away?

a. There are potentially a number of errors that show up like this. They usually mean that you have a conflict of some kind between the flags file specifications and how you ran. For example, if you specify a compiler flag that is listed as a portability flag but put it in a config file variable meant for optimization, the reporter will notice this and warn you about a potential problem. Unfortunately, many problems of this kind require a rerun to make everything report cleanly. Sometimes you can get lucky and fix your flags file to make the error go away, so the first thing to examine is your flags file. If that isn't the issue and the problem is in your config file, you will need to rerun to make the error go away.

Power

Power.01 q. What is the power component of SPEC ACCEL?

a. The power measurement component is an optional feature that allows the power consumed during a benchmark run to be reported.

Power.02 q. Am I required to run power?

a. The measurement of power is optional. It requires a methodology that SPEC has been developing since the SPECpower_ssj2008 benchmark.

Power.03 q. How do I measure power?

a. Please see the run rules as a starting point for what is required to run and measure power. In order to create a submittable run, you will need an approved power analyzer that has been calibrated within the past year by an appropriate testing organization. You will have to connect your system to the power analyzer, use the provided tools, and set up your configuration file to tell it how to use power. Another place to look for help in setting up your configuration file is the published SPEC ACCEL results at www.spec.org.

Power.04 q. What kind of power analyzer do I need?

a. Please see the list of accepted power analyzers. The list contains those power analyzers that can be used for making reportable runs.

Power.05 q. Is it possible to retrieve all of the sample data collected from the power analyzer?

a. Power data is sampled at 1-second intervals. This data is stored in the raw file (.rsf) that is created after a run finishes. As an example, to extract the data, you would run

The output will go to the screen.

The general format of the command is

Where the section will look something like

where <benchmark> is the full benchmark name, <iter> is the iteration number (3 digits).

Power.06 q. What settings are required for the power analyzer?

a. For instructions on how to set up the power analyzers and run the SPEC PTDaemon, please consult the SPECpower Measurement Setup Guide, which you can find in the PTDaemon tree.

Power.07 q. Can I use autoranging?

a. In order to minimize measurement errors, the autoranging feature should not be used. If the measurement range is too great for your power analyzer for a benchmark run, you can set the measurement range on a per-benchmark basis. Please check the config.html document for more on how to set the per-benchmark controls.

Power.08 q. The runspec command caused uncertainty errors, what can I do?

a. There could be lots of answers. Are the values being returned during the run within the measurement capabilities that have been set for your power analyzer? Maybe you need to set the range on a per benchmark basis.

If that isn't the problem, check whether PTDaemon is reporting any errors. If you did not enable PTDaemon logging, consider restarting it with the -l file1 and -d file2 options to capture more information. These may lead you to the answer.

Power.09 q. Can I use more than one power analyzer?

a. Yes, it is possible to use more than one power analyzer. Just provide the information in the configuration file to tell the benchmarks which connections to check for the power analyzers.

Temperature

Temperature.01 q. I got an error about it being too cold, what can I do?

ERROR: Minimum temperature during the run (19.5 degC) is less than the minimum allowed (20 degC)

a. The benchmark requires that the minimum inflow temperature be 20 degC or higher. This value was chosen to prevent artificially good results obtained by influencing the performance of the machine, keeping it colder than normal. To get a submission-quality result, you will need to find a warmer spot in your data center for your runs.

 


Copyright 2014-2017 Standard Performance Evaluation Corporation

All Rights Reserved