SPEC ACCEL: Read Me First

(To check for possible updates to this document, please see http://www.spec.org/accel/Docs/ )

Introduction

This document provides background information about the SPEC ACCEL benchmark suite. SPEC hopes that this material will help you understand what the benchmark suite can, and cannot, provide; and that it will help you make efficient use of the product.

Overall, SPEC designed SPEC ACCEL to provide a comparative measure of the performance of hardware Accelerator devices and their supporting software tool chains, using computationally intensive parallel applications. The suite comprises scientific applications used in High Performance Computing (HPC). The applications have been ported to several Accelerator programming models, each of which is released as a separate benchmark component. The product consists of source code benchmarks developed from real user applications.

This document is organized as a series of questions and answers.

Background

Q1. What is SPEC?

Q2. What is a benchmark?

Q3. Should I benchmark my own application?

Q4. If not my own application, then what?

Scope

Q5. What does SPEC ACCEL measure?

Q6. Why use SPEC ACCEL?

Q7. What are the limitations of SPEC ACCEL?

Overview of usage

Q8. What is included in the SPEC ACCEL package?

Q9. What does the user of the SPEC ACCEL suite have to provide?

Q10. What are the basic steps in running the benchmarks?

Q11. What source code is provided? What exactly makes up these suites?

Metrics

Q12. Some of the benchmark names sound familiar; are these comparable to other programs?

Q13. What metrics can be measured?

Q14. What is the difference between a "base" metric and a "peak" metric?

Q15. What is the power metric?

Q16. Which SPEC ACCEL metric should be used to compare performance?

Benchmark selection

Q17. What criteria were used to select the benchmarks?

Miscellaneous

Q18. Why does SPEC use a reference machine? What machine is used for SPEC ACCEL?

Q19. How long does it take to run the SPEC ACCEL benchmark suites?

Q20. What if the tools cannot be run or built on a system? Can the benchmarks be run manually?

Q21. Where are SPEC ACCEL results available?

Q22. Can SPEC ACCEL results be published outside of the SPEC web site? Do the rules still apply?

Q23. How do I contact SPEC for more information or for technical support?

Q24. Now that I have read this document, what should I do next?

Note: links to SPEC ACCEL documents on this web page assume that you are reading the page from a directory that also contains the other SPEC ACCEL documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents at http://www.spec.org/accel/Docs/

Q1. What is SPEC?

SPEC is the Standard Performance Evaluation Corporation. SPEC is a non-profit organization whose members include computer hardware vendors, software companies, universities, research organizations, systems integrators, publishers and consultants. SPEC's goal is to establish, maintain and endorse a standardized set of relevant benchmarks for computer systems. Although no one set of tests can fully characterize overall system performance, SPEC believes that the user community benefits from objective tests which can serve as a common reference point.

Q2. What is a benchmark?

A benchmark is "a standard of measurement or evaluation" (Webster’s II Dictionary). A computer benchmark is typically a computer program that performs a strictly defined set of operations - a workload - and returns some form of result - a metric - describing how the tested computer performed. Computer benchmark metrics usually measure speed: how fast was the workload completed; or throughput: how many workload units per unit time were completed. Running the same computer benchmark on multiple computers allows a comparison to be made.

Q3. Should I benchmark my own application?

Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often impractical to get a wide base of reliable, repeatable and comparable measurements for different systems using your own application with your own workload. Problems might include generation of a good test case, confidentiality concerns, difficulty ensuring comparable conditions, time, money, or other constraints.

Q4. If not my own application, then what?

You may wish to consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark will be portable, and may already have been run on the platforms that you are interested in. However, before you consider the results you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the benchmarks similar to the kinds of applications you run? Do the workloads have similar characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.

Note: A standardized benchmark can serve as reference point. Nevertheless, when you are doing vendor or product selection, SPEC does not claim that any standardized benchmark can replace benchmarking your own actual application.

Q5. What does SPEC ACCEL measure?

SPEC ACCEL focuses on the performance of highly parallel, compute-intensive applications using hardware acceleration; the accelerating device is referred to here as an "Accelerator". The software APIs used in SPEC ACCEL model the Accelerator with the assumption that there is a CPU (host) which runs the main program, copies the data needed by the accelerated computation to and from discrete memory on the Accelerator, and launches the accelerated routines. These benchmarks therefore emphasize the performance of the Accelerator itself and of the data movement between the host and the Accelerator.

Note that some Accelerators may share the same memory as the host CPU. In these cases, the programming model still would assume separate memory and it would be up to the underlying tools, compiler and driver to optimize or reduce unnecessary data movement.
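As a concrete illustration of the host/Accelerator pattern just described, the following is a minimal sketch, not taken from the suite, using the OpenCL C API: the host allocates discrete Accelerator memory, copies input data in, launches a kernel, and copies the results back. Function and variable names are illustrative, and error checking is omitted for brevity.

    #include <CL/cl.h>

    /* Host-side driver for one accelerated routine; the kernel itself would
       be compiled separately from OpenCL C source. */
    void run_on_accelerator(cl_context ctx, cl_command_queue q, cl_kernel kernel,
                            const float *in, float *out, size_t n)
    {
        /* Allocate discrete memory on the Accelerator. */
        cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), NULL, NULL);
        cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, NULL);

        /* Copy the input data from the host to the Accelerator. */
        clEnqueueWriteBuffer(q, d_in, CL_TRUE, 0, n * sizeof(float), in, 0, NULL, NULL);

        /* Launch the accelerated routine. */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_in);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_out);
        clEnqueueNDRangeKernel(q, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

        /* Copy the results back from the Accelerator to the host. */
        clEnqueueReadBuffer(q, d_out, CL_TRUE, 0, n * sizeof(float), out, 0, NULL, NULL);

        clReleaseMemObject(d_in);
        clReleaseMemObject(d_out);
    }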

SPEC ACCEL contains suites that focus on parallel computing performance using the OpenCL 1.1, OpenMP 4.5, and OpenACC 1.0 standards. The product may be extended in the future to include other standards for accelerators.
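The same offload pattern looks quite different in the directive-based models. The following minimal sketch, with illustrative function and variable names, shows one simple loop expressed in OpenACC and in OpenMP 4.5; here the compiler and runtime generate the data movement and kernel launch described above.

    /* OpenACC: copy a[] to the Accelerator, run the loop there, copy b[] back. */
    void scale_openacc(int n, const float *a, float *b)
    {
        #pragma acc parallel loop copyin(a[0:n]) copyout(b[0:n])
        for (int i = 0; i < n; i++)
            b[i] = 2.0f * a[i];
    }

    /* OpenMP 4.5: the equivalent target offload construct. */
    void scale_openmp(int n, const float *a, float *b)
    {
        #pragma omp target teams distribute parallel for map(to: a[0:n]) map(from: b[0:n])
        for (int i = 0; i < n; i++)
            b[i] = 2.0f * a[i];
    }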

The suite can be used to measure the parallel compute performance of the Accelerator platform as a whole: the device itself, together with the compilers, runtime libraries, and drivers that support it.

SPEC ACCEL is not intended to stress other computer components such as networking, the operating system, graphics, or the I/O system. Note that there are many other SPEC benchmarks, including benchmarks that specifically focus on graphics, distributed Java computing, webservers, and network file systems.

Q6. Why use SPEC ACCEL?

SPEC ACCEL provides a comparative measure of parallel compute performance between Accelerator platforms using OpenCL, OpenACC, and OpenMP. If this matches the type of workloads you are interested in, SPEC ACCEL provides a good reference point.

Other advantages to using SPEC ACCEL include:

Q7. What are the limitations of SPEC ACCEL?

As described above, the ideal benchmark for vendor or product selection would be your own workload on your own application. Please bear in mind that no standardized benchmark can provide a perfect model of the realities of your particular system and user community.

Some workloads require 2 GB of memory on the device and use single data objects of 1 GB. Hence, some otherwise valid OpenCL and OpenACC implementations may not be able to run SPEC ACCEL if the largest buffer allocation they support is less than 1 GB.
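One way to check these limits ahead of time is to query the OpenCL implementation directly. The following is a minimal sketch using the standard clGetDeviceInfo call; the 1 GB and 2 GB thresholds come from the requirements above, and the function name is illustrative.

    #include <CL/cl.h>

    /* Returns nonzero if the device advertises enough memory for the suite:
       at least a 1 GB single allocation and 2 GB of total device memory. */
    int device_meets_accel_minimums(cl_device_id dev)
    {
        cl_ulong max_alloc = 0, global_mem = 0;
        clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, NULL);
        clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(global_mem), &global_mem, NULL);
        return max_alloc >= (1ULL << 30) && global_mem >= (2ULL << 30);
    }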

Q8. What is included in the SPEC ACCEL package?

SPEC provides the following on the SPEC ACCEL distribution:

Q9. What does the user of the SPEC ACCEL suite have to provide?

Briefly, you need a Unix or Linux system. Mac OS X or Microsoft Windows may also be used but have not been thoroughly tested. You will also need compilers; 8 GB of free disk space; and a minimum of 4 GB of free host memory. The Accelerator needs at least 2 GB of memory and the OpenCL, OpenMP, and OpenACC implementations must have the ability to allocate a 1 GB data object. Also for OpenACC and OpenMP, the Accelerator will need to support both single and double precision floating point operations.

Q10. What are the basic steps in running the benchmarks?

Installation and use are covered in detail in the SPEC ACCEL User Documentation. The basic steps are to install SPEC ACCEL on a supported system, create a configuration file that describes your compilers and options, use the SPEC tools to build, run, and validate the benchmarks, and review the results; an example invocation is sketched below.

If you wish to generate results suitable for quoting in public, you will need to carefully study and adhere to the run rules.
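For orientation only, a first run with the SPEC-provided tools typically looks something like the following; the configuration file name and suite name here are placeholders, and the User Documentation is the authoritative reference for the exact invocation.

    runspec --config=mysystem.cfg --action=validate openacc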

Q11. What source code is provided? What exactly makes up these suites?

SPEC ACCEL is based on highly parallel, compute-intensive applications provided as source code. SPEC ACCEL is divided into multiple suites, each of which comprises codes using a single parallel programming standard API.

The SPEC ACCEL_OCL suite contains 19 OpenCL-enabled benchmarks.

Benchmark       Language   Application domain
101.tpacf       C++        Astrophysics
103.stencil     C++        Thermodynamics
104.lbm         C++        Fluid Dynamics
110.fft         C          Signal processing
112.spmv        C++        Sparse Linear Algebra
114.mriq        C          Medicine
116.histo       C          Silicon Wafer Verification
117.bfs         C          Electronic Design Automation, Graph Traversals
118.cutcp       C          Molecular Dynamics
120.kmeans      C++        Dense Linear Algebra, Data Mining
121.lavamd      C          N-Body, Molecular Dynamics
122.cfd         C++        Unstructured Grid, Fluid Dynamics
123.nw          C++        Dynamic Programming, Bioinformatics
124.hotspot     C          Structured Grid, Physics Simulation
125.lud         C++        Dense Linear Algebra, Linear Algebra
126.ge          C++        Dense Linear Algebra, Linear Algebra
127.srad        C          Structured Grid, Image Processing
128.heartwall   C          Structured Grid, Medical Imaging
140.bplustree   C          Graph Traversal, Search

The SPEC ACCEL_ACC suite contains 15 OpenACC-enabled benchmarks.

Benchmark       Language     Application domain
303.ostencil    C            Thermodynamics
304.olbm        C            Computational Fluid Dynamics, Lattice Boltzmann Method
314.omriq       C            Medicine
350.md          Fortran      Molecular Dynamics
351.palm        Fortran      Large-eddy simulation, atmospheric turbulence
352.ep          C            Embarrassingly Parallel
353.clvrleaf    Fortran, C   Explicit Hydrodynamics
354.cg          C            Conjugate Gradient
355.seismic     Fortran      Seismic Wave Modeling
356.sp          Fortran      Scalar Penta-diagonal solver
357.csp         C            Scalar Penta-diagonal solver
359.miniGhost   C, Fortran   Finite difference
360.ilbdc       Fortran      Fluid Mechanics
363.swim        Fortran      Weather
370.bt          C            Block Tridiagonal Solver for 3D PDE

The SPEC ACCEL_OMP suite contains 15 OpenMP-enabled benchmarks.

Benchmark       Language     Application domain
503.postencil   C            Thermodynamics
504.polbm       C            Computational Fluid Dynamics, Lattice Boltzmann Method
514.pomriq      C            Medicine
550.pmd         Fortran      Molecular Dynamics
551.ppalm       Fortran      Large-eddy simulation, atmospheric turbulence
552.pep         C            Embarrassingly Parallel
553.pclvrleaf   Fortran, C   Explicit Hydrodynamics
554.pcg         C            Conjugate Gradient
555.pseismic    Fortran      Seismic Wave Modeling
556.psp         Fortran      Scalar Penta-diagonal solver
557.pcsp        C            Scalar Penta-diagonal solver
559.pmniGhost   C, Fortran   Finite difference
560.pilbdc      Fortran      Fluid Mechanics
563.pswim       Fortran      Weather
570.pbt         C            Block Tridiagonal Solver for 3D PDE

Descriptions of the benchmarks, with reference to papers, web sites, and so forth, can be found in the individual benchmark descriptions (click the links above). Some of the benchmarks also provide additional details, such as documentation from the original program, in the nnn.benchmark/Docs directories in the SPEC benchmark tree.

The numbers used as part of the benchmark names provide an identifier to help distinguish programs from one another.

Q12. Some of the benchmark names sound familiar; are these comparable to other programs?

Many of the SPEC benchmarks have been derived from publicly available application programs. The individual benchmarks in this suite may be similar, but are NOT identical to benchmarks or programs with similar names which may be available from sources other than SPEC. In particular, SPEC has invested significant effort to improve portability and to minimize hardware dependencies, to avoid unfairly favoring one hardware platform over another. For this reason, the application programs in this distribution may perform differently from commercially available versions of the same application.

Therefore, it is not valid to compare SPEC ACCEL benchmark results with anything other than other SPEC ACCEL benchmark results.

Q13. What metrics can be measured?

After the benchmarks are run on the system under test (SUT), a ratio for each of them is calculated using the run time on the SUT and a SPEC-determined reference time. From these ratios, the following metrics are calculated:

SPEC ACCEL (for highly parallel compute intensive performance comparisons):

In all cases, a higher score means "better performance" on the given workload.
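To make the arithmetic concrete, the following minimal sketch, using made-up times rather than actual reference values, shows how per-benchmark ratios and an overall geometric-mean score are formed.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative times in seconds; real reference times ship with the suite. */
        const double ref[] = { 120.0, 200.0, 90.0 };   /* reference machine */
        const double sut[] = {  60.0,  50.0, 45.0 };   /* system under test */
        const int n = 3;

        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(ref[i] / sut[i]);   /* ratio > 1 means faster than reference */

        /* The overall score is the geometric mean of the per-benchmark ratios. */
        printf("overall metric: %.2f\n", exp(log_sum / n));
        return 0;
    }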

Q14. What is the difference between a "base" metric and a "peak" metric?

In order to provide comparisons across different computer hardware, SPEC provides the benchmarks as source code. Thus, in order to run the benchmarks, they must be compiled. There is agreement that the benchmarks should be compiled the way users compile programs. But how do users compile programs?

In addition to the above, a wide range of other types of usage models could also be imagined, ranging in a continuum from -Odebug at the low end, to inserting directives and/or re-writing the source code at the high end. Which points on this continuum should SPEC ACCEL allow?

SPEC recognizes that any point chosen from that continuum might seem arbitrary to those whose interests lie at a different point. Nevertheless, choices must be made.

For SPEC ACCEL, SPEC has chosen to allow two types of compilation: the base metrics, which require that all benchmarks in a suite be compiled with the same options in the same order; and the peak metrics, which allow different compiler options to be used for each benchmark.

Note that options allowed under the base metric rules are a subset of those allowed under the peak metric rules. A legal base result is also legal under the peak rules but a legal peak result is NOT necessarily legal under the base rules.

A full description of the distinctions and required guidelines can be found in the SPEC ACCEL Run and Reporting Rules.
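As an illustration only, a SPEC-style configuration file might express the distinction roughly as follows; the section headers, compiler flags, and benchmark chosen here are placeholders rather than an official or recommended configuration.

    # Base: one set of options, applied identically to every benchmark.
    default=base=default=default:
    OPTIMIZE = -O2 -acc

    # Peak: options may be tuned, even per benchmark.
    default=peak=default=default:
    OPTIMIZE = -O3 -acc

    303.ostencil=peak=default=default:
    OPTIMIZE = -O3 -acc -fast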

Q15. What is the power metric?

With SPEC ACCEL, SPEC provides a way to measure the power consumed while running the benchmarks.

The benchmark reports also list additional power measurement data.

Q16. Which SPEC ACCEL metric should be used to compare performance?

It depends on your needs. SPEC provides the benchmarks and results as tools for you to use. You need to determine how you use a computer or what your performance requirements are and then choose the appropriate SPEC benchmark or metrics.

Q17. What criteria were used to select the benchmarks?

The OpenCL and OpenACC benchmarks were ported from existing benchmark suites such as Parboil, Rodinia, OMP2012, CPU2006, and the NAS Parallel Benchmarks, as well as from new applications contributed by Sandia National Laboratories, the Atomic Weapons Establishment (AWE), and the Leibniz University of Hannover.

The chosen benchmarks range from small applications testing specific parallel algorithms to a large weather modeling application. They represent many commonly used High Performance Computing (HPC) algorithms and are designed to test various aspects of an Accelerator, such as work scheduling, memory performance with coalesced data, random memory access, and data movement between the host and the Accelerator.

Q18. Why does SPEC use a reference machine? What machine is used for SPEC ACCEL?

SPEC uses a reference machine to normalize the performance metrics used in the SPEC ACCEL suites. Each benchmark is run and measured on this machine to establish a reference time for that benchmark. These times are then used in the SPEC calculations.

SPEC ACCEL uses an "SGI C3108-TY11" (previously known as "SGI XE500") system from 2011 as the reference machine. The machine has two 2.4 GHz Intel Xeon E5620 processors with 24 GB of RAM and is equipped with an NVIDIA Tesla C2070 (Fermi generation, with 6 GB of GPU memory). The reference measurements for both the OpenCL and OpenACC suites were generated by running the host program on the Intel CPUs and offloading work onto the NVIDIA GPU.

A rule-conforming run of the base metrics on the reference machine took about 8 hours: 2.5 hours for OpenCL and 5.5 hours for OpenACC.

Note that when comparing any two systems measured with SPEC ACCEL, their performance relative to each other would remain the same even if a different reference machine were used. This is a consequence of the mathematics involved in calculating the individual and overall (geometric mean) metrics.
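In outline, with illustrative notation: if system A runs the benchmarks in times T_A, system B in times T_B, and the reference machine in times T_ref, then

    score_A / score_B = geomean(T_ref / T_A) / geomean(T_ref / T_B)
                      = geomean(T_B / T_A)

so the reference times cancel out of any comparison between two systems.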

Q19. How long does it take to run the SPEC ACCEL benchmark suites?

This depends on the suite and the machine that is running the benchmarks. As mentioned above, the reference (historical) machine takes on the order of 8 hours; contemporary machines should take less. Again, though, it depends on which metrics are run. Runs without using an accelerator will take longer.

Q20. What if the tools cannot be run or built on a system? Can the benchmarks be run manually?

To generate rule-compliant results, an approved toolset must be used. If several attempts at using the SPEC-provided tools are not successful, you should contact SPEC for technical support. SPEC may be able to help you, but this is not always possible -- for example, if you are attempting to build the tools on a platform that is not available to SPEC.

If you just want to work with the benchmarks and do not care to generate publishable results, SPEC provides information about how to do so.

Q21. Where are SPEC ACCEL results available?

Results for measurements submitted to SPEC are available at http://www.spec.org/accel/.

Q22. Can SPEC ACCEL results be published outside of the SPEC web site? Do the rules still apply?

Yes, SPEC ACCEL results can be freely published if all the run and reporting rules have been followed. The SPEC ACCEL license agreement binds every purchaser of the suite to the run and reporting rules if results are quoted in public. A full disclosure of the details of a performance measurement must be provided on request.

SPEC strongly encourages that results be submitted for publication on SPEC's web site, since it ensures a peer review process and uniform presentation of all results.

The run and reporting rules for research and academic contexts recognize that it may not be practical to comply with the full set of rules in some contexts. It is always required, however, that non-compliant results be clearly distinguished from rule-compliant results.

Q23. How do I contact SPEC for more information or for technical support?

SPEC can be contacted in several ways. For general information, including other means of contacting SPEC, please see SPEC's Web Site at:

http://www.spec.org/

General questions can be emailed to: info@spec.org
ACCEL Technical Support Questions can be sent to: accelsupport@spec.org

Q24. Now that I have read this document, what should I do next?

If you haven't bought SPEC ACCEL, it is hoped that you will consider doing so. If you are ready to get started using the suite, then you should pick a system that meets the requirements as described in

system-requirements.html

and install the suite, following the instructions in

install-guide-unix.html or
install-guide-windows.html


Copyright 2014-2017 Standard Performance Evaluation Corporation
All Rights Reserved