
Standard Performance Evaluation Corporation


High Performance Steering Committee: Annual Report

Siamak Hassanzadeh,
Sun Microsystems, Inc.
Mountain View, Calif.

Published March 1995.


The High Performance Steering Committee (HPSC) celebrated its first anniversary at the Standard Performance Evaluation Corp. (SPEC) annual business meeting in January of 1995. (See the article in SPEC Newsletter, September 1994, for the background on the creation of HPSC.)

As HPSC's mission is to establish, maintain, and endorse a suite of high-performance computing (HPC) benchmarks, most of its activity has thus far revolved around code selection and agreeing to a set of run rules for standardized, cross-platform performance comparison on a level playing field.

The following sections briefly describe the candidate codes currently under consideration and the run rules currently under discussion.

Code Selection

As HPSC's goal is to select benchmarks representative of real world applications, several candidate codes from various HPC fields such as computational chemistry, computational fluid dynamics, computational mechanics, and seismic data processing have been considered.

For the initial HPSC release, it has been agreed to focus on two candidate codes representing two major HPC application areas: a computational chemistry code representative of the pharmaceutical industry, and a seismic data processing code representative of the petroleum industry.

GAMESS: Computational Chemistry

GAMESS (the General Atomic and Molecular Electronic Structure System) was formed from various programs at the Department of Energy's National Resource for Computations in Chemistry in the late 1970s. The code has been improved extensively to include many new capabilities and currently several hundred copies of the code are in use around the world.

GAMESS is representative of a class of computational chemistry programs known as "ab initio quantum chemistry". Many of its functions are therefore duplicated in other generally available commercial and public domain packages used in the pharmaceutical and chemical industries for drug design and for analyzing bonding in various chemical and metal compounds, among other applications.

GAMESS is a portable and scalable code that can be run on nearly any computer, from modern desktop workstations to vector supercomputers and massively parallel processing systems. HPSC members have generally reported that they have ported and tested GAMESS on various platforms.

ARCO Seismic Benchmark

The ARCO Seismic Benchmark is designed to be a portable, scalable, parallel environment for benchmarking seismic applications. Seismic methods are the primary tool in the search for oil and gas. These methods are used to produce an accurate image of the earth's subsurface so that petroleum explorationists may infer the geological structure with potential for hydrocarbon deposits. The two distinguishing attributes of seismic data processing are the size of the data and the computational complexity. A typical 3-D seismic survey generally yields terabytes of data, and its computational requirements can exceed 10^18 floating-point operations. The ARCO Seismic Benchmark contains algorithms that are representative of production software systems and is based on a processing model similar to that used in most modern seismic processing environments.

Thus far, all HPSC members have reported that they have ported and tested this code on their respective systems and they have obtained preliminary results for various input sizes and system configurations. The preliminary timing results range from a few seconds for the small input size on a modern desktop workstation, to over 24 hours for the huge input size on a vector supercomputer with peak performance in excess of 2 GFLOPS.

Run Rules

Agreeing to a set of rules for the programs within the HPSC benchmark suite will be a much more difficult task than has been the case for the existing SPEC/OSSC benchmarks. This fact is a consequence of the lack of agreed standards for parallel systems and the non-uniformity of the HPC platforms. The situation is complicated by the fact that fundamentally different implementations of benchmarks (scalar vs. vector, shared-memory vs. message passing, etc.) may be required for different systems. What follows is an attempt to list options that determine what constitutes acceptable run and reporting rules for the SPEC/HPSC benchmark suite.

Acceptable Code

HPSC has thus far adopted resolutions that require candidate benchmark codes to be written in FORTRAN77 or C, and to be available in both a serial and a message passing version. The message passing version of a candidate benchmark code must use the PVM (Parallel Virtual Machine) or MPI (Message Passing Interface) libraries.

For the purpose of releasing the first SPEC/HPSC benchmark suite, it is generally agreed to continue with the above guidelines. Other languages and programming paradigms, such as HPF (High Performance Fortran) or other SAS (shared address space) languages, will be evaluated by the HPSC steering committee for possible inclusion in subsequent releases.

Compiler Command Line Options

These are the switches or flags used to compile benchmark codes and to create executables. Any standard flag supported by the system manufacturer is allowed, including porting, optimization, and preprocessor invocation flags.

It is generally agreed that two sets of compile line options per language, per code, per report will be allowed at the vendor's discretion.

Source Program Directives (Pragmas)

HPSC acknowledges the non-uniformity of HPC systems and that embedded source program directives are common industry practice for HPC systems to reach acceptable performance levels. HPSC has therefore made provisions for directives to be applied to the benchmark codes.

Examples of acceptable directives include loop level vectorization enabling directives on a vector processor, and loop parallelism directives for a parallel processor system. The goal is to provide a benchmark code which exploits the key architectural features of a system, but not necessarily optimally.

HPSC therefore will allow embedded source program directives at predefined locations in the released code(s). The directives will each have a predefined purpose, and the directive contexts will be agreed upon by HPSC on a code-by-code basis.

There also may be a substantial performance factor to be gained by "tuning" a code beyond the baseline-scalable level. Examples of fine tuning would include program rewriting or equivalent directive insertion, to interchange or block loops, dynamically reallocate data, or run more or fewer loops in parallel. Insertion of such directives in the benchmark codes is currently under discussion.

Source Code Modifications

HPSC does not generally endorse ad hoc, benchmark specific substitutions or modifications of a code. However, certain modifications resulting from the use of libraries will be allowed. Libraries may be used in the execution of the HPSC benchmarks subject to the following rules:

  • Libraries may include vendor supported libraries, third party product libraries, and public domain libraries and must be generally available to vendor customers no later than 6 months from results submission;
  • Calls to the supplied subroutines may have identical syntax (i.e., identical argument lists) to the benchmark code; the subroutine names, however, need not be identical to those in the benchmark code;
  • Alternatively, calls to the supplied subroutines may have different syntax (i.e., different argument lists) from the benchmark code, provided they perform the same function (a single identifiable task);
  • Any other use of libraries, including but not limited to, substitution or modification of a part of a benchmark code by a library call will be reviewed and is subject to approval by the HPSC.

Reporting of Results

HPSC members are not obligated to report SPEC HPSC benchmark results.

Results that are reported will be presented in a manner, still to be defined, that satisfies the run rules and fits into a one- or two-page format per application area or for the whole set of codes. It would be a service to the whole community to publish standard HPSC statistics in a standard format, as well as additional results that go beyond the HPSC rules; this could allow new code forms to evolve and run-rule innovations to arise. It must be done without compromising the standard SPEC reporting format or confusing readers.

HPSC has already adopted a resolution requiring reporting a single result per benchmark code. The following metrics have been suggested to be included in such a report:

  • Elapsed wallclock time (in seconds) per data size, per benchmark, per system configuration.
  • Capacity rate measurement per benchmark area.


This article is based on inputs from all HPSC members. I gratefully acknowledge the contributions from David Kuck, Fiona Sim and Phil Tannenbaum.


S. Hassanzadeh, "SPEC's High Performance Steering Committee: A Progress Report," SPEC Newsletter, Vol. 6, Issue 3, September 1994.

M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.H. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, and J.A. Montgomery, "The General Atomic and Molecular Electronic Structure System," Department of Chemistry, Iowa State University, Ames, IA, 1992.

C.C. Mosher and S. Hassanzadeh, "ARCO Seismic Processing Performance Evaluation Suite," Users' Guide, 1993.

A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, V. Sunderam, PVM, Parallel Virtual Machine, The MIT Press, 1994.

W. Gropp, E. Lusk, A. Skjellum, Using MPI, Portable Parallel Programming with the Message-Passing Interface, The MIT Press, 1994.

C.H. Koelbel, D.B. Loveman, R.S. Schreiber, G.L. Steele Jr., and M.E. Zosel, The High Performance Fortran Handbook, The MIT Press, 1993.

Dr. Siamak Hassanzadeh is with Sun Microsystems, Inc., in Palo Alto, Calif. He serves as the chair of SPEC HPSC. HPSC information is available through: Siamak Hassanzadeh by E-mail, or 415-336-0118; or Fiona Sim, HPSC Vice-chair, E-mail, fsim@vnet.IBM.COM or 914-432-7960.

Copyright (c) 1995 Standard Performance Evaluation Corporation