(To check for possible updates to this document, please see http://www.spec.org/accel/Docs/ )
Contents
I. Hardware and Software Requirements
A. System/OS
1. About Linux Distributions
2. SPEC does not recommend use of Windows/Unix compatibility products with SPEC ACCEL
B. Memory
C. Disk Space
D. Compiler, or precompiled binaries
II. Portability Notes
III. About Resources and Mysterious Failures
Note: links to SPEC ACCEL documents on this web page assume that you are reading the page from a directory that also contains the other SPEC ACCEL documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents in the Docs directory of your SPEC ACCEL installation or distribution, or on the web at http://www.spec.org/accel/Docs/.
To run and install SPEC ACCEL, the following are required.
You will need a computer system running UNIX, Microsoft Windows, or Mac OS X. The benchmark suite includes a toolset. Pre-compiled versions of the toolset are provided that are expected to work with:
Please ensure that you meet the minimum required version prior to installing SPEC ACCEL V1.2.
For systems not listed above, such as earlier or later versions of those systems, you may find that the tools also work, but SPEC has not tested them. Windows systems that are not based on NT, such as Windows 95, Windows 98, and Windows ME, will definitely NOT work. Please see the Portability Notes below.
Even though tools are provided for Solaris, AIX, Windows, and Mac OS X, the benchmark suite is unsupported on these systems because no testing has been done with them. The toolsets are provided as a courtesy, in case you want to see whether the suite works on those operating systems.
Caution:
More importantly, some of the benchmarks themselves have not been ported for the environments represented by the unsupported toolsets:
Over time, various mechanisms have evolved on Linux, including libraries, 32-bit/64-bit support, executable format, linking, and run-time loading. These mechanisms have sometimes forked with Linux distributions and then occasionally rejoined later. SPEC ACCEL has been tested with a variety of Linux distributions, but the possibility remains that you may encounter incompatibilities if you are not using *exactly* the same version as was used when the tools were built. Therefore, the table that follows tells you exactly what was used.
If you find that you are unable to install the pre-compiled SPEC ACCEL tools on Linux and you would like to build the tools yourself, please see the notes in tools-build.html. SPEC may be able to provide advice for your build, but SPEC does not promise that you will succeed. Please see the limitations described in techsupport.html.
Toolset name | Expected compatibility | Build environment
---|---|---
linux-apm-arm64 | 64-bit ARM-based systems | Built on APM Linux with GCC 4.8.1 (APM-6.0.4). |
linux-debian6-armv6 | ARMv6 and ARMv7 systems | Built with GCC v4.5.5 on a Raspberry Pi running Debian Linux v6.0.4. |
linux-redhat72-ia32 | x86, IA-64, EM64T, and AMD64-based Linux systems with GLIBC 2.2.4+. | Built on RedHat 7.2 (x86) with gcc 3.1.1 |
linux-rhas4r4-ia64 | IA64 systems running Red Hat Enterprise Linux 4 or later. | Built on RHAS 4r4 with GCC 3.4.6 20060404 (Red Hat 3.4.6-3) |
linux-suse10-amd64 | 64-bit AMD64/EM64T Linux systems running SuSE Linux 10 or later, and other compatible Linux distributions, including some versions of RedHat Enterprise Linux and Oracle Linux Server. | Built on SuSE Linux 10 with GCC v4.1.0 (SUSE Linux) |
linux-suse10-ppc64 | 64-bit PowerPC-based Linux systems with GLIBC 2.4+. | Built on SLES10.4 with gcc 4.1.2 (SUSE 10.4:4.1.2 20070115) |
linux-ubuntu12-ppc | 32-bit PowerPC-based Linux systems | Built with GCC 4.6.3-1ubuntu5 on a Mac Mini running Ubuntu v12.04.4 LTS |
linux-ubuntu12_10-armv7hf | ARMv7 instruction set Linux based systems | Built on Ubuntu 12.10 |
linux-ubuntu14_04-ppc64le | 64-bit PowerPC Little Endian Linux with GLIBC 2.19 | Built on Ubuntu 14.04 with gcc 4.8.2 (3.13.0-30-generic) |
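Several of the toolsets in the table above state a minimum GLIBC version. If you are unsure which version your Linux distribution provides, "ldd --version" usually reports it; the short sketch below does the same from C using gnu_get_libc_version(), a GNU extension that is not available on non-glibc C libraries. This is an illustration only and is not part of the SPEC ACCEL toolset.

    /* Sketch: print the C library version on a glibc-based Linux system.
     * gnu_get_libc_version() is a GNU extension; on non-glibc systems use
     * "ldd --version" or your distribution's package manager instead. */
    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void)
    {
        /* Compare this against the GLIBC requirement listed for your toolset,
         * e.g. 2.2.4+ for linux-redhat72-ia32 or 2.4+ for linux-suse10-ppc64. */
        printf("GLIBC version: %s\n", gnu_get_libc_version());
        return 0;
    }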
SPEC does not recommend installation of SPEC ACCEL on Microsoft Windows under Windows/Unix compatibility environments (such as Cygwin, MinGW, MKS, SFU, and so forth). The tools and benchmarks have not been ported to such environments. Please install from an ordinary command window (formerly known as an MS-DOS window).
If you have a Windows/Unix compatibility product on your Windows computer, SPEC recommends that you remove it from your %PATH% prior to installing or using SPEC ACCEL. The reason for this recommendation is that providing a Unix-like environment on Windows poses difficult problems. Historically there have been various approaches, with differing (incompatible) assumptions about how to mask or bridge differences between Windows and Unix. The SPEC CPU toolset has its own approach and its own set of assumptions, and there have been reports of difficult-to-diagnose errors when a Windows/Unix compatibility product is present on the path. If such problems occur, your first step should be to simplify the path, removing the compatibility product.
Typically 4 GB of host memory and 2 GB of device memory are required, exclusive of the operating system and other overhead, though more may be needed:
The SPEC ACCEL benchmarks (code + workload) have been designed to fit within about 2 GB of Accelerator and 4 GB of host physical memory.
The memory for the benchmarks does not include space needed for the operating system, accelerator overhead and other non-SPEC tasks on the system under test.
Warning: When an operating system runs out of memory, errors may occur that are difficult to diagnose. See the section on resources, below.
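As an illustration only (not part of the SPEC ACCEL toolset), the following sketch estimates installed physical memory and compares it against the nominal 4 GB host figure described above. It assumes a glibc-based Linux system: _SC_PHYS_PAGES is a common extension rather than a POSIX requirement and may be absent on other platforms.

    /* Sketch: report installed physical memory and compare it against the
     * nominal 4 GB host requirement. _SC_PHYS_PAGES is a glibc/Linux
     * extension and may not exist on every Unix system. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES);
        long page_size = sysconf(_SC_PAGESIZE);

        if (pages < 0 || page_size < 0) {
            fprintf(stderr, "sysconf could not report physical memory here\n");
            return 1;
        }

        double gib = (double)pages * (double)page_size
                     / (1024.0 * 1024.0 * 1024.0);
        printf("Physical memory: %.1f GiB (%s the nominal 4 GB host figure)\n",
               gib, gib >= 4.0 ? "meets" : "below");
        return 0;
    }

Remember that this reports total installed memory; the operating system, accelerator drivers, and other tasks will consume part of it.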
Typically you will need at least 9 GB of disk space to install and run the suite. However, space needs can vary greatly depending upon your usage and system. The 9 GB estimate is based on the following:
Minimum requirement: It is possible to run with about 4 GB of disk space if you delete the build directories after the build is done and clean the run directories between tests. See the discussion of disk space in runspec.html for more information about managing disk space.
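If you want a quick check of the free space available in your intended installation directory, the POSIX statvfs() call reports it, as in the sketch below. The sketch is illustrative only; the default path "." is just an example, so pass your intended SPEC directory on the command line.

    /* Sketch: report free disk space for a given directory using POSIX
     * statvfs(), and compare against the ~9 GB typical requirement.
     * The default path "." is only an example. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        struct statvfs vfs;

        if (statvfs(path, &vfs) != 0) {
            perror("statvfs");
            return 1;
        }

        double gib = (double)vfs.f_bavail * (double)vfs.f_frsize
                     / (1024.0 * 1024.0 * 1024.0);
        printf("Free space on %s: %.1f GiB (suite typically needs about 9 GB)\n",
               path, gib);
        return 0;
    }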
Since SPEC supplies only source code for the benchmarks, you will need either:
A set of compilers for the languages used in the suite (C and Fortran), with support for the accelerator programming model you plan to test (OpenCL, OpenACC, or OpenMP),
--or--
A pre-compiled set of benchmark executables, given to you by another user of the same revision of SPEC ACCEL, together with any run-time libraries those executables require.
SPEC ACCEL presumes that the OpenCL implementation can allocate a single data object (buffer) of at least 1 GB. This may cause problems with OpenCL 1.0-compliant implementations whose maximum buffer allocation size is smaller than 1 GB. This requirement may also affect the OpenACC benchmarks when OpenCL is used as the underlying interface to the accelerator.
The SPEC ACCEL OpenACC benchmarks use a mixture of single and double precision floating point numbers. The target accelerator must support double precision floating point in order to run the OpenACC suite.
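The two accelerator requirements just described (a maximum single allocation of at least 1 GB, and double precision support) can be checked ahead of time on OpenCL platforms. The sketch below is illustrative only: it uses the standard OpenCL host API, simply picks the first platform and device it finds, and assumes you link against your vendor's OpenCL library (typically -lOpenCL); adapt the device selection to your system.

    /* Sketch: check the first OpenCL device for the two capabilities noted
     * above. The include path is <CL/cl.h> on most SDKs (<OpenCL/opencl.h>
     * on Apple platforms); link with -lOpenCL or your vendor's equivalent. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        cl_ulong max_alloc = 0;
        size_t ext_size = 0;
        char *extensions;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1,
                           &device, NULL) != CL_SUCCESS) {
            fprintf(stderr, "No OpenCL platform/device found\n");
            return 1;
        }

        /* SPEC ACCEL presumes a single buffer of at least 1 GB. */
        clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, NULL);
        printf("Largest single allocation: %llu MiB (%s)\n",
               (unsigned long long)(max_alloc >> 20),
               max_alloc >= ((cl_ulong)1 << 30) ? "OK" : "below 1 GB");

        /* The OpenACC suite needs double precision; OpenCL devices usually
         * advertise it through the cl_khr_fp64 extension. */
        clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, 0, NULL, &ext_size);
        extensions = malloc(ext_size);
        clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, ext_size, extensions, NULL);
        printf("Double precision (cl_khr_fp64): %s\n",
               strstr(extensions, "cl_khr_fp64") ? "reported" : "not reported");
        free(extensions);

        return 0;
    }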
The benchmarks 112.spmv, 120.kmeans, 370.bt, and 570.pbt require a large stack size on the host.
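How you raise the host stack limit is OS- and shell-specific (for example, "ulimit -s" in POSIX shells). As an illustration only, a launcher could also raise the soft limit programmatically before starting a benchmark binary, roughly as sketched below; the hard limit set by your administrator still caps what is allowed.

    /* Sketch: raise the host stack soft limit to the hard limit before
     * launching a benchmark, roughly equivalent to "ulimit -s" in a shell.
     * Requires a POSIX system. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        /* Raise the soft limit to the hard limit (often RLIM_INFINITY). */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        printf("Stack soft limit now: %s\n",
               rl.rlim_cur == RLIM_INFINITY ? "unlimited"
                                            : "raised to hard limit");
        return 0;
    }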
SPEC ACCEL is a source code benchmark, and portability of that source code is one of the chief goals of SPEC ACCEL. SPEC has invested substantial effort to make the benchmarks portable across a wide variety of hardware architectures, operating systems, and compilers.
Despite SPEC's testing efforts, certain portability problems are likely to arise from time to time. For example:
Some platforms may not have a Fortran-95 compiler available.
Some older accelerators may not include all the features needed to run the entire suite.
Sometimes, a new release of a compiler, driver, or operating system may introduce new behavior that is incompatible with one of the benchmarks.
If you visit http://www.spec.org/accel/ and look up results for SPEC ACCEL, you will find combinations of OS and compiler versions that are known to work. For example, if a vendor reports a SPEC ACCEL result on the SuperHero 4 using SuperHero Unix V4.0 with SuperHero C V4.0 and SuperHero C++ V4.0, you may take that as an assertion by the vendor that the listed versions of Unix, C, and C++ will successfully compile and run the SPEC ACCEL suite on the listed machine.
For systems that have not (yet) been reported by vendors, SPEC can provide limited technical support to resolve portability issues. See techsupport.html for information.
Resource Demand: The SPEC ACCEL benchmarks place a significant load on your system.
As described above, the nominal memory footprint is just under 4 GB on the host. Depending on your operating system architecture, the memory may be of various types, including:
(*)You don't want to actually use your pagefile much, as that is a recipe for testing your disk instead of testing your CPU. Nevertheless, it is not uncommon for operating systems to require that pagefile space be "reserved", and benchmarks may fail if reservations are unavailable.
Mysterious failures: If an OS is unable to satisfy a resource request while your benchmarks are running, you may encounter difficult-to-diagnose, hard-to-reproduce error messages. Processes may be killed by the OS on a (seemingly) random basis, or may fail to start. If the OS is feeling sufficiently stressed, error messages may be cryptic or even non-existent. You might, or you might not, be able to find additional detail about the resource shortages in system locations such as event logs, message logs, or console logs.
Resource Competition: Meanwhile, contemporary systems run many tasks other than the benchmarks. Personal systems commonly include processes that support the user, which the user may not be aware of:
All of the above may affect observed performance, and if you are unlucky, may cause hard-to-reproduce resource shortages that prevent you from completing benchmark runs. Therefore, you may want to consider reviewing the controls for services such as the above, and you may want to reduce the load from these services during the benchmark run. When you consider adjusting services, please observe these CAUTIONS:
CAUTION 1 SPEC does not endorse any particular solution to the resource problems discussed in this section. You need to make your own decision as to what services and programs are important. If you turn off something essential, and your system turns into a mushroom, it is not SPEC's fault. Use good judgment about what you choose to disable.
CAUTION 2 If you choose to report results in public, you must run your system in a manner that is documented and supported. See the run rules for details.
In addition, each of these techniques may improve the probability that the benchmarks have a fresh set of dedicated resources:
Copyright 2014-2017 Standard Performance Evaluation Corporation
All Rights Reserved