Standard Performance Evaluation Corporation

SPEC Cloud® IaaS 2018

The SPEC Cloud® IaaS 2018 benchmark is SPEC's second benchmark suite to measure cloud performance. The suite is targeted at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers.

The SPEC Cloud® IaaS 2018 benchmark addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms. IaaS cloud platforms can either be public or private.

The SPEC OSG Cloud IaaS 2018 benchmark price is $2,000 for new customers and $500 for qualified nonprofit organizations and accredited academic institutions. To find out whether your organization has an existing license for a SPEC product, please contact SPEC at info@spec.org.

The current version of the benchmark is version 1.1, released on December 12, 2019. This update includes bug fixes and usability improvements. SPEC Cloud IaaS 2018 builds on the original 2016 release with a variety of enhancements and new primary metrics.
Please note that due to workload and methodology changes in the metric calculations, results from the SPEC Cloud IaaS 2018 benchmark are not comparable to those from the SPEC Cloud IaaS 2016 benchmark.

Beginning in March 2020, all result submissions must be made using version 1.1.

This new release includes usability improvements that make it easier to set up the cbtool harness, run an initial simulated test, and generate an example FDR report. Setting up workload images on the tester's cloud platform is also easier with cbtool. New and updated adapters included in the release allow users to test a variety of public and private cloud platforms. Depending on the cloud platform, an instance may be a physical machine, a virtual machine, or a container.

The benchmark is designed to stress the provisioning as well as the runtime aspects of a cloud using I/O- and CPU-intensive cloud computing workloads. SPEC selected two significant and representative workload types within cloud computing: a social media NoSQL database transaction workload (YCSB) and K-Means clustering using map/reduce.
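
For readers unfamiliar with K-Means, the short single-node Python sketch below illustrates the kind of CPU-intensive clustering work the map/reduce workload performs at scale. It is an illustration only, not the benchmark's Hadoop-based implementation, and every name in it is hypothetical.

    import random

    def kmeans(points, k, iterations=10):
        # Start from k randomly chosen points as the initial centroids.
        centroids = random.sample(points, k)
        for _ in range(iterations):
            # "Map" step: assign each point to its nearest centroid.
            clusters = {i: [] for i in range(k)}
            for x, y in points:
                nearest = min(range(k),
                              key=lambda i: (x - centroids[i][0]) ** 2 +
                                            (y - centroids[i][1]) ** 2)
                clusters[nearest].append((x, y))
            # "Reduce" step: recompute each centroid as its cluster mean.
            for i, members in clusters.items():
                if members:
                    centroids[i] = (sum(p[0] for p in members) / len(members),
                                    sum(p[1] for p in members) / len(members))
        return centroids

    points = [(random.random(), random.random()) for _ in range(10_000)]
    print(kmeans(points, k=3))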

Each workload runs on a group of instances collectively referred to as an application instance (AI), and the benchmark instantiates multiple application instances during a run. The application instances and the load they generate stress both the provisioning and the runtime aspects of a cloud; the runtime aspects include the CPU, memory, disk I/O, and network I/O of these instances running in the cloud. The benchmark keeps running the workloads until quality-of-service (QoS) limits are reached, and the tester can also cap the maximum number of application instances instantiated during a run, as sketched below.
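
As a rough illustration of that scaling behavior, the conceptual Python sketch below keeps provisioning application instances until QoS conditions fail or a tester-imposed cap is reached. The SimulatedCloud class, its methods, and the 20% QoS rule are hypothetical stand-ins; the actual harness is cbtool, whose API differs.

    import random

    class SimulatedCloud:
        # Hypothetical stand-in for a real IaaS platform under test.
        def provision_ai(self):
            # Each AI's application iteration succeeds ~95% of the time.
            return {"iteration_ok": random.random() > 0.05}

        def qos_met(self, ais):
            # Hypothetical QoS rule: at most 20% of AIs may miss QoS.
            failures = sum(1 for ai in ais if not ai["iteration_ok"])
            return failures <= 0.2 * len(ais)

    def scaling_phase(cloud, max_ais=50):
        # Provision AIs one at a time until QoS conditions fail or the
        # tester-imposed cap on application instances is reached.
        ais = []
        while len(ais) < max_ais:
            ais.append(cloud.provision_ai())
            if not cloud.qos_met(ais):
                break
        return ais

    print(len(scaling_phase(SimulatedCloud())))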

The key benchmark metrics, illustrated by the arithmetic sketch after this list, are:

  • Replicated Application Instances reports the total number of valid AIs that have completed at least one application iteration at the point the test ends. The reported total is the sum of the valid AIs for each workload (KMeans and YCSB), where the number of valid AIs for either workload cannot exceed 60% of the total. The other primary metrics are calculated from the conditions at the point this number of valid AIs is achieved.

  • Performance Score aggregates the workload scores for all valid AIs to represent the total work done at the reported number of Replicated Application Instances. It is the sum of the KMeans and YCSB workload performance scores, normalized using the reference platform. The reference platform values are a composite of baseline metrics from several different white-box and black-box clouds. Since the Performance Score is normalized, it is a unitless metric.

  • Relative Scalability measures whether the work performed by application instances scales linearly in a cloud. In a perfect cloud, when multiple AIs run concurrently, each AI offers nearly the same performance as that measured for an AI running similar work during the baseline phase, when the tester introduces no other load. Relative Scalability is expressed as a percentage (out of 100).

  • Mean Instance Provisioning Time averages the provisioning times of all instances belonging to valid application instances. Each measurement is the time from the initial instance provisioning request to connectivity on port 22 (SSH).
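
To make the arithmetic behind these metrics concrete, the Python sketch below computes all four from made-up per-AI data. The per-workload scores, reference-platform constants, and provisioning times are placeholders rather than SPEC's values, and the actual run rules define the scoring in considerably more detail.

    # Replicated Application Instances: sum of valid AIs per workload,
    # where neither workload may contribute more than 60% of the total.
    kmeans_valid_ais, ycsb_valid_ais = 12, 14
    total_ais = kmeans_valid_ais + ycsb_valid_ais
    assert max(kmeans_valid_ais, ycsb_valid_ais) <= 0.6 * total_ais

    # Performance Score: per-workload scores normalized by reference-
    # platform values (placeholders here), then summed; unitless.
    kmeans_score, ycsb_score = 480.0, 9200.0  # made-up aggregate scores
    kmeans_ref, ycsb_ref = 40.0, 800.0        # placeholder references
    performance_score = kmeans_score / kmeans_ref + ycsb_score / ycsb_ref

    # Relative Scalability: how closely per-AI performance in the scaled
    # run matches the single-AI baseline, expressed as a percentage.
    baseline_per_ai, scaled_per_ai = 40.0, 34.0
    relative_scalability = 100.0 * scaled_per_ai / baseline_per_ai

    # Mean Instance Provisioning Time: average, over all instances in
    # valid AIs, of request-to-SSH (port 22) time, in seconds.
    provisioning_times = [92.0, 101.5, 88.0, 110.0]  # made up
    mean_provisioning_time = sum(provisioning_times) / len(provisioning_times)

    print(total_ais, performance_score, relative_scalability,
          mean_provisioning_time)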

For more detail on the SPEC Cloud IaaS 2018 benchmark, please review the benchmark documentation listed below.

Results

Submitted Results
Includes all results submitted to SPEC by SPEC member companies and other licensees of the benchmark.

Press Releases

Press release material, documents, and announcements.

Benchmark Documentation

Benchmark Technical Support