The SPECvirt™ Datacenter 2021 Benchmark Result File Fields

Last updated: May 21 4, 2020

To check for possible updates to this document, please see http://www.spec.org/virt_datacenter2021/docs/SPECvirt_Datacenter2021-Result_File_Fields.html

Abstract

This document describes the various fields used in the results report file from the SPECvirt Datacenter 2021 benchmark.

Report Titles


Table of Contents

1. SPECvirt Datacenter 2021 Benchmark

2. Top Bar

2.1 Headline

2.2 Tested By

2.3 Test Date

2.4 SPEC license #

2.5 Template Version

2.6 Benchmark SDK

2.7 Performance Section links

2.8 SUT Configuration Section links

2.9 Notes Section links

2.10 INVALID or WARNING or COMMENTS

3. Benchmark Results Summary

3.1 Performance table

3.2 Total Score per Cluster

3.3 Score per Host

3.4 Total VMs

3.5 Number of Hosts

3.6 Tiles in Phase 1

3.7 Tiles in Phase 3

4. Compliance Errors

4.1 QoS Issues

4.2 Validation Errors

4.3 Reporter Errors

5. Physical Configuration

5.1 Virtualization Product

5.1.1 Vendor

5.1.2 Product Name

5.2 SUT Compute Nodes

5.2.1 Server

5.2.1.1 Number of Hosts

5.2.1.2 Server Vendor

5.2.1.3 Server Model

5.2.1.4 Processor

5.2.1.5 Processor Speed (MHz)

5.2.1.6 Processor Cores

5.2.1.7 Memory

5.2.1.8 Operating System

5.2.1.9 File System

5.2.1.10 Other Hardware

5.2.1.11 Other Software

5.2.2 Availability Dates

5.2.2.1 SUT Hardware

5.2.2.2 Virt. Software

5.2.2.3 Other Components

5.2.3 Internal Network

5.2.3.1 Network Adapters

5.2.3.2 SUT Ports Total

5.2.3.3 SUT Ports Used

5.2.3.4 Network Type

5.2.3.5 Network Speed

5.2.4 Internal Storage

5.2.4.1 Storage Controllers

5.2.4.2 Storage Enclosure

5.2.4.3 Disk Description

5.2.4.4 RAID Level

5.2.4.5 UPS Required?

5.3 Management Node

5.3.1 Management Node Virtualization Vendor

5.3.2 Management Node Virtualization Product

5.3.3 Is Management Node VM?

5.3.4 Other Software

5.3.5 Management Node HW Vendor

5.3.6 Management Node HW Model

5.3.7 Processor

5.3.8 Processor Speed

5.3.9 Processor Cores

5.3.10 Memory

5.3.11 Network Adapter

5.3.12 Other Hardware

5.4 External Storage

5.4.1 Common Storage Hardware Vendor

5.4.2 Common Storage Hardware Model

5.4.3 Common Storage Classification

5.4.4 Common Storage Hardware Interconnect

5.4.5 Disk Description

5.4.6 Disk Controller

5.4.7 Disk Filesystem

5.4.8 Disk RAID

5.5 External Network

5.5.1 Common Network Hardware Interconnect

5.5.2 Interconnect Role

5.5.3 Interconnect Speed

5.6 Client Hosts

5.6.1 OS

5.6.2 # of Hosts

5.6.3 Client Hardware Vendor

5.6.4 Client Hardware Model

5.6.5 Other Hardware

5.6.6 Other Software

5.6.7 Processor

5.6.8 Processor Speed

5.6.9 Processor Cores

5.6.10 Memory

5.6.11 Network Adapter

6. Virtual Configuration

6.1 Departmental Workload VMs Configuration

6.2 HammerDB Workload VMs Configuration

6.3 BigBench Workload VMs Configuration

7. Notes

7.1 Hardware Notes

7.1.1 Compute Node

7.1.2 Storage

7.1.3 Network

7.1.4 Management Node

7.2 Software Notes

7.2.1 Compute Node

7.2.2 Management Node

7.3 Client Driver Notes

7.4 Other Notes

8. Performance Details

8.1 SUT VM Migrations

8.2 Tile [x] details

8.2.1 Mail Workload

8.2.1.1 Run Time (s)

8.2.1.2 Total Txns

8.2.1.3 Txns/sec

8.2.1.4 Avg. Response (ms)

8.2.1.5 90th Response

8.2.1.6 95th Response

8.2.2 Web Workload

8.2.2.1 Run Time (s)

8.2.2.2 Total Txns

8.2.2.3 Txns/sec

8.2.2.4 Avg. Response (ms)

8.2.2.5 90th Response

8.2.2.6 95th Response

8.2.3 Collab Workload

8.2.3.1 Run Time (s)

8.2.3.2 Total Txns

8.2.3.3 Txns/sec

8.2.3.4 Avg. Response (ms)

8.2.3.5 90th Response

8.2.3.6 95th Response

8.2.4 HammerDB Workload

8.2.4.1 Run Time (s)

8.2.4.2 Total New Order Txns

8.2.4.3 New Order Txns/sec

8.2.5 BigBench Workload

8.2.5.1 Run Time (s)

8.2.5.2 Total Queries

8.2.5.3 Query Cycle

8.2.5.4 Num Queries

8.2.5.5 Cycle Time (s)


1. SPECvirt Datacenter 2021 Benchmark

SPECvirt™ Datacenter 2021 is the next generation of virtualization benchmarking for measuring the performance of a scaled-out datacenter. SPECvirt Datacenter 2021 is a multi-host benchmark that uses simulated and real-life workloads to measure the overall efficiency of virtualization solutions and their management environments. It differs from SPEC VIRT_SC® 2013, which is a single-host benchmark and provides interesting host-level information and performance. However, most of today's datacenters use clusters for reliability, availability, serviceability, and security. Adding virtualization to a clustered solution enhances server optimization, flexibility, and application availability while reducing costs through server and datacenter consolidation.

SPECvirt Datacenter 2021 provides a methodical way to measure scalability and is designed to be utilized across multiple vendor platforms. The primary goal of SPECvirt Datacenter 2021 is to be a standard method for measuring a virtualization platform's ability to model a dynamic datacenter virtual environment. It models typical, modern-day usage of virtualized infrastructure, such as virtual machine (VM) resource provisioning, cross-node load balancing, and management operations such as VM migrations and power on/off. Its multi-host environment exercises datacenter operations under load. It dynamically provisions new workload Tiles, either by using a VM template or by powering on existing VMs, and adds hosts from the cluster to measure scheduler efficiency.

SPECvirt Datacenter 2021 uses a five-workload benchmark design:

- a departmental mailserver workload
- a departmental webserver workload
- a departmental collaboration server workload
- an OLTP database workload based on HammerDB
- a big data analytics workload based on BigBench

All of these workloads drive pre-defined, dynamic loads against sets of virtualized machines.


2. Top Bar

The top bar shows the measured SPECvirt Datacenter 2021 benchmark result and gives some general information regarding this test run.

2.1 Headline

The headline of the performance report includes two fields. The first field displays the System Under Test (SUT) hosts' hardware vendor and model name, along with the SUT's hypervisor vendor and product name. In the second field, the SPECvirt™ Datacenter-2021 metric is printed; if the current result does not pass the validity and QoS checks implemented in the benchmark, the metric is overwritten with a "Non-Compliant! Found xxx Issues" indicator.

2.2 Tested By

The name of the organization or individual that ran the test and submitted the result. Generally, this is the name of the license holder.

2.3 Test Date

The date when the test was run. This value is automatically supplied by the benchmark software; the time reported by the system under test is recorded in the raw result file.

2.4 SPEC license #

The SPEC license number of the organization or individual that ran the benchmark.

2.5 Template Version

The version of the benchmark template used for the result. This is usually the version of the SPECvirt Datacenter 2021 benchmark.

2.6 Benchmark SDK

The hypervisor-specific set of tools used for the result, as defined in the $virtVendor field in the Control.config. The value of this field will also match a subdirectory under ${benchmark home}/config/workloads/specvirt/HV_Operations/ on the svdc-director VM where the toolset resides.

2.7 Performance Section links

This field in the Top Bar section provides links to the different performance-relevant sections of the report: Performance Summary, Performance Details, and Errors.

2.8 SUT Configuration Section links

This field in the Top Bar section provides links to the different SUT configuration sections of the report: Physical Configuration and Virtual Configuration.

2.9 Notes Section links

This field in the Top Bar section provides links to the different Notes sections of the report: Hardware Notes, Software Notes, Client Driver Notes, and Other Notes.

2.10 INVALID or WARNING or COMMENTS

Any inconsistencies with the run and reporting rules causing a failure of one of the validity checks implemented in the report generation software will be reported in the "Headline" section, and the result will be stamped with an "Invalid" watermark. More detailed explanations of the issues will be reported in the "Compliance Errors" section. If there are any special waivers or other comments from the SPEC editor, those will also be listed here.


3. Benchmark Results Summary

This section provides a summary of the results for each tile's workloads, and the metrics for the overall benchmark.

3.1 Performance table

The performance table will report summary performance information for each tile run during the benchmark. The fields reported are:

Note that if a partial tile was used in Phase 3, then any workloads not run will have a value of "NA" reported in that workload's field for the final tile.

3.2 Total Score per Cluster

The total score per cluster is the sum of all individual "Per-Tile Score" values shown in the "Performance" table.

3.3 Score per Host

The total score per host is the "Total Score per Cluster" divided by the "Number of Hosts". This value, along with the "Number of Hosts", is used in the benchmark metric.
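As a numeric sketch of the arithmetic described above (the per-tile scores and host count are invented for illustration, not taken from any published result):

```python
# Illustrative only: per-tile scores and host count are made-up values.
per_tile_scores = [1.00, 0.98, 0.97, 0.95]   # "Per-Tile Score" column values
number_of_hosts = 4

# Section 3.2: sum of all individual per-tile scores
total_score_per_cluster = sum(per_tile_scores)

# Section 3.3: cluster total divided by the number of hosts
score_per_host = total_score_per_cluster / number_of_hosts
```

With these made-up scores, the cluster total comes out to 3.90 and the per-host score to 0.975.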

3.4 Total VMs

The total number of SUT VMs used in the benchmark.

3.5 Number of Hosts

The total number of SUT hosts used in the benchmark. This value, along with the "Score per Host", is used in the benchmark metric.

3.6 Tiles in Phase 1

The number of Tiles used in Phase 1. This value may include a decimal if a partial tile was used in Phase 1. Note that this is also the number of tiles used in Phase 2.

3.7 Tiles in Phase 3

The number of Tiles used in Phase 3. This value may include a decimal if a partial tile was used in Phase 3.


4. Compliance Errors

The following section of the report file provides details on any compliance issues found during the error checking performed by the end-of-run result and report generation. If no issues were discovered, the message "No Validation Errors Found" is displayed.

4.1 QoS Issues

This section displays any QoS errors reported. An example of this type of error is a query cycle time for one of the BigBench workloads exceeding 3600 seconds.
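A QoS check of the kind named above can be sketched as follows. This is a hypothetical illustration, not code from the benchmark kit; the function name and cycle times are invented:

```python
# Hypothetical QoS check: flag any BigBench query cycle whose time
# exceeds the 3600-second limit. Cycle times are invented for illustration.
QOS_CYCLE_LIMIT_S = 3600

def qos_issues(cycle_times_s):
    """Return (cycle_number, cycle_time) pairs that violate the QoS limit."""
    return [(i, t) for i, t in enumerate(cycle_times_s, start=1)
            if t > QOS_CYCLE_LIMIT_S]

issues = qos_issues([3100.5, 3420.0, 3655.2])  # only cycle 3 exceeds the limit
```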

4.2 Validation Errors

This section displays any errors caused by unexpected changes to values of fields in the raw file. An example of this type of error is a modification of "fixed parameters" made to the raw file after the benchmark is run.

4.3 Reporter Errors

This section displays any errors detected where the submission does not comply with the requirements for a valid result. An example of this type of error is an invalid number of SUT hosts, i.e., not a multiple of 4 hosts. Another example is a change in a fixed parameter like "runTime" prior to the start of the run.


5. Physical Configuration

The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported benchmark with the level of detail required to reproduce this result.

5.1 Virtualization Product

The following section of the report file describes the Virtualization product used on the SUT for this result.

5.1.1 Vendor

Vendor of the virtualization product.

5.1.2 Product Name

The name of the virtualization product, including its version number.

5.2 SUT Compute Nodes

The following section of the report describes the SUT's host compute nodes.

5.2.1 Server

This subsection describes the details of the host servers.

5.2.1.1 Number of Hosts

The number of SUT hosts used for the benchmark.

5.2.1.2 Server Vendor

Server vendor of the SUT hosts.

5.2.1.3 Server Model

Model name of the servers used as SUT hosts.

5.2.1.4 Processor

Processor name used by SUT server.

5.2.1.5 Processor Speed (MHz)

Maximum speed (in MHz) for Processor used by SUT server.

5.2.1.6 Processor Cores

Number of cores, chips, cores per chip, and threads per core in each SUT server.

5.2.1.7 Memory

Total memory -- including units -- and number and type of DIMMs used in each SUT server. The recommended format for describing the types of DIMMs is described here:

DDR4 Format:

N x gg ss pheRxff PC4v-wwwwaa-m

References:

Example:

8 x 16 GB 2Rx4 PC4-2133P-R

Where:

Notes:
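As an illustration, the recommended DIMM string can be checked mechanically. The regex below is a hypothetical sketch, not part of the benchmark kit, and only covers the common "N x gg ss pheRxff PC4v-wwwwaa-m" shape shown in the example:

```python
import re

# Hypothetical validator for the recommended DDR4 DIMM description string.
# The group names follow the "N x gg ss pheRxff PC4v-wwwwaa-m" template;
# the regex itself is an illustrative assumption.
DDR4_PATTERN = re.compile(
    r"^(?P<count>\d+)\s*x\s*"                     # N: number of DIMMs
    r"(?P<size>\d+)\s*(?P<units>GB|TB)\s+"        # gg ss: capacity and units
    r"(?P<ranks>\d+)(?P<opts>[A-Z]{0,2})Rx(?P<width>\d+)\s+"  # pheRxff
    r"PC4(?P<variant>[A-Z]?)-(?P<speed>\d+)(?P<latency>[A-Z]+)-(?P<module>[A-Z])$"  # PC4v-wwwwaa-m
)

def parse_dimm(desc: str):
    """Return the parsed fields as a dict, or None if the string doesn't match."""
    m = DDR4_PATTERN.match(desc.strip())
    return m.groupdict() if m else None

fields = parse_dimm("8 x 16 GB 2Rx4 PC4-2133P-R")
```

For the example string above, this yields count "8", size "16" GB, 2 ranks, device width 4, and speed grade "2133".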

5.2.1.8 Operating System

Operating system installed on each SUT server.

5.2.1.9 File System

File system used by OS on each SUT server.

5.2.1.10 Other Hardware

Brief description of any other hardware installed in each SUT server.

5.2.1.11 Other Software

Brief description of any other software installed on each SUT server.

5.2.2 Availability Dates

This subsection describes the availability dates for different SUT components

5.2.2.1 SUT Hardware

Latest availability date for the SUT hardware components, in MMM-YYYY format.

5.2.2.2 Virt. Software

Latest availability date for the SUT host virtualization software components, in MMM-YYYY format.

5.2.2.3 Other Components

Latest availability date for any other SUT components in MMM-YYYY format.

5.2.3 Internal Network

This subsection describes the details of the internal networking in the SUT servers.

5.2.3.1 Network Adapters

Number and name of network adapters installed in each SUT server.

5.2.3.2 SUT Ports Total

Number of network ports present in each SUT server.

5.2.3.3 SUT Ports Used

Number of network ports used in each SUT server.

5.2.3.4 Network Type

High level summary of the network type used by the network port in each SUT server.

5.2.3.5 Network Speed

Configured speed of network port in each SUT server.

5.2.4 Internal Storage

This subsection describes the details of the internal storage in the SUT servers.

5.2.4.1 Storage Controllers

Number and name of storage controllers installed in each SUT server.

5.2.4.2 Storage Enclosure

Number and name of storage enclosures installed in each SUT server.

5.2.4.3 Disk Description

Number, size, and description of disks installed in each SUT server not used for common SUT storage.

5.2.4.4 RAID Level

RAID level used on disks installed in each SUT server.

5.2.4.5 UPS Required?

Is a UPS required to meet the stable storage requirement on each SUT server?

5.3 Management Node

This section describes the details of the SUT's hypervisor management server.

5.3.1 Management Node Virtualization Vendor

Vendor of management node application.

5.3.2 Management Node Virtualization Product

Name and version of management node application.

5.3.3 Is Management Node VM?

Is the management node a VM, as opposed to a physical server?

5.3.4 Other Software

Other software running on the management node.

5.3.5 Management Node HW Vendor

Vendor of server running management node application. If management node is a VM, use "N/A".

5.3.6 Management Node HW Model

Model of server running management node application. If management node is a VM, use "N/A".

5.3.7 Processor

Processor used in management node. If management node is a VM, use processor name used by server hosting VM.

5.3.8 Processor Speed

Maximum speed of processor used in management node, in MHz. If management node is a VM, use speed of processor used by server hosting VM.

5.3.9 Processor Cores

Total number of processor cores in management node. If management node is a VM, use number of vCPUs configured for VM.

5.3.10 Memory

Total amount of memory in management node. If management node is a VM, use amount of memory assigned to VM.

5.3.11 Network Adapter

Number and type of network adapters in management node. If management node is a VM, use number and type of vNICs used for VM.

5.3.12 Other Hardware

Other hardware used in management node. If management node is a VM, describe additional non-default virtual hardware configured for VM.

5.4 External Storage

This section describes the details of the SUT's common storage used for hosting the workload VMs.

5.4.1 Common Storage Hardware Vendor

Vendor of hardware used for SUT's common storage. For example, name of vendor for external fibre channel enclosure.

5.4.2 Common Storage Hardware Model

Product name and model of hardware used for SUT's common storage.

5.4.3 Common Storage Classification

Classification of SUT's common storage. For example, "16Gb fibre channel SAN" or "VMware vSAN".

5.4.4 Common Storage Hardware Interconnect

Product name and description of storage hardware used to connect SUT common storage to SUT hosts.

5.4.5 Disk Description

Number and description of disks used for SUT's common storage.

5.4.6 Disk Controller

Number and description of external disk controllers used for SUT's common storage.

5.4.7 Disk Filesystem

Filesystem used for SUT's common storage.

5.4.8 Disk RAID

RAID level used by disks configured for SUT's common storage.

5.5 External Network

This section describes the details of the SUT's external networking.

5.5.1 Common Network Hardware Interconnect

Product name and description of network hardware used to connect SUT workload VMs and client drivers.

5.5.2 Interconnect Role

Role of network interconnect. For example, "hypervisor management" or "SUT network communication".

5.5.3 Interconnect Speed

Configured speed of SUT interconnect.

5.6 Client Hosts

This section describes the details of the systems hosting the client driver VMs.

5.6.1 OS

Operating system running on client hosts.

5.6.2 # of Hosts

Total number of host systems used for client drivers.

5.6.3 Client Hardware Vendor

Hardware vendor of client host systems.

5.6.4 Client Hardware Model

Hardware model name of client host systems.

5.6.5 Other Hardware

Additional hardware installed in client host systems.

5.6.6 Other Software

Additional software installed on client host systems.

5.6.7 Processor

Name of Processor used in client host systems.

5.6.8 Processor Speed

Maximum speed of processors used in client host systems.

5.6.9 Processor Cores

Total number of cores present in client host systems.

5.6.10 Memory

Total amount of memory and DIMM type present in client host systems.

5.6.11 Network Adapter

Number and name of network adapters present in client host systems.


6. Virtual Configuration

The following section of the report file describes the configuration of the workload VMs. For each workload VM type, the following items are reported:

6.1 Departmental Workload VMs Configuration

This subsection describes configuration for the departmental workload VMs.

6.2 HammerDB Workload VMs Configuration

This subsection describes configuration for the HammerDB workload VMs.

6.3 BigBench Workload VMs Configuration

This subsection describes configuration for the BigBench workload VMs.


7. Notes

The following section of the report file contains detailed notes on performance tuning made on the SUT not captured in the previous sections.

It is required to report whether certain security vulnerabilities have been mitigated in the SUT (HW and/or OS). As of this writing, the disclosure takes the bulleted form below (choosing either "Yes" or "No" in response to the statement that follows it). See the public version of this document for updates.

These statements should be reported at the top of the Notes section. Use the NOTES[] fields in Testbed.config to populate this subsection.

7.1 Hardware Notes

This subsection provides detailed notes for the hardware configuration.

7.1.1 Compute Node

Notes on configuration details of the SUT's compute nodes. Use the HOST.HW.NOTES[] fields in Testbed.config to populate this subsection.

7.1.2 Storage

Notes on configuration details of the SUT's shared storage. Use the STORAGE.NOTES[] fields in Testbed.config to populate this subsection.

7.1.3 Network

Notes on configuration details of the SUT's networking. Use the NETWORK.NOTES[] fields in Testbed.config to populate this subsection.

7.1.4 Management Node

Notes on configuration details of the SUT's management node. Use the MANAGER.HW.NOTES[] fields in Testbed.config to populate this subsection.

7.2 Software Notes

This subsection provides detailed notes for the software configuration.

7.2.1 Compute Node

Notes on configuration details of the SUT's compute nodes. Use the HOST.SW.NOTES[] fields in Testbed.config to populate this subsection.

7.2.2 Management Node

Notes on configuration details of the SUT's management node. Use the MANAGER.SW.NOTES[] fields in Testbed.config to populate this subsection.

7.3 Client Driver Notes

This subsection provides detailed notes for the client driver configuration. Use the CLIENT.NOTES[] fields in Testbed.config to populate this subsection.

7.4 Other Notes

This subsection provides detailed notes on any performance-relevant optimizations not covered in the other sections. Use the OTHER.NOTES[] fields in Testbed.config to populate this section.


8. Performance Details

The following section reports detailed performance information about the result. This information includes throughput, rate, and QoS metrics for each tile's workloads. The number of VM migrations made during the benchmark is also reported.

8.1 SUT VM Migrations

This section shows the total number of VM migrations made on the SUT during the benchmark's measurement interval.

8.2 Tile [x] details

This section reports the detailed performance metrics for tile #[x]. There will be an associated "Tile [x]" section for each tile used in the result.

Note that for the three departmental workloads, the information reported is of the same form and includes the following fields:

8.2.1 Mail Workload

This subsection describes detailed performance statistics for this tile's mailserver workload.

8.2.2 Web Workload

This subsection describes detailed performance statistics for this tile's webserver workload.

8.2.3 Collab Workload

This subsection describes detailed performance statistics for this tile's collaboration server workload. Note that the statistics for each collaboration server VM are reported separately.

8.2.4 HammerDB Workload

This subsection describes detailed performance statistics for this tile's HammerDB workload.

8.2.4.1 Run Time (s)

Total time the HammerDB workload ran during the measurement interval.

8.2.4.2 Total New Order Txns

Total number of New Order (NO) transactions completed by the HammerDB workload during the measurement interval.

8.2.4.3 New Order Txns/sec

Average rate of NO transactions achieved by the HammerDB workload during the measurement interval.
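The relationship between the three HammerDB fields above can be sketched numerically (the run time and transaction count are invented values):

```python
# Consistency sketch: the reported rate is the total New Order transaction
# count divided by the run time. Values are invented for illustration.
run_time_s = 7200                 # 8.2.4.1 Run Time (s)
total_new_order_txns = 3_600_000  # 8.2.4.2 Total New Order Txns

new_order_txns_per_sec = total_new_order_txns / run_time_s  # 8.2.4.3
```

With these values the average rate works out to 500.0 NO transactions per second.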

8.2.5 BigBench Workload

This subsection describes detailed performance statistics for this tile's BigBench workload.

8.2.5.1 Run Time (s)

Total time the BigBench workload ran during the measurement interval.

8.2.5.2 Total Queries

Total number of queries completed by the BigBench workload during the measurement interval.

8.2.5.3 Query Cycle

Iteration number of the set of 11 queries run by the BigBench workload during the measurement interval. Once all 11 queries are completed, another cycle is started, up to a maximum of 6 cycles.
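The cycle bookkeeping described above can be sketched as follows; the helper name is illustrative, not taken from the benchmark kit:

```python
# Sketch of BigBench cycle accounting: each full cycle runs 11 queries,
# up to a maximum of 6 cycles (66 queries total).
QUERIES_PER_CYCLE = 11
MAX_CYCLES = 6

def cycle_progress(total_queries: int):
    """Split a Total Queries count into (completed full cycles,
    queries finished so far in the current cycle)."""
    full_cycles = min(total_queries // QUERIES_PER_CYCLE, MAX_CYCLES)
    in_progress = total_queries - full_cycles * QUERIES_PER_CYCLE
    return full_cycles, in_progress

cycle_progress(25)  # 2 full cycles completed, 3 queries into cycle 3
```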

8.2.5.4 Num Queries

Number of BigBench queries completed for this query cycle.

8.2.5.5 Cycle Time (s)

Total duration of queries completed during this query cycle.


Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright 2021 Standard Performance Evaluation Corporation (SPEC).
All Rights Reserved.