To check for possible updates to this document, please see http://www.spec.org/virt_datacenter2021/docs/SPECvirt_Datacenter2021-Result_File_Fields.html
This document describes the various fields used in the results report file from the SPECvirt Datacenter 2021 benchmark.
SPECvirt™ Datacenter 2021 is the next generation of virtualization benchmarking for measuring the performance of a scaled-out datacenter. SPECvirt Datacenter 2021 is a multi-host benchmark that uses simulated and real-life workloads to measure the overall efficiency of virtualization solutions and their management environments. It differs from SPEC VIRT_SC(R) 2013, which is a single-host benchmark that provides useful host-level performance information. However, most of today's datacenters use clusters for reliability, availability, serviceability, and security. Adding virtualization to a clustered solution enhances server optimization, flexibility, and application availability while reducing costs through server and datacenter consolidation.
SPECvirt Datacenter 2021 provides a methodical way to measure scalability and is designed to be used across multiple vendor platforms. The primary goal of SPECvirt Datacenter 2021 is to be a standard method for measuring a virtualization platform's ability to model a dynamic datacenter virtual environment. It models typical, modern-day usage of virtualized infrastructure, such as virtual machine (VM) resource provisioning, cross-node load balancing, and management operations such as VM migrations and power on/off. Its multi-host environment exercises datacenter operations under load. It dynamically provisions new workload Tiles, either by deploying them from a VM template or by powering on existing VMs, and adds hosts from the cluster to measure scheduler efficiency.
SPECvirt Datacenter 2021 uses a five-workload benchmark design: a departmental mail server, a departmental web server, a departmental collaboration server, an OLTP database workload driven by HammerDB, and a big-data analytics workload based on BigBench.
All of these workloads drive pre-defined, dynamic loads against sets of virtualized machines.
The top bar shows the measured SPECvirt Datacenter 2021 benchmark result and gives some general information regarding this test run.
The headline of the performance report includes two fields. The first field displays the System Under Test (SUT) hosts' hardware vendor and model name, along with the SUT's hypervisor vendor and product name. The second field displays the SPECvirt™ Datacenter-2021 metric; if the current result does not pass the validity and QoS checks implemented in the benchmark, the metric is overwritten with a "Non-Compliant! Found xxx Issues" indicator.
The name of the organization or individual that ran the test and submitted the result. Generally, this is the name of the license holder.
The date when the test was run. This value is automatically supplied by the benchmark software; the time reported by the system under test is recorded in the raw result file.
The SPEC license number of the organization or individual that ran the benchmark.
The version of the benchmark template used for the result. This is usually the version of the SPECvirt Datacenter 2021 benchmark.
The hypervisor-specific set of tools used for the result, as defined in the $virtVendor field in the Control.config. The value of this field will also match a subdirectory under ${benchmark home}/config/workloads/specvirt/HV_Operations/ on the svdc-director VM where the toolset resides.
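For illustration only, suppose a result uses a hypervisor toolset named "myHV" (a placeholder name, not an actual supported toolset; the exact Control.config syntax may also differ):

Example:
  virtVendor = "myHV"
  toolset directory: ${benchmark home}/config/workloads/specvirt/HV_Operations/myHV/

The reported field would then show "myHV", matching both the Control.config entry and the toolset subdirectory on the svdc-director VM.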
This field in the Top Bar section provides links to the different performance-relevant sections of the report: Performance Summary, Performance Details, and Errors.
This field in the Top Bar section provides links to the different SUT configuration sections of the report: Physical Configuration and Virtual Configuration.
This field in the Top Bar section provides links to the different Notes sections of the report: Hardware Notes, Software Notes, Client Driver Notes, and Other Notes.
Any inconsistencies with the run and reporting rules that cause a failure of one of the validity checks implemented in the report generation software will be reported in the "Headline" section, and the result will be stamped with an "Invalid" watermark. More detailed explanations of the issues will be reported in the "Compliance Errors" section. If there are any special waivers or other comments from the SPEC editor, those will also be listed here.
This section provides a summary of the results for each tile's workloads, and metrics for the overall benchmark.
The performance table will report summary performance information for each tile run during the benchmark. The fields reported are:
Note that if a partial tile was used in Phase 3 then any workloads not run will have a value of "NA" reported in that workload's field for the final tile.
The total score per cluster is the sum of all individual "Per-Tile Score" values shown in the "Performance" table.
The total score per host is the "Total Score per Cluster" divided by the "Number of Hosts". This value, along with the "Number of Hosts" is used in the benchmark metric.
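As a worked illustration with hypothetical numbers: if the "Performance" table lists 10 tiles whose "Per-Tile Score" values sum to 20.0, and the SUT uses 4 hosts, then:

Example:
  Total Score per Cluster = 20.0
  Score per Host = 20.0 / 4 = 5.0

The benchmark metric is then formed from this Score per Host (5.0) together with the Number of Hosts (4).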
The total number of SUT VMs used in the benchmark.
The total number of SUT hosts used in the benchmark. This value, along with the "Score per Host", is used in the benchmark metric.
The number of Tiles used in Phase 1. This value may include a decimal if a partial tile was used in Phase 1. Note that this is also the number of tiles used in Phase 2.
The number of Tiles used in Phase 3. This value may include a decimal if a partial tile was used in Phase 3.
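As a hypothetical example: a result that ran 6 full tiles in Phases 1 and 2 and added a partial tile in Phase 3 would report 6 for the Phase 1 value and a fractional Phase 3 value such as 6.5; the exact fraction depends on which of the final tile's workloads were run.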
The following section of the report file provides details on any compliance issues found during the error checking performed by the end-of-run result and report generation process. If no issues were discovered, the message "No Validation Errors Found" is displayed.
This section displays any QoS errors reported. An example of this type of error is a query cycle time for one of the BigBench workloads exceeding 3600 seconds.
This section displays any errors caused by unexpected changes to values of fields in the raw file. An example of this type of error is modification of "fixed parameters" made to the raw file after the benchmark is run.
This section displays any errors detected that do not comply with the requirements for a valid submission. An example of this type of error is an invalid number of SUT hosts, i.e. not a multiple of 4 hosts. Another example is a change in a fixed parameter like "runTime" prior to the start of the run.
The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported benchmark with the level of detail required to reproduce this result.
The following section of the report file describes the Virtualization product used on the SUT for this result.
Vendor of the virtualization product.
The name of the virtualization product, including its version number.
The following section of the report describes the SUT's host compute nodes.
This subsection describes the details of the host servers.
The number of SUT hosts used for the benchmark.
Server vendor of the SUT hosts.
Model name of the servers used as SUT hosts.
Name of the processor used in each SUT server.
Maximum speed (in MHz) of the processor used in each SUT server.
Number of cores, chips, cores per chip, and threads per core in each SUT server.
Total memory -- including units -- and number and type of DIMMs used in each SUT server. The recommended format for describing the types of DIMMs is described here:
DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m
Example:
8 x 16 GB 2Rx4 PC4-2133P-R
Where:
x denotes the multiplication specifier
gg ss = capacity and units of each DIMM: 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, etc.
eR = number of ranks:
  1R = 1 rank of DDR SDRAM installed
  2R = 2 ranks
  4R = 4 ranks
xff = device organization (data width):
  x4 = x4 organization (4 DQ lines per SDRAM)
  x8 = x8 organization
  x16 = x16 organization
PC4 = DDR4 SDRAM
aa = speed grade (latency timings):
  J = 10-10-10
  K = 11-11-11
  L = 12-12-12
  M = 13-13-13
  N = 14-14-14
  P = 15-15-15
  R = 16-16-16
  U = 18-18-18
m = module type:
  E = Unbuffered DIMM ("UDIMM"), with ECC (x72 bit module data bus)
  L = Load Reduced DIMM ("LRDIMM")
  R = Registered DIMM ("RDIMM")
  S = Small Outline DIMM ("SO-DIMM")
  U = Unbuffered DIMM ("UDIMM"), no ECC (x64 bit module data bus)
  T = Unbuffered 72-bit small outline DIMM ("72b-SO-DIMM")
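Decoding the example above using this format: 8 DIMMs of 16 GB each, dual-rank (2R), x4 organization, DDR4 (PC4) at a 2133 speed rating with speed grade P (15-15-15 latencies), in a Registered DIMM (R) module.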
Operating system installed on each SUT server.
File system used by OS on each SUT server.
Brief description of any other hardware installed in each SUT server.
Brief description of any other software installed on each SUT server.
This subsection describes the availability dates for different SUT components.
Latest availability date for the SUT hardware components, in MMM-YYYY format.
Latest availability date for the SUT host virtualization software components, in MMM-YYYY format.
Latest availability date for any other SUT components in MMM-YYYY format.
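Example: a component that became generally available in April 2021 is reported as Apr-2021.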
This subsection describes the details of the internal networking in the SUT servers.
Number and name of network adapters installed in each SUT server.
Number of network ports present in each SUT server.
Number of network ports used in each SUT server.
High-level summary of the network type used by the network ports in each SUT server.
Configured speed of the network ports in each SUT server.
This subsection describes the details of the internal storage in the SUT servers.
Number and name of storage controllers installed in each SUT server.
Number, size, and description of disks installed in each SUT server not used for common SUT storage.
RAID level used on disks installed in each SUT server.
Is a UPS required to meet the stable storage requirement on each SUT server?
This section describes the details of the SUT's hypervisor management server.
Vendor of management node application.
Name and version of management node application.
Is the management node a VM or a physical server?
Other software running on the management node.
Vendor of server running management node application. If management node is a VM, use "N/A".
Model of server running management node application. If management node is a VM, use "N/A".
Processor used in management node. If management node is a VM, use processor name used by server hosting VM.
Maximum speed of processor used in management node, in MHz. If management node is a VM, use speed of processor used by server hosting VM.
Total number of processor cores in management node. If management node is a VM, use number of vCPUs configured for VM.
Total amount of memory in management node. If management node is a VM, use amount of memory assigned to VM.
Number and type of network adapters in management node. If management node is a VM, use number and type of vNICs used for VM.
Other hardware used in management node. If management node is a VM, describe additional non-default virtual hardware configured for VM.
This section describes the details of the SUT's common storage used for hosting the workload VMs.
Vendor of hardware used for SUT's common storage. For example, name of vendor for external fibre channel enclosure.
Product name and model of hardware used for SUT's common storage.
Classification of SUT's common storage. For example, "16Gb fibre channel SAN" or "VMware vSAN".
Product name and description of storage hardware used to connect SUT common storage to SUT hosts.
Number and description of disks used for SUT's common storage.
Number and description of external disk controllers used for SUT's common storage.
Filesystem used for SUT's common storage.
RAID level used by disks configured for SUT's common storage.
This section describes the details of the SUT's external networking.
Product name and description of network hardware used to connect SUT workload VMs and client drivers.
Role of network interconnect. For example, "hypervisor management" or "SUT network communication".
Configured speed of SUT interconnect.
This section describes the details of the systems hosting the client driver VMs.
Operating system running on client hosts.
Total number of host systems used for client drivers.
Hardware vendor of client host systems.
Hardware model name of client host systems.
Additional hardware installed in client host systems.
Additional software installed on client host systems.
Name of Processor used in client host systems.
Maximum speed of processors used in client host systems.
Total number of cores present in client host systems.
Total amount of memory and DIMM type present in client host systems.
Number and name of network adapters present in client host systems.
The following section of the report file describes the configuration of the workload VMs. For each workload VM type, the following items are reported:
This subsection describes configuration for the departmental workload VMs.
This subsection describes configuration for the HammerDB workload VMs.
This subsection describes configuration for the BigBench workload VMs.
The following section of the report file contains detailed notes on performance tuning made on the SUT not captured in the previous sections.
It is required to report whether certain security vulnerabilities have been mitigated in the SUT (HW and/or OS). As of this writing, the disclosure takes the bulleted form below (choosing either "Yes" or "No" in response to the statement that follows it). See the public version of this document for updates.
These statements should be reported at the top of the Notes section. Use the NOTES[] fields in Testbed.config to populate this subsection.
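For illustration only, the security mitigation disclosure might be supplied through the NOTES[] fields along the following lines (the indices, exact syntax, and required wording are placeholders; consult the current run rules for the authoritative text):

Example:
  NOTES[0] = Yes: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  NOTES[1] = Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  NOTES[2] = Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.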
This subsection provides detailed notes for the hardware configuration.
Notes on configuration details of the SUT's compute nodes. Use the HOST.HW.NOTES[] fields in Testbed.config to populate this subsection.
Notes on configuration details of the SUT's shared storage. Use the STORAGE.NOTES[] fields in Testbed.config to populate this subsection.
Notes on configuration details of the SUT's networking. Use the NETWORK.NOTES[] fields in Testbed.config to populate this subsection.
Notes on configuration details of the SUT's management node. Use the MANAGER.HW.NOTES[] fields in Testbed.config to populate this subsection.
This subsection provides detailed notes for the software configuration.
Notes on configuration details of the SUT's compute nodes. Use the HOST.SW.NOTES[] fields in Testbed.config to populate this subsection.
Notes on configuration details of the SUT's management node. Use the MANAGER.SW.NOTES[] fields in Testbed.config to populate this subsection.
This subsection provides detailed notes for the client driver configuration. Use the CLIENT.NOTES[] fields in Testbed.config to populate this subsection.
This subsection provides detailed notes on any performance-relevant optimizations not covered in the other sections. Use the OTHER.NOTES[] fields in Testbed.config to populate this section.
The following section reports detailed performance information about the result. This information includes throughput, rate, and QoS metrics for each tile's workloads. The number of VM migrations made during the benchmark is also reported.
This section shows the total number of VM migrations made on the SUT during the benchmark's measurement interval.
This section reports the detailed performance metrics for tile #[x]. There will be an associated "Tile [x]" section for each tile used in the result.
Note that for the three departmental workloads, the information reported is of the same form and includes the following fields:
This subsection describes detailed performance statistics for this tile's mailserver workload.
This subsection describes detailed performance statistics for this tile's webserver workload.
This subsection describes detailed performance statistics for this tile's collaboration server workload. Note that the statistics for each collaboration server VM are reported separately.
This subsection describes detailed performance statistics for this tile's HammerDB workload.
Total time the HammerDB workload ran during measurement interval.
Total number of New Order (NO) transactions completed by the HammerDB workload during the measurement interval.
Average rate of NO transactions achieved by the HammerDB workload during the measurement interval.
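As a worked illustration with hypothetical numbers: a tile that completes 720,000 NO transactions over a 7,200-second measurement interval has an average rate of 720,000 / 7,200 = 100 NO transactions per second.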
This subsection describes detailed performance statistics for this tile's BigBench workload.
Total time the BigBench workload ran during measurement interval.
Total number of queries completed by the BigBench workload during measurement interval.
Iteration number of the set of 11 queries run by the BigBench workload during measurement interval. Once all 11 queries are completed, another cycle will be started, up to a maximum of 6 cycles.
Number of BigBench queries completed for this query cycle.
Total duration of queries completed during this query cycle.
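As a hypothetical illustration: a BigBench workload that finishes its first query cycle (11 queries) in 3,100 seconds and its second in 3,250 seconds stays within the 3,600-second per-cycle QoS limit described under "QoS Errors"; if the measurement interval ends partway through a third cycle, that cycle reports only the number of queries it completed.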
Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright 2021 Standard Performance Evaluation Corporation (SPEC).
All Rights Reserved.