SPEC SERT 2 Result File Fields
SVN Revision: 2551
Last updated: 2017-11-07
ABSTRACT
This document describes the various fields in the SPEC SERT 2 result disclosure. The different result files are available in HTML and TXT format.
- Main Report File "results.html/.txt"
- includes a summary result chart and table, the system under test description, the controller system description, and the measurement devices description.
- Details Report File "results-details.html/.txt"
- shows the information from the main report file plus detailed performance and power values for each interval of all worklets.
(To check for possible updates to this document, please see https://www.spec.org/sert2/SERT-resultfilefields.html)
Overview
Selecting one of the following will take you to the detailed table of contents for that section:
1. SPEC SERT
2. Main Report File
3. Top Bar
4. Summary
5. System Under Test
6. System Under Test Notes
7. Details Report File
8. Worklet Summary
9. Measurement Devices
10. Aggregate Electrical and Environmental Data
11. Worklet Performance and Power Details
Detailed Contents
1. SPEC SERT
1.1 Test harness - Chauffeur
1.1.1 SERT Director
1.1.2 SERT Host
1.1.3 SERT Client
1.1.4 SERT Reporter
1.1.5 SERT Graphical User Interface
1.2 Workloads
1.3 The Power and Temperature Daemon
1.4 Result Validation and Report Generation
1.5 References
2. Main Report File
3. Top bar
3.1 Test sponsor
3.2 Software Availability
3.3 Tested by
3.4 Hardware/Firmware Availability
3.5 SPEC license #
3.6 System Source
3.7 Test Location
3.8 Test Date
3.9 SERT 2 Efficiency Score
4. Summary
4.1 Summary Chart
4.2 Aggregate SUT Data
4.2.1 Model
4.2.2 Number of Nodes
4.2.3 CPU Name
4.2.4 Total Number of Processors
4.2.5 Total Number of Cores
4.2.6 Total Number of Threads
4.2.7 Total Physical Memory
4.2.8 Total number of memory DIMMs
4.2.9 Total Number of Storage Devices
4.3 Workload Efficiency Score
4.4 Idle Watts
5. System Under Test
5.1 Shared Hardware
5.1.1 Enclosure
5.1.2 Form Factor
5.1.3 Server Blade Bays (populated / available)
5.1.4 Additional Hardware
5.1.5 Management Firmware Version
5.1.6 Power Supply Quantity (active / populated / bays)
5.1.7 Power Supply Details
5.1.8 Power Supply Operating Mode
5.1.9 Available Power Supply Modes
5.1.10 Network Switches (active / populated / bays)
5.1.11 Network Switch
5.2 Hardware per Node
5.2.1 Hardware Vendor
5.2.2 Model
5.2.3 Form Factor
5.2.4 CPU Name
5.2.5 CPU Frequency (MHz)
5.2.6 Number of CPU Sockets (populated / available)
5.2.7 CPU(s) Enabled
5.2.8 Number of NUMA Nodes
5.2.9 Hardware Threads / Core
5.2.10 Primary Cache
5.2.11 Secondary Cache
5.2.12 Tertiary Cache
5.2.13 Additional Cache
5.2.14 Additional CPU Characteristics
5.2.15 Total Memory Available to OS
5.2.16 Total Memory Amount (populated / maximum)
5.2.17 Total Memory Slots (populated / available)
5.2.18 Memory DIMMs
5.2.19 Memory Operating Mode
5.2.20 Power Supply Quantity (active / populated / bays)
5.2.21 Power Supply Details
5.2.22 Power Supply Operating Mode
5.2.23 Available Power Supply Modes
5.2.24 Disk Drive Bays (populated / available)
5.2.25 Disk Drive
5.2.26 Network Interface Cards
5.2.27 Management Controller or Service Processor
5.2.28 Expansion Slots (populated / available)
5.2.29 Optical Drives
5.2.30 Keyboard
5.2.31 Mouse
5.2.32 Monitor
5.2.33 Additional Hardware
5.3 Software per Node
5.3.1 Power Management
5.3.2 Operating System (OS)
5.3.3 OS Version
5.3.4 File System
5.3.5 Additional Software
5.3.6 Boot Firmware Version
5.3.7 Management Firmware Version
5.3.8 JVM Vendor
5.3.9 JVM Version
5.3.10 Client Configuration ID (formerly SERT Client Configuration)
6. System Under Test Notes
7. Details Report File
8. Worklet Summary
8.1 Result Chart
8.2 Result Table
8.2.1 Workload
8.2.2 Worklet
8.2.3 Normalized Peak Performance
8.2.4 Watts at Lowest Load Level
8.2.5 Watts at Highest Load Level
8.2.6 Geometric Mean of Normalized Performance
8.2.7 Geometric Mean of Power (Watts)
8.2.8 Worklet Efficiency Score
9. Measurement Devices
9.1 Power Analyzer
9.1.1 Hardware Vendor
9.1.2 Model
9.1.3 Serial Number
9.1.4 Connectivity
9.1.5 Input Connection
9.1.6 Metrology Institute
9.1.7 Calibration Laboratory
9.1.8 Calibration Label
9.1.9 Date of Calibration
9.1.10 PTDaemon Version
9.1.11 Setup Description
9.2 Temperature Sensor
9.2.1 Hardware Vendor
9.2.2 Model
9.2.3 Driver Version
9.2.4 Connectivity
9.2.5 PTDaemon Version
9.2.6 Sensor Placement
10. Aggregate Electrical and Environmental Data
10.1 Line Standard
10.2 Elevation (m)
10.3 Minimum Temperature (°C)
11. Worklet Performance and Power Details
11.1 Total Clients
11.2 CPU Threads per Client
11.3 Sample Client Command-line
11.4 Efficiency Scores
11.4.1 Load Level
11.4.2 Raw Performance Score
11.4.3 Normalized Performance Score
11.4.4 Average Active Power (W)
11.4.5 Load Level Efficiency Score
11.5 Performance Data
11.5.1 Phase
11.5.2 Interval
11.5.3 Actual Load
11.5.4 Score
11.5.5 Host CV
11.5.6 Client CV
11.5.7 Elapsed Measurement Time (s)
11.5.8 Transaction
11.5.9 Transaction Count
11.5.10 Transaction Time (s)
11.6 Power Data
11.6.1 Phase
11.6.2 Interval
11.6.3 Analyzer
11.6.4 Average Voltage (V)
11.6.5 Average Current (A)
11.6.6 Current Range Setting
11.6.7 Average Power Factor
11.6.8 Average Active Power (W)
11.6.9 Power Measurement Uncertainty (%)
11.6.10 Minimum Temperature (°C)
1. SPEC SERT
SPEC SERT is the next generation SPEC tool for evaluating the power and performance of server class computers.
The tool consists of several software modules:
- Test harness - Chauffeur
- Workloads
- Power and Temperature Daemon (PTDaemon)
These modules work together in real-time to collect server power consumption and performance data by exercising the System Under Test (SUT) with predefined workloads.
1.1 Test harness - Chauffeur
The test harness, called Chauffeur, handles the logistical side of measuring and recording power data, along with controlling the software installed on the SUT and the controller system itself.
1.1.1 SERT Director
The Director reads test parameters and environment description information from the SERT configuration files and controls the execution of the test based on this information. It is the central control instance of the SERT and communicates with the other software modules described below via TCP/IP. It also collects the result data from the worklet instances and stores it in the basic result file "results.xml".
1.1.2 SERT Host
This module is the main SERT module on the System Under Test (SUT). It must be launched manually by the tester; it then starts the client modules that execute the workloads under the control of the Director.
1.1.3 SERT Client
One or more client instances each executing its own Java Virtual Machine (JVM) are started by the Host for every worklet. Each Client executes worklet code to stress the SUT and reports the performance data back to the Director for each phase of the test.
1.1.4 SERT Reporter
The Reporter gathers the configuration, environmental, power and performance data from the "results.xml" file after a run is complete and compiles it into HTML and text or CSV format result files. It is started automatically by the Director after all workloads have finished, to create the default set of report files. Alternatively, it can be started manually to generate special report files from the information in the basic result file "results.xml".
1.1.5 SERT Graphical User Interface
A Graphical User Interface (GUI) that facilitates configuration and setup of test runs, allows real-time monitoring of test runs, and supports reviewing the results is part of the test package. The SERT GUI leads the user through the steps of detecting or entering the hardware and software configuration, setting up a trial run or a valid test, displaying result reports, and other functions common to the testing environment.
1.2 Workloads
The design goal for the SERT suite is to include all major aspects of server architecture, thus avoiding any preference for specific architectural features which might make a server look good under one workload and show disadvantages with another workload. The SERT workloads take advantage of different server capabilities by using various load patterns, which are intended to stress all major components of a server uniformly.
The SERT workloads consist of several different worklets, each stressing specific capabilities of a server. This approach furthermore supports generating individual efficiency scores for the different server components.
The worklets are built on synthetic tests which stress the different server components. Currently there are worklets for the following major server components:
- CPU
- Memory
- Storage IO
For a detailed description of workloads and worklets please read the SERT Design Document.
1.3 The Power and Temperature Daemon
The Power and Temperature Daemon (PTDaemon) is a single executable program that communicates with a power analyzer or a temperature sensor via the server's native RS-232 port, USB port, or additionally installed interface cards, e.g. GPIB. It reports the power consumption or temperature readings to the Director via a TCP/IP socket connection. It supports a variety of RS-232, GPIB and USB interface command sets for a variety of power analyzers and temperature sensors. PTDaemon is the only SERT software module that is not Java based. Although it can quite easily be set up and run on a server other than the controller, it will typically reside on the controller.
1.4 Result Validation and Report Generation
At the beginning of each run, the test configuration parameters are logged in order to be available for later conformance checks. Warnings are displayed for any non-compliant properties and printed in the final report; however, the test will run to completion, producing a report that is not valid for publication.
At the end of a test run the report generator module is called to generate the report files described here from the data given in the configuration files and collected during the test run. Basic validity checks are performed to ensure that interval length, target load throughput, temperature, etc. are within the defined limits. For more information see the section "Validation / Verification" in the SERT Design Document.
1.5 References
More detailed information can be found in the documents listed below. For the latest versions, please consult SPEC's website.
- Run and Reporting Rules: https://www.spec.org/sert2/SERT-runrules.pdf
- User Guide: https://www.spec.org/sert2/SERT-userguide.pdf
- Design Document: https://www.spec.org/sert2/SERT-designdocument.pdf
- Measurement Setup Guide: https://www.spec.org/power/docs/SPECpower-Measurement_Setup_Guide.pdf
- Methodology: https://www.spec.org/power/docs/SPECpower-Power_and_Performance_Methodology.pdf
In this document, all references to configurable parameters or result file fields are printed in different colors, using the names from the configuration and result files:
Parameters from "test-environment.xml" are shown in red: <TestInformation><TestSponsor>
Parameters from "config-*.xml" or "*-configurations.xml" are shown in light purple: <suite><definitions><launch-definition><num-clients>
Parameters from "results.xml" are shown in green: <TestEnvironment><configuration><suite><client-configuration id>
The following configuration files are delivered with the test kit:
- config-all.xml
- The main SERT configuration file including the configurable parameters defining the execution of a SERT test run, e.g. the specification of storage test devices, durations of test intervals, enabling or disabling of workloads and/or worklets, etc. Before editing this configuration file you should carefully read the corresponding sections in the SERT User Guide, specifically section 6.1 "SERT Configuration and Start Procedure", see: https://www.spec.org/sert2/SERT-userguide.pdf.
- config-all-expert.xml
- An alternate version of the main SERT configuration file including extended configuration capabilities which can be used for non-compliant modifications by experts. Please note that using this version of the main SERT configuration file will typically result in non-compliant SERT runs. This file is meant for research usage by experienced SERT users.
- config-rangeSetting.xml
- This file includes definitions for an abbreviated SERT test with selected worklets and load levels only. It is intended to measure the maximum current for worklets and load levels in order to specify the amps range settings for the full SERT test. A description of how to use this configuration file is given in the SERT User Guide section "Power Analyzer Range Settings", see: https://www.spec.org/sert2/SERT-userguide.pdf.
- config-development.xml
- This file includes definitions for new worklets currently under development in addition to the default set of SERT worklets. Please note that using this version of the main SERT configuration file will result in non-compliant SERT runs. It is intended for testing new worklets during the development phase. Currently configuration definitions for 2 experimental worklets, cpu_sleep and mem_flood_random, are available for testing.
- listeners.xml
- This file includes the interface and parameter definitions for possibly multiple PTDaemon instances and the Graphical User Interface (GUI). In particular, the power analyzer range settings for the different worklets and load levels must be specified here.
- test-environment.xml
- This file includes a complete description of the SUT hardware and software. If the SERT is started via the GUI, some of the fields can be filled automatically from information found by the discovery scripts. The other fields must be edited manually by the tester.
- client-configurations-NNN.xml
- This file was introduced with the SERT V1.0.1 release. It specifies
predefined sets of JVM options for the different architecture / operating
system / JVM combinations to be used for running the tests. Starting with
SERT V1.1.0 a version number is appended to the file name.
The authoritative versions of this file are located on the SPEC web site. For the current SERT version: https://www.spec.org/sert2/client-configurations-2.0.xml
A default version of this file is included in the SERT kit. The user must ensure that this local copy is up to date. If the local copy is outdated the latest version has to be downloaded from the SPEC web site location given above. The SERTUI provides support for downloading the current version of this file from the SPEC web site to the local SERT folder.
- obsolete-client-configurations.xml
- This file was introduced in SERT V1.1.0. Configurations in this file are obsolete and should not be used. They are retained here for the purpose of validating old results. Any modifications to this file will be detected and result in being unable to validate results.
- custom-configurations.xml
- Standard client configuration information supplied by SPEC is defined in the file "client-configurations-NNN.xml". This "custom-configurations.xml" file can contain custom configuration data for platforms that are not yet supported by SERT or for research and development purposes. Use of custom configuration data will result in non-compliant SERT runs. Configurations defined in this file will not be selected automatically in the SERT UI, but can be chosen manually by their id. For command-line runs, the client-configuration can be referenced by ID in "config-all.xml".
2. Main Report File
This section gives an overview of the information and result fields in the main report file "results.html/.txt".
The report file headline reads SERT™ Report. Previous SERT releases supported a separate category of results for 32-bit environments. The report file headline for such 32-bit configurations showed the addendum: (32-bit category). 32-bit results are no longer supported by the current release.
The predefined default values of the parameters in the "test-environment.xml" file are intentionally incorrect. To highlight this, all parameters are defined with a leading underscore. The Reporter recognizes this and highlights these fields with a yellow background, except for the system name in the headline of the general information table.
3. Top bar
The top bar gives general information regarding this test run.
The top bar header shows the name of the hardware vendor (see Hardware Vendor) plus the model name (see Model), potentially followed by a "(Historical System)" designation for systems which are no longer commercially available.
3.1 Test sponsor
The name of the organization or individual that sponsored the test.
Generally, this is the name of the license holder.
<TestSponsor>
3.2 Software Availability
The date when all the software necessary to run the result
is generally available.
<Software><Availability>
The date must be specified in the format: YYYY-MM
For example, if the operating system is available in 2013-02, but the JVM is not available until 2013-04, then the software availability date is 2013-04 (unless some other component pushes it out farther).
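The rule is simply that the latest component date wins. A minimal sketch of this logic, using hypothetical component names and dates (not taken from any real report):

    # Illustrative only: the overall availability date is the latest
    # YYYY-MM date among all required software components.
    components = {
        "Operating System": "2013-02",  # hypothetical dates
        "JVM": "2013-04",
    }
    # YYYY-MM strings sort chronologically, so max() returns the latest date.
    software_availability = max(components.values())
    print(software_availability)  # 2013-04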
3.3 Tested by
The name of the organization or individual that ran the test and submitted
the result.
<TestedBy>
3.4 Hardware/Firmware Availability
The date when all the hardware and related firmware modules
necessary to run the result are generally available.
<Hardware><Availability>
The date must be specified in the format: YYYY-MM
For example, if the CPU is available in 2013-02, but the Firmware version used for the test is not available until 2013-04, then the hardware availability date is 2013-04 (unless some other component pushes it out farther).
For systems which are no longer commercially available the original availability date must be specified here and the model name must be marked with the supplement "(Historical System)" (see Model).
Please see OSG Policy section 2.3.5 on SUT Availability for Historical Systems: https://www.spec.org/osg/policy.html#s2.3.5
3.5 SPEC license #
The SPEC license number of the organization or individual that ran the
test.
<SpecLicense>
3.6 System Source
Single Supplier or Parts Built.
<SystemUnderTest><SystemSource>
- Single Supplier
- a SUT configuration where all hardware is provided by a single supplier. For "Single Supplier" systems, all part description fields in the reports which require detailed information to identify the parts should include the system vendor name and the system vendor order number for the part.
- Parts Built
- a SUT configuration where hardware is provided by multiple suppliers. A "Parts Built" system disclosure must include enough detail to procure and reproduce all aspects of the submission, including performance and power. For "Parts Built" systems all part description fields in the reports which require detailed information to identify the parts must include the part's manufacturer name and the manufacturer's part number to describe the devices.
3.7 Test Location
The name of the city, state, and country where the test took place. If there are
installations in multiple geographic locations, that must also be listed in
this field.
<Location>
3.8 Test Date
The date on which the test for this result was performed. This information is provided automatically by the test software based on the timer function of the Controller system.
3.9 SERT 2 Efficiency Score
This field was introduced in SERT V2.0.0 and renamed to "SERT 2 Efficiency Score" in SERT V2.0.1.
It is intended for use in Energy Efficiency Regulatory Programs of government agencies around the world. This score is calculated as the weighted geometric mean of the individual workload efficiency scores of the CPU, Storage and Memory workloads (see also Workload Efficiency Score).
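The exact workload weights are defined in the SERT Metric Document; the sketch below only illustrates the weighted geometric mean calculation and uses placeholder weights and example scores, not official values.

    # Illustrative weighted geometric mean of workload efficiency scores.
    # Weights and scores are placeholders; the official weights are defined
    # in the SERT Metric Document.
    import math

    workload_scores = {"CPU": 12.0, "Memory": 9.0, "Storage": 20.0}  # example values
    weights = {"CPU": 0.65, "Memory": 0.30, "Storage": 0.05}         # placeholder weights

    sert2_efficiency_score = math.exp(
        sum(weights[w] * math.log(workload_scores[w]) for w in workload_scores))
    print(round(sert2_efficiency_score, 2))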
Important note:
In SERT V2.0.0 the weighted geometric mean formula for calculating the composite efficiency score used incorrect weights for the CPU and Memory workload efficiency scores, resulting in an incorrect SERT Efficiency Score being printed in SERT V2.0.0 result reports.
This problem is corrected in the new SERT V2.0.1 release. In order to distinguish old (incorrect) results from new (correct) results the score name has changed from "SERT Efficiency Score" to "SERT 2 Efficiency Score".
Though the overall SERT efficiency score is incorrect in SERT V2.0.0 result reports, the measured worklet and workload efficiency scores of this SERT version are correct. So existing SERT V2.0.0 results can be converted to valid SERT V2.0.1 results based on the measurement data in the "results.xml" file. There's no need to rerun measurements with the new SERT V2.0.1 version.
SPEC provides a tool for this conversion process described here:
https://www.spec.org/sert2/sert_patches/sert-conversion.html
Follow the guidelines given on this web page for generating SERT V2.0.1 result reports from your existing SERT V2.0.0 "results.xml" file.
The tool converts an existing SERT V2.0.0 "results.xml" file to a new "results.updated.xml" file including a corrected overall efficiency score.
Please note that the SERT version number in this file is set to 2.0.0.1 in order to distinguish updated results from native SERT V2.0.1 results.
Valid SERT V2.0.1 report files can then be generated from the "results.updated.xml" file following the instructions in section 6.2 "Generate report files with the reporter scripts" of the SERT User Guide.
A more detailed description of the SERT metric is given in the SERT Metric Document.
4. Summary
With SERT V2.0.0 an overall server efficiency score was introduced (see also SERT 2 Efficiency Score).
4.1 Summary Chart
This chart was introduced in SERT V2.0.0.
A chart with a graphical representation of the individual workload efficiency scores and the overall server efficiency score is shown in this section.
Main attributes of the summary chart are:
- Header
- The model name of the System Under Test (SUT). It is included in the chart to explicitly identify this result and to ensure that the summary chart is always shown together with the tested SUT.
- Y-axis
- Vertical bars each representing one workload.
- X-axis
- A linear scale of numbers used for showing the workload and overall
efficiency scores.
Note: The scaling of the data range on the x-axis can be different depending on the maximum value to be displayed. Therefore individual summary charts from different SERT result reports may not be directly comparable.
- SERT 2 Efficiency Score
- The vertical green bar represents the overall SERT efficiency score. The SERT efficiency score is also printed here again as a numerical value (see also SERT 2 Efficiency Score).
Note: For invalid results the chart won't show any data.
4.2 Aggregate SUT Data
Aggregated values for several system configuration parameters are reported in this table.
4.2.1 Model
This field was introduced in SERT V2.0.0.
The model name identifying the system under test. The reported value is derived by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.2 Number of Nodes
The total number of all nodes used for running the test. The reported values are calculated by the test software from the information given in the configuration files and by the test startup scripts.
4.2.3 CPU Name
This field was introduced in SERT V2.0.0.
Name of the tested CPU model. The reported value is derived by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.4 Total Number of Processors
The number of processor chips per node. For multi-node results, the chips of all nodes are added up to give the reported total. The reported values are calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.5 Total Number of Cores
The total number of all cores used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.6 Total Number of Threads
The total number of all hardware threads used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.7 Total Physical Memory
The total memory size for all systems used to run the test. The reported values are calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.8 Total number of memory DIMMs
This field was introduced in SERT V2.0.0.
The total number of DIMMs included in the tested SUT configuration. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.2.9 Total Number of Storage Devices
The total number of all storage devices used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
4.3 Workload Efficiency Score
The efficiency score for each workload is calculated from the efficiency scores of all its worklets as:
Workload Efficiency Score = Geometric Mean (Efficiency Score_Worklet 1...n)
Efficiency scores for the different workloads can be extremely dissimilar due to configuration differences which may be favorable for some workloads only, e.g. additional DIMMs for the Memory workload or disk drives for the Storage workload. Typically these changes wouldn't influence the CPU workload score perceivably (see also Worklet Efficiency Score).
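As a minimal illustration of the formula above, with made-up worklet scores rather than real measurement data:

    # Geometric mean of the worklet efficiency scores of one workload
    # (example values only).
    import math

    worklet_scores = [14.2, 11.8, 9.5]
    workload_efficiency_score = math.prod(worklet_scores) ** (1.0 / len(worklet_scores))
    print(round(workload_efficiency_score, 2))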
4.4 Idle Watts
The average-watts measured for the Idle worklet test interval (see also Watts at Lowest Load Level). By definition there is no performance value for the Idle worklet and therefore also no efficiency score can be calculated.
The Idle power is NOT included in the efficiency score calculation of the 3 workloads.
5. System Under Test
The following section of the report file describes the hardware and the software of the System Under Test (SUT) used to run the reported SERT results with the level of detail required to reproduce this result.
5.1 Shared Hardware
A table including the description of the shared hardware components. This table is printed for multi-node results only and is not included in single-node report files.
5.1.1 Enclosure
The model name identifying the enclosure housing the tested nodes.
<SystemUnderTest><SharedHardware><Enclosure>
5.1.2 Form Factor
The full SUT form factor (including all nodes and any shared hardware).
<SystemUnderTest><SharedHardware><FormFactor>
For rack-mounted systems, specify the number of rack units. For other types of enclosures, specify "Tower" or "Other".
5.1.3 Server Blade Bays (populated / available)
This field is divided into 2 parts separated by a slash. The first part
specifies the number of bays populated with a compute node or server blade.
The second part shows the number of available bays for server blades in the
enclosure.
<SystemUnderTest><SharedHardware><BladeBays><Populated>
<SystemUnderTest><SharedHardware><BladeBays><Available>
5.1.4 Additional Hardware
Any additional shared equipment added to improve performance and required to
achieve the reported scores.
<SharedHardware><Other><OtherHardware>
For each additional type of hardware component the quantity and a description need to be specified.
5.1.5 Management Firmware Version
A version number or string identifying the management firmware running on
the SUT enclosure or "None" if no management controller was
installed.
<SharedHardware><Firmware><Management><Version>
5.1.6 Power Supply Quantity (active / populated / bays)
This field is divided into 3 parts separated by slashes.
The first part shows the number of active power supplies, which might be
lower than the next number, if some power supplies are in standby mode and
used in case of failure only.
<SharedHardware><PowerSupplies><PowerSupply><Active>
The second part gives the number of bays populated with a power supply.
<SharedHardware><PowerSupplies><PowerSupply><Populated>
The third part describes the number of power supply bays available in the
SUT enclosure.
<SharedHardware><PowerSupplies><Bays>
5.1.7 Power Supply Details
The number and watts rating of this power supply unit (PSU) plus the supplier and the part number to identify it.
In the case of a "Parts Built" system (see:
System Source) the manufacturer name and the
part number of the PSU must be specified here.
<SharedHardware><PowerSupplies><PowerSupply><Active>
<SharedHardware><PowerSupplies><PowerSupply><RatingInWatts>
<SharedHardware><PowerSupplies><PowerSupply><Description>
There may be multiple lines in this field if different types of PSUs have been used for this test, one for each PSU type.
5.1.8 Power Supply Operating Mode
Power supply unit (PSU) operating mode active for running this test. Must be
one of the available modes as described in the field
Available Power Supply Modes.
<SharedHardware><PowerSupplies><OperatingMode>
5.1.9 Available Power Supply Modes
The available power supply unit (PSU) modes depend on the capabilities of
the tested server hardware and firmware.
<SharedHardware><PowerSupplies><AvailableModes><Mode>
Typical power supply modes are:
- Standard
- All populated PSUs are active
- PSU Redundancy
- N + M Spare PSU
For example: 2 + 1 Spare PSU
Two PSUs are active, the third PSU is inactive in Standby mode. System operation is guaranteed even if one of the three PSUs fails.
- AC Redundancy
- N + N (2 AC sources)
For example: 2 + 2 (2 AC sources)
2 PSUs are active, the other two PSUs are inactive in Standby mode. 2 of the 4 PSUs are each connected to a separate AC source. This ensures that the system can continue operation even if a power line or a single PSU fails.
5.1.10 Network Switches (active / populated / bays)
This field is divided into 3 parts separated by slashes.
The first part shows the number of active network switches, which might be
lower than the next number, if some network switches are in standby mode and
not used for running the test.
<SharedHardware><NetworkSwitches><NetworkSwitch><Active>
The second part gives the number of bays populated with a network
switch.
<SharedHardware><NetworkSwitches><NetworkSwitch><Populated>
The third part describes the number of network switch bays available in the
SUT enclosure.
<SharedHardware><NetworkSwitches><Bays>
"N/A" if no network switch was used.
5.1.11 Network Switch
The number, a description (manufacturer and model name), and details
(special settings, etc.) of the network switch(es) used for this
test.
<SharedHardware><NetworkSwitches><NetworkSwitch><Active>
<SharedHardware><NetworkSwitches><NetworkSwitch><Description>
<SharedHardware><NetworkSwitches><NetworkSwitch><Details>
"N/A" if no network switch was used.
5.2 Hardware per Node
This section describes in detail the different hardware components of the system under test which are important to achieve the reported result.
5.2.1 Hardware Vendor
Company which sells the hardware.
<SystemUnderTest><Node><Hardware><Vendor>
5.2.2 Model
The model name identifying the system under test.
<SystemUnderTest><Node><Hardware><Model>
Systems which are no longer commercially available should be marked with the
supplement "(Historical System)".
<SystemUnderTest><Node><Hardware><Historical>
Please see OSG Policy section 2.3.5 on SUT Availability for Historical Systems https://www.spec.org/osg/policy.html#s2.3.5.
5.2.3 Form Factor
The form factor for this system.
<SystemUnderTest><Node><Hardware><FormFactor>
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".
5.2.4 CPU Name
A manufacturer-determined processor formal name.
<SystemUnderTest><Node><Hardware><CPU><Name>
Trademark or copyright characters must not be included in this string. No additional information is allowed here, e.g. turbo boost frequency or hardware threads.
Examples:
- AMD Opteron 6370P
- Fujitsu SPARC64 X+
- IBM POWER8
- Intel Xeon E5-2660 v3
- Oracle SPARC M7
5.2.5 CPU Frequency (MHz)
The nominal (marked) clock frequency of the CPU, expressed in
megahertz.
<SystemUnderTest><Node><Hardware><CPU><FrequencyMHz>
If the CPU is capable of automatically running the processor core(s) faster
than the nominal frequency and this feature is enabled, then this additional
information must be listed here, at least the maximum frequency and the use
of this feature.
<SystemUnderTest><Node><Hardware><CPU><TurboFrequencyMHz>
<SystemUnderTest><Node><Hardware><CPU><TurboMode>
Furthermore if the enabled/disabled status of this feature is changed from
the default setting this must be documented in the
System Under Test Notes field.
<SystemUnderTest><Notes><Note>
Example:
- 2900 MHz (up to 3600 MHz), turbo mode enabled
5.2.6 Number of CPU Sockets (populated / available)
This field is divided into 2 parts separated by a slash. The first part
gives the number of sockets populated with a CPU chip as used for this SERT
result and the second part the number of available CPU sockets.
<SystemUnderTest><Node><Hardware><CPU><PopulatedSockets>
<SystemUnderTest><Node><Hardware><AvailableSockets>
5.2.7 CPU(s) Enabled
The CPUs that were enabled and active during the test run, displayed as the
number of cores, number of processors, and the number of cores per
processor.
<SystemUnderTest><Node><Hardware><CPU><Cores>
<SystemUnderTest><Node><Hardware><CPU><PopulatedSockets>
<SystemUnderTest><Node><Hardware><CPU><CoresPerChip>
5.2.8 Number of NUMA Nodes
The number of Non-Uniform Memory Access (NUMA) nodes used for this SERT
test. Typically this is equal to the number of populated sockets times 1 or
2 depending on the CPU architecture.
<SystemUnderTest><Node><Hardware><NumaNodes>
5.2.9 Hardware Threads / Core
The total number of active hardware threads for this SERT test and the
number of hardware threads per core given in parentheses.
<SystemUnderTest><Node><Hardware><CPU><HardwareThreadsPerCore>
5.2.10 Primary Cache
Description (size and organization) of the CPU's primary cache. This cache
is also referred to as "L1 cache".
<SystemUnderTest><Node><Hardware><CPU><Cache><Primary>
5.2.11 Secondary Cache
Description (size and organization) of the CPU's secondary cache. This cache
is also referred to as "L2 cache".
<SystemUnderTest><Node><Hardware><CPU><Cache><Secondary>
5.2.12 Tertiary Cache
Description (size and organization) of the CPU's tertiary, or "L3"
cache.
<SystemUnderTest><Node><Hardware><CPU><Cache><Tertiary>
5.2.13 Additional Cache
Description (size and organization) of any other levels of cache
memory.
<SystemUnderTest><Node><Hardware><CPU><Cache><Other>
5.2.14 Additional CPU Characteristics
Additional technical characteristics to help identify the processor.
<SystemUnderTest><Node><Hardware><CPU><OtherCharacteristics>
5.2.15 Total Memory Available to OS
Total memory capacity in GB available to the operating system for task processing. This number is typically slightly lower than the amount of configured physical memory. It is determined automatically by the SERT discovery tools.
For multi-node runs, this is the average memory reported by each host. Prior to SERT 1.1.1 it was the sum of the memory across all hosts.
5.2.16 Total Memory Amount (populated / maximum)
This field is divided into 2 parts separated by a slash. The first part
describes the amount of installed physical memory in GB as used for this
SERT test. The second number gives the maximum possible memory capacity in
GB if all memory slots are populated with the highest capacity DIMMs
available in the SUT.
<SystemUnderTest><Node><Hardware><Memory><SizeMB>
<SystemUnderTest><Node><Hardware><Memory><MaximumSizeMB>
5.2.17 Total Memory Slots (populated / available)
This field is divided into 2 parts separated by a slash. The first part
describes the number of memory slots populated with a memory module as used
for this SERT test. The second part shows the total number of available
memory slots in the SUT.
<SystemUnderTest><Node><Hardware><Memory><Dimms><Quantity>
<SystemUnderTest><Node><Hardware><Memory><AvailableSlots>
5.2.18 Memory DIMMs
Detailed description of the system main memory technology, sufficient for
identifying the memory used in this test.
<SystemUnderTest><Node><Hardware><Memory><Dimms><Quantity>
<SystemUnderTest><Node><Hardware><Memory><Dimms><DimmSizeMB>
<SystemUnderTest><Node><Hardware><Memory><Dimms><Description>
There may be multiple instances of this field if different types of DIMMs have been used for this test, one separate field for each DIMM type.
Since the introduction of DDR4 memory there are two slightly different formats. The recommended formats are described here.
DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m; slots k, ... l populated
References:
- JEDEC Standard No. 21C http://www.jedec.org/standards-documents/docs/module4_20_25
- DDR4 SDRAM http://www.jedec.org/standards-documents/docs/jesd79-4B
For example:
8 x 16 GB 2Rx4 PC4-2133P-R; slots 1 - 8 populated
Where:
- N = number of DIMMs used; x denotes the multiplication specifier
- gg ss = size of each DIMM, including unit specifier
  256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, etc.
- pheR = p = number of ranks; he = encoding for certain packaging, often blank
  1R = 1 rank of DDR SDRAM installed
  2R = 2 ranks
  4R = 4 ranks
- xff = Device organization (bit width) of DDR SDRAMs used on this assembly
  x4 = x4 organization (4 DQ lines per SDRAM)
  x8 = x8 organization
  x16 = x16 organization
- PCy = Memory module technology standard
  PC4 = DDR4 SDRAM
- v = Module component supply voltage values
  e.g. <blank> for 1.2V, L for Low Voltage (currently not defined)
- wwww = Module speed in Mb/s/data pin
  e.g. 1866, 2133, 2400
- aa = speed grade, e.g.
  J = 10-10-10
  K = 11-11-11
  L = 12-12-12
  M = 13-13-13
  N = 14-14-14
  P = 15-15-15
  R = 16-16-16
  U = 18-18-18
- m = Module Type
  E = Unbuffered DIMM ("UDIMM"), with ECC (x72 bit module data bus)
  L = Load Reduced DIMM ("LRDIMM")
  R = Registered DIMM ("RDIMM")
  S = Small Outline DIMM ("SO-DIMM")
  U = Unbuffered DIMM ("UDIMM"), no ECC (x64 bit module data bus)
  T = Unbuffered 72-bit small outline DIMM ("72b-SO-DIMM")
- slots k, ... l = Numbers denoting the motherboard memory slots populated with the memory modules described before
Note: The main string "gg ss pheRxff PC4v-wwwwaa-m" can be read directly from the label on the memory module itself for all vendors who use JEDEC standard labels.
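For readers who process these description strings programmatically, the following sketch decodes the main fields of the DDR4 example above. The regular expression and field names are illustrative assumptions only and are not part of any SERT tool.

    # Illustrative decoding of a DDR4 DIMM description string such as
    # "8 x 16 GB 2Rx4 PC4-2133P-R"; pattern and names are assumptions only.
    import re

    label = "8 x 16 GB 2Rx4 PC4-2133P-R"
    pattern = (r"(?P<count>\d+)\s*x\s*(?P<size>\d+\s*(?:MB|GB))\s+"
               r"(?P<ranks>\d+)R\s*x(?P<width>\d+)\s+"
               r"PC4(?P<voltage>[A-Z]?)-(?P<speed>\d+)(?P<grade>[A-Z])-(?P<module_type>[A-Z])")
    match = re.match(pattern, label)
    if match:
        print(match.groupdict())
        # {'count': '8', 'size': '16 GB', 'ranks': '2', 'width': '4',
        #  'voltage': '', 'speed': '2133', 'grade': 'P', 'module_type': 'R'}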
DDR3 Format:
N x gg ss eRxff PCyv-wwwwwm-aa, ECC CLa; slots k, ... l populated
Reference:
- "DDR3 DIMM Label", PRN09-NM4, October 2009 http://www.jedec.org/standards-documents/docs/pr-n09-nm1
For example:
8 x 8 GB 2Rx4 PC3L-12800R-11, ECC CL10; slots 1 - 8 populated
Where:
- N = number of DIMMs used; x denotes the multiplication specifier
- gg ss = size of each DIMM, including unit specifier
  256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, etc.
- eR = Number of ranks of memory installed
  1R = 1 rank of DDR SDRAM installed
  2R = 2 ranks
  4R = 4 ranks
- xff = Device organization (bit width) of DDR SDRAMs used on this assembly
  x4 = x4 organization (4 DQ lines per SDRAM)
  x8 = x8 organization
  x16 = x16 organization
- PCy = Memory module technology standard
  PC2 = DDR2 SDRAM
  PC3 = DDR3 SDRAM
- v = Module component supply voltage values
  e.g. <blank> for 1.5V, L for 1.35V
- wwwww = Module bandwidth in MB/s
  For example:
  8500 = 8.53 GB/s (corresponds to 1066 MHz)
  10600 = 10.66 GB/s (corresponds to 1333 MHz)
  12800 = 12.80 GB/s (corresponds to 1600 MHz)
  14900 = 14.90 GB/s (corresponds to 1866 MHz)
- m = Module Type
  E = Unbuffered DIMM ("UDIMM"), with ECC (x72 bit module data bus)
  F = Fully Buffered DIMM ("FB-DIMM")
  M = Micro-DIMM
  N = Mini-Registered DIMM ("Mini-RDIMM"), no address/command parity function
  P = Registered DIMM ("RDIMM"), with address/command parity function
  R = Registered DIMM, no address/command parity function
  S = Small Outline DIMM ("SO-DIMM")
  U = Unbuffered DIMM ("UDIMM"), no ECC (x64 bit module data bus)
- aa = DDR SDRAM CAS Latency in clocks at maximum operating frequency
- ECC = Additional specification for modules which have ECC (Error Correction Code) capabilities
- CLa = CAS latency if the tester has changed the latency to something other than the default
- slots k, ... l = Numbers denoting the motherboard memory slots populated with the memory modules described before
5.2.19 Memory Operating Mode
Description of the memory operating mode. Examples of possible values are:
Standard, Mirror, Spare, Independent
<SystemUnderTest><Node><Hardware><Memory><OperatingMode>
5.2.20 Power Supply Quantity (active / populated / bays)
This field is divided into 3 parts separated by slashes. The first part
shows the number of active power supplies, which might be lower than the
next number, if some power supplies are in standby mode and used in case of
failure only. The second part gives the number of bays populated with a
power supply. The third part describes the number of power supply bays
available in this node.
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Active>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Populated>
<SystemUnderTest><Node><Hardware><PowerSupplies><Bays>
All three parts can show "None" if the node is powered by a shared power supply.
5.2.21 Power Supply Details
The number and watts rating of this power supply unit (PSU) plus the supplier name and the order number to identify it.
In case of a "Parts Built" system (see
System Source) the manufacturer and the part
number of the PSU must be specified here.
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Active>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><RatingInWatts>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Description>
There may be multiple lines in this field if different types of PSUs have been used for this test, one for each PSU type.
"N/A" if this node does not include a power supply.
5.2.22 Power Supply Operating Mode
Operating mode active for running this test. Must be one of the available
modes as described in the field Available Power
Supply Modes.
<SystemUnderTest><Node><Hardware><PowerSupplies><OperatingMode>
5.2.23 Available Power Supply Modes
The available power supply unit (PSU) modes depend on the capabilities of
the tested server hardware and firmware.
<SystemUnderTest><Node><Hardware><PowerSupplies><AvailableModes>
Typical power supply modes are:
- Standard
- All populated PSUs are active
- PSU Redundancy
- N + M Spare PSU
For example: 2 + 1 Spare PSU
Two PSUs are active, the third PSU is inactive in Standby mode. System operation is guaranteed even if one of the three PSUs fails.
- AC Redundancy
- N + N (2 AC sources)
For example: 2 + 2 (2 AC sources)
2 PSUs are active, the other two PSUs are inactive in Standby mode. 2 of the 4 PSUs are each connected to a separate AC source. This ensures that the system can continue operation even if a power line or a single PSU fails.
5.2.24 Disk Drive Bays (populated / available)
This field is divided into 2 parts separated by a slash. The first part
gives the number of disk drive bays actually populated with a disk drive for
this SERT test. The second part shows the number of available drive bays in
the SUT, some of which may have been empty in the tested configuration.
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Quantity>
<SystemUnderTest><Node><Hardware><DiskDrives><Bays>
Disk drives may be of different types in heterogeneous multi-disk configurations. In this case separate Disk Drive fields need to be specified for each type, describing its capabilities.
5.2.25 Disk Drive
This field contains four rows. In case of heterogeneous multi-disk configurations there may be several instances of this field.
- Row 1 shows a description of the disk drive(s) (count, supplier name and order number) installed on the SUT. In case of a "Parts Built" system (see System Source) the manufacturer name and the part number of the disk drive must be specified here.
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Quantity>
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Vendor>
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Model>
- Row 2 displays the main technical parameters of the disk drive(s) (size, connectivity and rotational speed) installed on the SUT.
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><CapacityGB>
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Connectivity>
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Type>
- Row 3 describes the manufacturer and model number of the controller used to drive the disk(s).
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Controller>
- Row 4 includes a confirmation that all controller and disk caches have been switched off (write-through mode) for running the SERT storage worklets.
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Settings>
This is a prerequisite for achieving comparable test results.
5.2.26 Network Interface Cards
This field contains three rows. In case of heterogeneous configurations with different Network Interface Cards (NICs) there may be several instances of this field.
- Row 1 shows a description of the network controller(s) (number, supplier name, order number and port count) installed on the SUT. In case of a "Parts Built" system (see System Source) the manufacturer name and the part number of the network controller must be specified here.
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><Quantity>
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><Description>
- Row 2 is divided into 3 comma-separated parts, for example: "1 connected, 4 enabled in OS, 4 enabled in firmware". The first part gives the number of physically linked ports. The remaining 2 parts give the number of ports enabled in the operating system and in the firmware respectively.
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><Enabled><Connected>
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><Enabled><OS>
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><Enabled><Firmware>
- Row 3 describes the configured transfer rate in Mbit/s as used for the SERT test run.
<SystemUnderTest><Node><Hardware><NetworkInterfaces><NicGroup><SpeedMbps>
5.2.27 Management Controller or Service Processor
Specifies whether any management controller was configured in the SUT.
<SystemUnderTest><Node><Hardware><ManagementController><Quantity>
5.2.28 Expansion Slots (populated / available)
This field is divided into 2 parts separated by a slash. There may be multiple lines in this field if different types of expansion slots are available, one for each slot type.
The first part gives the number of expansion slots (PCI slots) actually
populated with a card for this SERT test. The second part shows the number
of available expansion slots in the SUT; some of them may have been empty in
the tested configuration.
<SystemUnderTest><Node><Hardware><ExpansionSlots><ExpansionSlot><Populated>
<SystemUnderTest><Node><Hardware><ExpansionSlots><ExpansionSlot><Quantity>
5.2.29 Optical Drives
Specifies whether any optical drives were configured in the SUT.
<SystemUnderTest><Node><Hardware><OpticalDrives>
5.2.30 Keyboard
The type of keyboard (USB, PS2, KVM or None) used.
<SystemUnderTest><Node><Hardware><Keyboard>
5.2.31 Mouse
The type of mouse (USB, PS2, KVM or None) used.
<SystemUnderTest><Node><Hardware><Mouse>
5.2.32 Monitor
Specifies if a monitor was used for the test and how it was connected
(directly or via KVM).
<SystemUnderTest><Node><Hardware><Monitor>
5.2.33 Additional Hardware
Number and description of any additional equipment added to improve
performance and required to achieve the reported scores.
<SystemUnderTest><Node><Hardware><Other><OtherHardware><Quantity>
<SystemUnderTest><Node><Hardware><Other><OtherHardware><Description>
5.3 Software per Node
This section describes in detail the various software components installed on the system under test, which are critical to achieve the reported result, and their configuration parameters.
5.3.1 Power Management
This field shows whether power management features of the SUT were enabled
or disabled.
<SystemUnderTest><Node><Software><OperatingSystem><PowerManagement>
5.3.2 Operating System (OS)
Operating system vendor and name.
<SystemUnderTest><Node><Software><OperatingSystem><Vendor>
<SystemUnderTest><Node><Software><OperatingSystem><Name>
Examples:
- IBM AIX
- Microsoft Corporation Windows Server 2008 R2 Enterprise SP1
- Oracle Solaris
- Red Hat Enterprise Linux 6.4
- SUSE Linux Enterprise Server 11 SP3
5.3.3 OS Version
The operating system version. For Unix based operating systems the detailed
kernel number must be given here. If there are patches applied that affect
performance and / or power, they must be disclosed in the
System Under Test Notes.
<SystemUnderTest><Node><Software><OperatingSystem><Version>
Examples:
- Windows: Version 6.1.7601 Service Pack 1 Build 7601
- Linux: 2.6.32-358.el6.x86_64
- Solaris: 11.3
- AIX: 7100-00-10-1334
5.3.4 File System
The type of the filesystem containing the operating system files and
directories and test files for the storage worklets.
<SystemUnderTest><Node><Software><OperatingSystem><FileSystem>
5.3.5 Additional Software
Any performance- and/or power-relevant software used and required to
reproduce the reported scores, including third-party libraries,
accelerators, etc.
<SystemUnderTest><Node><Software><Other><OtherSoftware>
5.3.6 Boot Firmware Version
A version number or string identifying the boot firmware installed on the
SUT.
<SystemUnderTest><Node><Firmware><Boot><Version>
5.3.7 Management Firmware Version
A version number or string identifying the management firmware running on
the SUT or "None" if no management controller was installed.
<SystemUnderTest><Node><Firmware><Management><Version>
5.3.8 JVM Vendor
The company that makes the JVM software.
<SystemUnderTest><Node><JVM><Vendor>
5.3.9 JVM Version
Name and version of the JVM software product, as displayed by the
"java -version" or "java -fullversion"
commands.
<SystemUnderTest><Node><JVM><Version>
Examples:
- Oracle Hotspot: Java SE Runtime Environment (build 1.7.0_71-b14)
- IBM J9: JRE 1.7.0 IBM Linux build pxa6470sr7fp1-20140708_01(SR7 FP1)
5.3.10 Client Configuration ID (formerly SERT Client Configuration)
Beginning with SERT V1.0.1 this field shows the label of the client configuration element from the "client-configurations-NNN.xml" file specifying the predefined set of JVM options and number of clients to be used for running the tests.
A default version of this file is included in the SERT kit. The user has to
ensure that this local copy is up to date compared to the master copy of
this file available at:
https://www.spec.org/sert2/client-configurations-2.0.xml
If the local copy is outdated the latest version has to be downloaded from the SPEC web site location given above.
The correct option set for the given configuration is largely determined automatically by SERTUI based on the configuration parameters detected by the hardware discovery scripts. It's strongly recommended to manually check the detected parameters and the automatically selected option set for correctness.
If the SERT is started from command line the correct JVM option set for the
given configuration must be specified manually in the
"config-all.xml" file.
<suite><definitions><option-set><parameter>
The content displayed here is taken from the "results.xml"
file generated during the test run.
<TestEnvironment><configuration><suite><client-configuration id>
6. System Under Test Notes
A free-text description of the tuning applied to the SUT to achieve these results. Additional hardware information not covered in the other fields above can also be given here.
<SystemUnderTest><Node><Notes>
The following list shows examples of information that must be reported in this section:
- System tuning parameters other than default
- Processor tuning parameters other than default
- Process tuning parameters other than default
- Changes to the background load, if any, e.g. disabling OS services
- Critical customer-identifiable firmware or option versions such as network and disk controllers.
- Definitions of tuning parameters must be included.
- Part numbers or sufficient information that would allow the end user to order the SUT configuration.
- Identification of any components used that are supported but that are no longer orderable by ordinary customers.
- OS patches that affect performance and / or power.
Note: Disabling of OS services is disallowed (see section 2.4.5 "Software" of the SERT Run and Reporting Rules), and will invalidate the result.
7. Details Report File
The details report file "results-details.html/.txt" is created together with the standard report file at the end of each successful SERT run. In addition to the information in the standard report file described above it includes more detailed performance and power result values for each individual worklet.
8. Worklet Summary
This section describes the main results for all worklets in a table and as a graph. It is included in the details report file "results-details.html/.txt" only.
8.1 Result Chart
The result chart graphically displays the power, performance and efficiency scores for the different worklets. Each worklet is presented on a separate row beginning with the worklet name on the left. The lines are printed with distinct background colors for the different workloads:
- light blue = CPU workload (including the SSJ worklet previously included in the Hybrid workload, which was removed for SERT 2.0.0)
- light green = Storage workload
- light red = Memory workload
The result chart is divided into three sections.
- Watts
- The leftmost section displays the range of measured power consumption for all load levels. The Watts values are given on vertical scales at the top of each workload section. The power range is represented by a blue line with a dot at the left for minimum power corresponding to the lowest load level and a dot at the right for maximum power corresponding to the highest load level. The intermediate load levels, if defined, are represented by small vertical bars. The vertical blue line across all worklets within each workload, named "Idle Power" represents the power consumption of the idle worklet. It does not correspond to any load.
- Normalized Performance
- The middle section displays the range of normalized performance values for all load levels. The normalized performance values are given on vertical scales at the top of each workload section. These values are calculated separately for each load level by taking the interval performance score and dividing it by the corresponding reference score for that worklet. The reference score for each worklet was determined taking the average performance score over several SERT test runs on a well defined reference configuration under different operating systems. The performance range is represented by a red line with a dot at the left for minimum performance corresponding to the lowest load level and a dot at the right for maximum performance corresponding to the highest load level. The intermediate load levels, if defined, are represented by small vertical bars.
- Efficiency Score
- The rightmost section displays the range of efficiency scores for all load levels. The efficiency scores are given on vertical scales at the top of each workload section. The efficiency scores are calculated separately for each load level by dividing the normalized-performance by the average-watts for that interval. The total efficiency score of a worklet is defined as the geometric mean of the efficiency scores of all its intervals (see also Worklet Efficiency Score). The efficiency score range is represented by a green line with a triangle at the left for minimum efficiency typically corresponding to the lowest load level and a triangle at the right for maximum efficiency typically corresponding to the highest load level. The intermediate load levels, if defined, are represented by small vertical bars.
8.2 Result Table
The result table numerically displays the power, performance and efficiency scores for the different worklets. Each worklet is presented on a separate row.
8.2.1 Workload
This column of the result table shows the names of the workloads. A workload may include one or more worklets.
8.2.2 Worklet
This column of the result table shows the names of the worklets to which the values in the following columns belong.
8.2.3 Normalized Peak Performance
In order to get performance values in the same order of magnitude from all worklets the individual performance scores of each worklet (see Score) are divided by a fixed reference score. The reference score for each worklet was determined taking the average performance score over several SERT test runs on a well defined reference configuration under different operating systems.
The fields in this column show the normalized performance for the Peak Interval of each worklet only. The values are calculated by taking the Peak Interval performance score and dividing it by the corresponding reference score for that worklet.
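The following minimal sketch illustrates this normalization with hypothetical numbers (the actual reference scores are fixed constants defined for each worklet by SERT, not the values shown here):

```python
# Hypothetical illustration of peak-performance normalization.
# peak_score and reference_score are made-up example values; the real
# reference scores are fixed constants shipped with each SERT release.
peak_score = 2500.0       # measured performance score of the Peak Interval
reference_score = 1250.0  # fixed reference score for this worklet

normalized_peak_performance = peak_score / reference_score
print(normalized_peak_performance)  # 2.0
```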
8.2.4 Watts at Lowest Load Level
This column of the result table shows the worklet power readings at the lowest load level. Please note that this does not correspond to "Idle", which is implemented as a separate workload in SERT and not included in each worklet.
8.2.5 Watts at Highest Load Level
This column of the result table shows the worklet power readings at the highest load level, or "100%" load.
8.2.6 Geometric Mean of Normalized Performance
This column was removed in SERT V2.0.1. Detailed performance information for all worklet measurement intervals is now listed in the worklet-specific Efficiency Scores tables (see Normalized Performance Score).
The worklet performance score is calculated as the geometric mean of the normalized performance scores over all measurement intervals of a worklet.
In order to get performance values in the same order of magnitude from all worklets, the individual performance scores for each measurement interval of all worklets (see Score) are divided by a fixed reference score. The reference score for each worklet was determined by taking the average performance score over several SERT test runs on a well-defined reference configuration under different operating systems.
A detailed description of the normalization process is given in chapter 5.1 "The SERT Efficiency Metric" of the SERT Design Document.
8.2.7 Geometric Mean of Power (Watts)
This column was removed in SERT V2.0.1. Detailed power information for all worklet measurement intervals is now listed in the worklet-specific Efficiency Scores tables (see Average Active Power (W)).
This field showed the geometric mean of the average active power (watts) over all measurement intervals (see Average Active Power (W) in the "Worklet Performance and Power Details" section).
8.2.8 Worklet Efficiency Score
The following formula is used to calculate the worklet efficiency scores for the current SERT version.
Worklet Efficiency Score = 1000 * Geometric Mean (EfficiencyInterval 1...n)
Efficiency for the Idle worklet is marked as not applicable (n/a) because the performance part is missing by definition.
Please note that Idle power is NOT included in the per worklet efficiency score calculation.
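For illustration only, the following sketch applies the formula above to a set of hypothetical interval efficiencies (each interval efficiency being the normalized performance divided by the average active power of that interval; all numbers are made up):

```python
# Hypothetical sketch of the worklet efficiency score formula above.
# Each interval efficiency is normalized performance / average active power (W);
# the values below are made up for illustration.
from statistics import geometric_mean

interval_efficiencies = [0.0105, 0.0118, 0.0123, 0.0127]

worklet_efficiency_score = 1000 * geometric_mean(interval_efficiencies)
print(round(worklet_efficiency_score, 2))  # roughly 11.8 for these example values
```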
9. Measurement Devices
This report section is available in the Details Report File "results-details.html/.txt" only. It shows the details of the different measurement devices used for this test run.
There may be more than one measurement device used to measure power and temperature. Each of them will be described in a separate table.
9.1 Power Analyzer "Name"
The following table includes information about the power analyzer identified by "Name" and used to measure the electrical data.
9.1.1 Hardware Vendor
Company which manufactures and/or sells the power analyzer.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><HardwareVendor>
9.1.2 Model
The model name of the power analyzer type used for this test run.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><Model>
9.1.3 Serial Number
The serial number uniquely identifying the power analyzer used for this test
run.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><SerialNumber>
9.1.4 Connectivity
Which interface was used to connect the power analyzer to the PTDaemon host
system and to read the power data, e.g. RS-232 (serial port), USB, GPIB,
etc.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><Connectivity>
9.1.5 Input Connection
Input connection used to connect the load, if several options are available,
or "Default" if not.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><InputConnection>
9.1.6 Metrology Institute
Name of the national metrology institute, which specifies the calibration
standards for power analyzers, appropriate for the
Test Location reported in the result files.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationInstitute>
Calibration should be done according to the standard of the country where the test was performed or where the power analyzer was manufactured.
Examples:
| Country | Metrology Institute |
|---|---|
| USA | NIST (National Institute of Standards and Technology) |
| Germany | PTB (Physikalisch-Technische Bundesanstalt) |
| Japan | AIST (National Institute of Advanced Industrial Science and Technology) |
| Taiwan (ROC) | NML (National Measurement Laboratory) |
| China | CNAS (China National Accreditation Service for Conformity Assessment) |
A list of national metrology institutes for many countries is maintained by NIST at http://gsi.nist.gov/global/index.cfm.
9.1.7 Calibration Laboratory
Name of the organization that performed the power analyzer calibration
according to the standards defined by the national metrology institute.
This could be the analyzer manufacturer, a third party company, or an
organization within your own company.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><AccreditedBy>
9.1.8 Calibration Label
A number or character string which uniquely identifies this meter
calibration event. May appear on the calibration certificate or on a sticker
applied to the power analyzer. The format of this number is specified by the
organization performing the calibration.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationLabel>
9.1.9 Date of Calibration
The date (yyyy-mm-dd) the calibration certificate was issued, from the
calibration label or the calibration certificate.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><DateOfCalibration>
9.1.10 PTDaemon Version
The version of the power daemon program reading the analyzer data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
9.1.11 Setup Description
Free format textual description of the device or devices measured by this
power analyzer and the accompanying PTDaemon instance, e.g. "SUT Power
Supplies 1 and 2".
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><SetupDescription>
9.2 Temperature Sensor
The following table includes information about the temperature sensor used to measure the ambient temperature of the test environment.
9.2.1 Hardware Vendor
Company which manufactures and/or sells the temperature sensor.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><HardwareVendor>
9.2.2 Model
The manufacturer and model name of the temperature sensor used for this
test run.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><Model>
9.2.3 Driver Version
The version number of the operating system driver used to control and read
the temperature sensor.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><DriverVersion>
9.2.4 Connectivity
Which interface was used to read the temperature data from the sensor, e.g.
RS-232 (serial port), USB, etc.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><Connectivity>
9.2.5 PTDaemon Version
The version of the power daemon program reading the temperature sensor data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
9.2.6 Sensor Placement
Free format textual description of the device or devices measured and the
approximate location of this temperature sensor, e.g. "50 mm in front
of SUT main airflow intake".
<SystemUnderTest><MeasurementDevices><TemperatureSensor><SetupDescription>
10. Aggregate Electrical and Environmental Data
The following section displays more details of the electrical and environmental data collected during the different target loads, including data not used to calculate the test result. For further explanation of the measured values, see the "SPECpower Methodology" document (SPECpower-Power_and_Performance_Methodology.pdf).
10.1 Line Standard
Description of the line standards for the main AC power as provided by the
local utility company and used to power the SUT. The standard voltage and
frequency are printed in this field followed by the number of phases and
wires used to connect the SUT to the AC power line.
<SystemUnderTest><LineStandard><Voltage>
<SystemUnderTest><LineStandard><Frequency>
<SystemUnderTest><LineStandard><Phase>
<SystemUnderTest><LineStandard><Wires>
10.2 Elevation (m)
Elevation of the location where the test was run. This information is
provided by the tester.
<SystemUnderTest><TestInformation><ElevationMeters>
10.3 Minimum Temperature (°C)
Minimum temperature measured by the temperature sensor across all target load levels.
11. Worklet Performance and Power Details
This report section is available in the Details Report File "results-details.html/.txt" only. It is divided into separate segments for all worklets, each starting with a title bar showing the workload and worklet names (<workload name>: <worklet name>). Each segment includes Performance and Power Data tables together with some details about the client JVMs for the corresponding worklet.
11.1 Total Clients
Total number of client JVMs started on the System Under Test for this worklet.
Beginning with SERT V1.0.2 this number is calculated by the SERT code based on the information specified in the "client-configurations-NNN.xml" file. See Client Configuration ID for more details.
11.2 CPU Threads per Client
The number of hardware threads each instance of the client JVM is affinitized to.
11.3 Sample Client Command-line
The complete command line for one of the client JVMs used to run this worklet, including affinity specification, the Java classpath, the JVM tuning flags and additional SERT parameters. The affinity mask, the "Client N of M" string and the "-jvmid N" parameter printed here are valid for one specific instance of the client JVM only. The other client JVMs use their associated affinity masks, strings and parameters but share the rest of the command line.
Beginning with SERT V1.0.2 the JVM tuning flags are included automatically by the SERT code based on the information specified in the "client-configurations-NNN.xml" file. See Client Configuration ID for more details.
11.4 Efficiency scores
This table was introduced in SERT V2.0.1.
The table presents detailed performance scores and power data for each measurement interval of this worklet together with the corresponding interval efficiency scores. The last row of the table displays the aggregate performance and power values plus the worklet efficiency score calculated from them.
11.4.1 Load Level
The target load level percentage for each measurement interval of this worklet
is printed in a separate row.
Note: Only the measurement intervals are listed here.
The name of the current worklet is repeated in the bottom field of this column.
11.4.2 Raw Performance Score
In this column the raw (measured) performance score for each measurement interval (target load level) is printed.
These fields correspond to the measurement fields of the Score column
in the Performance Data table below.
11.4.3 Normalized Performance Score
The normalized performance score values for each measurement load level are calculated by taking the interval Raw Performance Score and dividing it by the corresponding reference score for that worklet interval. The reference score for each worklet interval was determined by taking the average performance score over several SERT test runs on a well-defined reference configuration under different operating systems.
The bottom field shows the geometric mean calculated from the normalized performance scores of all measurement load levels above.
11.4.4 Average Active Power (W)
Average active power in Watts for each measurement interval as measured by the power analyzer(s) connected to the system under test and reported by the corresponding PTDaemon instance(s).
These fields typically correspond to the measurement fields of the Average Active Power (W) column in the Power Data table below.
The bottom field shows the geometric mean calculated from the average active power values of all measurement load levels above.
11.4.5 Load Level Efficiency Score
Each field of this column displays the interval efficiency score for the given target load level, calculated as the Normalized Performance Score divided by the corresponding Average Active Power (W) value, multiplied by 1000.
The bottom field shows the Worklet efficiency score calculated as the geometric mean of the efficiency scores of all its intervals (see also Worklet Efficiency Score).
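The following sketch shows how the rows and the bottom (summary) row of this table fit together. All numbers are hypothetical and serve only to illustrate the calculations described above:

```python
# Hypothetical reconstruction of an Efficiency Scores table (all values made up).
from statistics import geometric_mean

# Per measurement interval: (raw performance score, reference score, average active power in W)
intervals = [
    (1000.0, 500.0, 180.0),   # 100% load level
    ( 760.0, 500.0, 150.0),   #  75% load level
    ( 510.0, 500.0, 120.0),   #  50% load level
    ( 255.0, 500.0, 100.0),   #  25% load level
]

normalized = [raw / ref for raw, ref, _ in intervals]           # Normalized Performance Score
watts      = [w for _, _, w in intervals]                       # Average Active Power (W)
efficiency = [1000 * n / w for n, w in zip(normalized, watts)]  # Load Level Efficiency Score

# Bottom row of the table: geometric means over all measurement intervals
print(geometric_mean(normalized))   # geometric mean of normalized performance
print(geometric_mean(watts))        # geometric mean of average active power
print(geometric_mean(efficiency))   # worklet efficiency score
```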
11.5 Performance Data
This table displays detailed performance information for a worklet. The information is presented on separate rows per Phase, Interval and Transaction where applicable.
11.5.1 Phase
This column of the performance data table shows the phase names for the performance values presented in the following columns of the rows belonging to this phase.
Examples of phases are "Warmup", "Calibration" and "Measurement".
11.5.2 Interval
This column of the performance data table shows the interval names for the performance values presented in the following columns of the rows belonging to this interval.
Examples of intervals are "max" and "75%".
11.5.3 Actual Load
The "Actual Load" is calculated by dividing the interval "Score" by the "Calibration Result". This value is shown for the measurement intervals only. It can be compared against the target load level as defined by the "Interval" name.
11.5.4 Score
The fields in this column show the worklet-specific score for each interval, which is calculated by dividing the sum of all "Transaction Count" values for this interval by the "Elapsed Measurement Time (s)". For the calibration intervals a field showing the "Calibration Result" score is added in this column.
In contrast to the worklet performance score given in the Result Table above, this score is not normalized.
Note that elapsed interval time is the wall clock time specified in the SERT configuration file "config-all.xml" -- not the Transaction Time from the report.
The memory worklets (Flood3 and Capacity3) have their own score calculations that include other factors, e.g. memory capacity. A detailed description is given in the SERT Design Document https://www.spec.org/sert/SERT-designdocument.pdf.
With SERT release 2.0.0 the algorithm to calculate memory worklet scores has changed again. Therefore memory worklet scores from previous SERT releases are no longer comparable to those from SERT 2.0.0 and newer releases.
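As a simplified sketch (hypothetical numbers; memory worklets use their own, more elaborate formulas as noted above), the interval Score and the corresponding Actual Load can be derived as follows:

```python
# Simplified, hypothetical sketch of the interval Score and Actual Load calculations.
# Memory worklets (Flood3, Capacity3) use different formulas, as noted above.
transaction_counts = [120_000, 80_000, 40_000]  # Transaction Count per transaction type
elapsed_measurement_time_s = 120.0              # Elapsed Measurement Time (s)
calibration_result = 2500.0                     # score of the calibration interval

score = sum(transaction_counts) / elapsed_measurement_time_s  # 2000.0 transactions/s
actual_load = score / calibration_result                      # 0.8, i.e. 80% of calibrated throughput

print(score, actual_load)
```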
11.5.5 Host CV
This field was introduced in SERT V1.1.0.
The host CV (coefficient of variation) is calculated from all individual node performance "Scores" (Score) for the whole set of nodes (population). Individual CVs are shown in this column for each interval. Real values are calculated and displayed for multi-node runs only. For single-node runs "0.0%" is printed in all fields of this column.
The coefficient of variation (CV) is defined as the ratio of the standard deviation σ to the mean μ:
cv = σ ⁄ μ
It represents the extent of variability in relation to the mean of the population. The calculated values are presented as percentages in the SERT report files.
The Host CV is used by SERT to identify significant deviation of some nodes' results from the mean. Similar results per node are expected for a homogeneous set of nodes, i.e. the CV should be fairly small. High CV values indicate greatly different results for one or more nodes. CV values above certain thresholds will cause warnings or error messages to be printed in the SERT result files.
11.5.6 Client CV
This field was introduced in SERT V1.1.0.
The Client CV (coefficient of variation) is calculated from all individual client performance "Scores" (Score) for the whole set of clients (JVM instances), including all nodes for multi-node runs (population). Individual CVs are shown in this column for each interval.
The coefficient of variation (CV) is defined as the ratio of the standard deviation σ to the mean μ:
cv = σ ⁄ μ
It represents the extent of variability in relation to the mean of the population. The calculated values are presented as percentages in the SERT report files.
The Client CV is used by SERT to identify significant deviation of some clients' results from the mean. Typically similar results per client are expected, i.e. the CV should be fairly small. High CV values indicate greatly different results for one or more clients. For unbalanced configurations, e.g. different types of storage devices (HDD and SSD) or dissimilar amounts of memory attached to different processors, high CV values are "normal". CV values above certain thresholds will cause warnings or error messages to be printed in the SERT result files. Such warnings can be ignored for unbalanced configurations, as high CV values are expected in this case.
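The following sketch shows the CV calculation used for both Host CV and Client CV, with hypothetical per-node (or per-client) interval scores:

```python
# Hypothetical sketch of the coefficient of variation (CV) used for Host CV and Client CV.
# The scores below are made-up per-node (or per-client) interval scores.
from statistics import mean, pstdev

scores = [101.5, 99.8, 100.4, 98.9]

cv_percent = 100 * pstdev(scores) / mean(scores)  # population standard deviation over the mean
print(f"{cv_percent:.2f}%")
```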
11.5.7 Elapsed Measurement Time (s)
The time spent during this interval executing transactions. This is the time used for calculating the "Score" for this interval.
This time typically does not exactly match the interval time specified in the SERT configuration file "config-all.xml".
11.5.8 Transaction
The name of the transaction(s) related to the following "Transaction Count" and "Transaction Time" values.
Some worklets execute only one type of transaction whereas others, e.g. CPU: SSJ, run several different transactions. There will be a separate line for each transaction.
11.5.9 Transaction Count
The number of successfully completed transactions defined in column "Transaction" during the interval given in column "Interval".
For worklets including multiple transaction types a "sum" field is added, showing the aggregated transaction count for all transactions of this worklet.
11.5.10 Transaction Time (s)
The total elapsed (wall clock) time spent executing this transaction during this interval. It only includes the actual execution time, and not input generation time. Since multiple transactions execute concurrently in different threads, this time may be longer than the length of the interval.
For worklets including multiple transaction types a "sum" field is added, showing the aggregated transaction time for all transactions of this worklet.
11.6 Power Data
This table displays detailed power information for a worklet. The information is presented on separate rows per Phase and Interval. There will be separate power data tables for each power analyzer.
11.6.1 Phase
This column of the power data table gives the phase names for the power values presented in the following fields of the rows belonging to this phase.
The "Sum" field identifies the summary row for all measurement intervals (see Average Active Power (W)).
11.6.2 Interval
This column of the power data table gives the interval names for the power values presented in the following fields of the rows belonging to this interval.
The "Total" field identifies the summary row for all power analyzers (see Average Active Power (W)).
11.6.3 Analyzer
Name identifying the power analyzer whose power readings are displayed in this table. More details regarding this power analyzer are given in the Power Analyzer table(s) in the "Measurement Devices" section above.
11.6.4 Average Voltage (V)
Average voltage in Volts for each interval as reported by the PTDaemon instance connected to this power analyzer.
11.6.5 Average Current (A)
Average current in Amps for each interval as reported by the PTDaemon instance connected to this power analyzer.
11.6.6 Current Range Setting
The current range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer.
Please note that automatic current range setting by the analyzer is not allowed for all currently accepted analyzers and will invalidate the result.
11.6.7 Average Power Factor
Average power factor for each interval as reported by the PTDaemon instance connected to this power analyzer.
11.6.8 Average Active Power (W)
Average active power in Watts for each interval as reported by the PTDaemon instance connected to this power analyzer.
In this column a "Sum""Total" field is added, showing the aggregated active power for all measurement intervals over all power analyzers.
11.6.9 Power Measurement Uncertainty (%)
The average uncertainty of the reported power readings for each test phase as calculated by PTDaemon based on the range settings. The value must be within the 1% limit defined in section "1.20.1 Power Analyzer Requirements" of the SERT Run and Reporting Rules document.
For some analyzers range reading may not be supported. The uncertainty calculation may still be possible based on manual or command line range settings. More details are given in the measurement setup guide (see SPECpower_Measurement_Setup_Guide.pdf).
11.6.10 Minimum Temperature (°C)
The minimum ambient temperature for each interval as measured by the temperature sensor. All values are measured in ten second intervals, evaluated by the PTDaemon and reported to the test harness at the end of each interval.
Copyright © 2006 - 2017 Standard Performance Evaluation Corporation
All Rights Reserved