Version 2.9, Last modified: September 30, 2003
2.3.1 Definitions of Terms
2.3.3 Error Reporting
3.2.1 SUT Hardware
3.2.2 SUT Software
3.2.3 Network Configuration
3.2.6 Test Sponsor
3.3 Log File Review
This document specifies how SPECweb99 is to be run for measuring and publicly reporting performance results. These rules abide by the norms laid down by the SPEC Web Subcommittee and approved by the SPEC Open Systems Steering Committee. This ensures that results generated with this suite are meaningful, comparable to other generated results, and are repeatable (with documentation covering factors pertinent to duplicating the results).
Per the SPEC license agreement, all results publicly disclosed must adhere to these Run and Reporting Rules.
The general philosophy behind the rules of SPECweb99 is to ensure that an independent party can reproduce the reported results.
The following attributes are expected:
Furthermore, SPEC expects that any public use of results from this benchmark suite shall be for System Under Test (SUT) and configurations that are appropriate for public consumption and comparison. Thus, it is also expected that:
When competitive comparisons are made using SPECweb99 benchmark results, SPEC expects that the following template be used:
SPECweb®99 is a trademark of the Standard Performance Evaluation Corp. (SPEC).
Competitive numbers shown reflect results published on www.spec.org as of (date).
[The comparison presented is based on (basis for comparison).] For the latest
SPECweb99 results visit http://www.spec.org/osg/web99.
(Note: [...] above required only if selective comparisons are used.)
SPECweb®99 is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive numbers shown reflect results published on www.spec.org as of Jan 12, 2000. The comparison presented is based on best performing 4-cpu servers currently shipping by Vendor 1, Vendor 2 and Vendor 3. For the latest SPECweb99 results visit http://www.spec.org/osg/web99.
The rationale for the template is to provide fair comparisons, by ensuring
SPEC reserves the right to adapt the benchmark codes, workloads, and rules of SPECweb99 Release 1.0 as deemed necessary to preserve the goal of fair benchmarking. SPEC will notify members and licensees whenever it makes changes to the suite and will rename the metrics (e.g. from SPECweb99 to SPECweb99a).
Relevant standards are cited in these run rules as URL references, and are current as of the date of publication. Changes or updates to these referenced documents or URL's may necessitate repairs to the links and/or amendment of the run rules. The most current run rules will be available at the SPEC web site at http://www.spec.org. SPEC will notify members and licensees whenever it makes changes to the suite.
As the WWW is defined by its interoperable protocol definitions, SPECweb99 requires adherence to the relevant protocol standards. It is expected that the web server is HTTP 1.0 and/or HTTP 1.1 compliant. The benchmark environment shall be governed by the following standards:
Internet standards are evolving standards. Adherence to related RFCs (e.g. RFC 1191, Path MTU Discovery) is also acceptable, provided the implementation retains the characteristic of interoperability with other implementations.
In addition to adherence to the above standards, SPEC requires the SUT to support:
SPEC requires that any HTTP 1.1 server used must also support HTTP 1.0 as outlined in RFC 2616 Section 19.6. SPEC believes that HTTP 1.0 requests will continue to be a significant part of the traffic seen by ISP servers and has incorporated HTTP 1.0 requests as a portion of the SPECweb99 workload (See: Section 5 of the SPECweb99 Design Document).
For further explanation of these protocols, the following might be helpful:
These requirements apply to all hardware and software components used in producing the benchmark result, including the SUT, network, and clients.
Rationale: SPEC intends to follow relevant standards wherever practical, but with respect to this performance sensitive parameter it is difficult due to ambiguity in the standards. RFC 1122 requires that TIME_WAIT be 2 times the maximum segment life (MSL) and RFC 793 suggests a value of 2 minutes for MSL. So TIME_WAIT itself is effectively not limited by the standards. However, current TCP/IP implementations define a de facto lower limit for TIME_WAIT of 60 seconds, the value used in most BSD derived UNIX implementations.
For a run to be valid, the following attributes must hold true:
On those systems that do not dynamically allocate TIME_WAIT table entries, the appropriate system parameter should be configured to at least 1.05 * TIME_WAIT * 3.28 * Requested_Connections to ensure they can maintain all the connections in TIME_WAIT state. (See the benchmark white paper for derivation of this formula.) SPEC expects that the protocol standards relating to TIME_WAIT will be clarified in time, and that future releases of SPECweb99 will require strict conformance with those standards.
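The sizing formula above can be evaluated directly. The sketch below is illustrative only (the function name is not part of the benchmark kit); it uses the 60-second de facto TIME_WAIT value discussed in the rationale.

```python
import math

def timewait_entries(requested_connections, time_wait=60):
    """Minimum TIME_WAIT table size per the run-rules formula:
    1.05 * TIME_WAIT * 3.28 * Requested_Connections."""
    return math.ceil(1.05 * time_wait * 3.28 * requested_connections)

# A 400-connection test with the common 60 s TIME_WAIT value:
print(timewait_entries(400))  # -> 82656
```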
The SPECweb99 metric represents the actual number of simultaneous connections that a server can support. In the benchmark, a number of simultaneous connections are requested. For each simultaneous connection requested during an iteration, a thread or process is started to generate the benchmark workload against the server. These threads or processes are referred to as load generators and are used to make HTTP requests to the SUT according to the predefined workload. The load generators run on one or more client systems.
A simultaneous connection is considered conforming to the required bit rate if its aggregate bit rate is at least 320,000 bits/second (40,000 bytes/second). A simultaneous connection whose aggregate bit rate falls below this minimum is not counted in the metric.
Also, no result is considered valid if its "Actual Mix" percentage differs from the "Target Mix" percentage by more than 10% of the "Target Mix" for any workload class. For example, if the target mix percentage is 0.35, then valid actual mix percentages are 0.35 +/- 0.035.
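The mix-tolerance rule above is a simple relative-error check; a minimal sketch (the function name is illustrative, not part of the benchmark tools):

```python
def mix_conforms(actual, target, tolerance=0.10):
    """True if the actual mix percentage is within +/-10% (relative)
    of the target mix percentage for a workload class."""
    return abs(actual - target) <= tolerance * target

print(mix_conforms(0.36, 0.35))  # within 0.35 +/- 0.035 -> True
print(mix_conforms(0.30, 0.35))  # outside the band -> False
```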
In addition, the URL retrievals (or operations) performed must also meet the following quality criteria:
The particular files referenced shall be determined by the workload generation in the benchmark itself.
The size of the fileset generated on the SUT by the benchmark is established as a function of number of requested connections. This provides a more realistic web server load since more files are being manipulated on the server as the load is increased. This reflects typical web server use in real-world environments.
The formula for the number of directories that are created is:
directory_count = (25 + (((400000.0 / 122000.0) * simultaneous_connections) / 5.0))
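Evaluating the formula is straightforward; the sketch below is illustrative (any rounding to a whole number of directories is performed by the benchmark's own file-set tools):

```python
def directory_count(simultaneous_connections):
    """Number of file-set directories per the run-rules formula."""
    return 25 + (((400000.0 / 122000.0) * simultaneous_connections) / 5.0)

# e.g. a 122-connection run yields 105 directories:
print(round(directory_count(122), 6))  # -> 105.0
```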
The benchmark suite provides tools for the creation of the file set to be used. It is the responsibility of the benchmarker to ensure that these files are placed on the SUT so that they can be accessed properly by the benchmark. These files, and only these files shall be used as the target file set. The benchmark performs internal validations to verify the expected results. No modification or bypassing of this validation is allowed.
There are three classifications of benchmark parameters which are defined in detail in the "rc" file and the User's Guide supplied with SPECweb99:
The list below includes the benchmark parameters that have specific settings required by these run rules. Most are benchmark constants with the exception of WARMUP_TIME which has a specified minimum setting:
Test timing parameters:
Workload Mix parameters:
Other Workload parameters:
Any change to the above benchmark constants will produce an "invalid" result for that test. This means that results generated using non-default values for these constants cannot be reported as "SPECweb99" results.
SPECweb99 includes measurements of "dynamic" web content. The dynamic code executed is supplied by the benchmarker running the test. SPECweb99 has four types of dynamic operations: Standard Dynamic GET, Dynamic GET with Custom Ad Rotation, Dynamic POST, and Standard Dynamic CGI GET. In addition, there are some "housekeeping" commands that are initiated using a dynamic GET.
The pseudo code specification provided is a description of the actual work that needs to be implemented. This is required so that SPECweb99 results will be comparable. Any dynamic implementation must follow the specification exactly. This means that all operations specified, such as loops and searches, should be executed for each request. Unless otherwise specified, results or intermediate results from previous operations or requests should not be cached.
To provide the flexibility needed to implement this code on any platform and in any desired programming language or API, the subroutines listed in the pseudo code may be inlined or subdivided into smaller subroutines as long as the algorithms implemented by the subroutines are performed exactly as described.
The non-CGI dynamic operations may be executed by separate dynamic modules or may be combined into fewer modules. The dynamic code may be written in any user-mode API. A sample CGI implementation in Perl is supplied with the kit.
SPEC requires that code implementing the non-CGI dynamic operations runs in user mode. The rationale is that most ISPs are expected to work in this mode for reliability reasons in multi-user environments.
Note: Directories are specified in this document with a forward slash, '/', however, this in no way implies that one has to follow this convention. Use what works on one's operating system.
All of the dynamic requests return one of the following types of HTML pages to the client. These formats must be followed precisely for the client software to understand the returned pages. This includes the blank line between the headers and the <html> tag.
Square brackets, [ ], are used to denote where appropriate text should be substituted. This text should contain only the required information. The text may NOT be padded in any way to create a fixed length field.
Extra headers required by the web server are allowed. The formats simply show the minimum required by the SPECweb99 benchmark.
    HTTP 200 OK
    Content-type: text/html
    Content-Length: [length of all return text - excluding headers]

    <html>
    <head><title>SPECweb99 Dynamic GET & POST Test</title></head>
    <body>
    <p>SERVER_SOFTWARE = [ServerSoftware]
    <p>REMOTE_ADDR = [RemoteAddr]
    <p>SCRIPT_NAME = [ScriptName]
    <p>QUERY_STRING = [QueryString]
    <pre>
    [Contents of file FileName or buffer FileBuffer]
    </pre>
    </body></html>
    HTTP 200 OK
    Content-type: text/html
    Content-Length: [length of all return text - excluding headers]
    Set-Cookie: [CookieString]

    <html>
    <head><title>SPECweb99 Dynamic GET & POST Test</title></head>
    <body>
    <p>SERVER_SOFTWARE = [ServerSoftware]
    <p>REMOTE_ADDR = [RemoteAddr]
    <p>SCRIPT_NAME = [ScriptName]
    <p>QUERY_STRING = [QueryString]
    <pre>
    [Contents of file FileName]
    </pre>
    </body></html>
    HTTP 200 OK
    Content-type: text/html
    Content-Length: [length of all return text - excluding headers]

    <html>
    <head><title>SPECweb99 Dynamic GET & POST Test</title></head>
    <body>
    <p>SERVER_SOFTWARE = [ServerSoftware]
    <p>REMOTE_ADDR = [RemoteAddr]
    <p>SCRIPT_NAME = [ScriptName]
    <p>QUERY_STRING = [QueryString]
    <pre>
    [MessageText]
    </pre>
    </body></html>
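The required formats differ only in the optional Set-Cookie header and the contents of the <pre> block, so a reply can be assembled generically. The sketch below is illustrative only (the function name is not from the benchmark kit); it builds the minimal page and computes Content-Length over the return text, excluding the headers, with the required blank line before the <html> tag.

```python
def build_reply(server_software, remote_addr, script_name, query_string,
                payload, cookie=None):
    """Assemble a minimal SPECweb99 dynamic-reply page (illustrative sketch)."""
    body = (
        "<html>\n"
        "<head><title>SPECweb99 Dynamic GET & POST Test</title></head>\n"
        "<body>\n"
        f"<p>SERVER_SOFTWARE = {server_software}\n"
        f"<p>REMOTE_ADDR = {remote_addr}\n"
        f"<p>SCRIPT_NAME = {script_name}\n"
        f"<p>QUERY_STRING = {query_string}\n"
        "<pre>\n"
        f"{payload}\n"
        "</pre>\n"
        "</body></html>\n"
    )
    # Content-Length covers the return text only, excluding the headers.
    headers = ["HTTP 200 OK",
               "Content-type: text/html",
               f"Content-Length: {len(body)}"]
    if cookie is not None:
        headers.append(f"Set-Cookie: {cookie}")
    # The blank line between the headers and the <html> tag is required.
    return "\n".join(headers) + "\n\n" + body

reply = build_reply("demo/1.0", "10.0.0.1", "/specweb99/get",
                    "command/Fetch", "payload bytes here")
print(reply.splitlines()[0])
```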
The dynamic code must handle errors by returning an HTML page with an error message in it, using the Return HTML Page with Message format. The pseudo-code for each request contains examples of errors that might be reported and how this reporting should be done.
Errors in the Dynamic GET with Custom Ad Rotation must also set the AdId in the cookie string to a negative number.
The Standard Dynamic GET and Standard Dynamic CGI GET requests simulate simple ad rotation on a commercial web server. Many web servers use dynamic scripts to generate content for ads on web pages "on the fly", so that ad space can be sold to different customers and rotated in real time.
The Standard Dynamic CGI GET must be implemented to conform to Common Gateway Interface version 1.1 (CGI/1.1). A Perl CGI implementation is included in the SPECweb99 kit; however, any CGI-conformant code may be used. Only the Standard Dynamic CGI GET must have a non-persistent CGI implementation. Use of CGI and Fast-CGI accelerators that do not create a new process within the SUT for each CGI invocation (e.g. via fork on Unix or CreateProcess on NT) is explicitly forbidden.
The Standard Dynamic GET code is used for two "housekeeping" functions needed during the benchmark, Reset and Fetch. The pseudo-code for these operations is described in the Housekeeping Pseudo-code section.
    Begin:
      Parse QueryString
      If QueryString == "command/..."
        goto Housekeeping Pseudo-code
      Endif
      Make substitutions in HTML return page for the following:
        Server_Software
        Remote_Addr
        Script_Name
        QueryString
      Access file 'RootDir/QueryString'
      If error found while accessing file then
        Return HTML Page with Message = error_message
      Else
        Return HTML Page with File = 'RootDir/QueryString'
      Endif
    End
    GET /specweb99/specweb99-GET.dll?/file_set/dir00001/class2_3
    GET /specweb99/cgi/specweb99.pl?/mydir/file_set/dir00123/class1_0 HTTP/1.1
The SPECweb99 benchmark uses 2 housekeeping functions to run the test. They are invoked using a dynamic GET. The code to implement them may be in a separate module from any other code for the benchmark. Before the test, a Reset function is invoked to clear/reset some files. After the test, a Fetch function is used to retrieve the PostLog.
    Begin:
      Parse QueryString
      Make substitutions in HTML return page for the following:
        Server_Software
        Remote_Addr
        Script_Name
        QueryString
      If input == "command/Fetch" then
        Return HTML with file = PostLog
        If error found while accessing PostLog then
          Return HTML Page with Message = error_message
      Elseif input == "command/Reset&[Args]" then
        Parse Args into Maxload, PointTime, MaxThreads, ExpiredList, UrlRoot
          The format is as follows:
            command/Reset&maxload=[MaxLoad]&pttime=[PointTime]&maxthre\
            ads=[MaxThreads]&exp=[ExpiredList]&urlroot=[UrlRoot]
        Call the program "upfgen99" as follows:
          upfgen99 -n [MaxLoad] -t [MaxThreads] -C [RootDir]
        Call the program "cadgen99" as follows:
          cadgen99 -C [RootDir] -e [PointTime] -t [MaxThreads] [ExpiredList]
        Reset PostLog to Initial State
        If any errors found then
          Return HTML Page with Message = error_message
        Endif
        Return HTML Page with Message = ""
      Endif
    End
    GET /specweb99/specweb99-GET.dll?command/Fetch
    GET /specweb99/cgi/specweb99.pl?command/Reset&maxload=500&pttime=89898\
    989&maxthread=300&exp=30,190&urlroot=http://www.myserver.com/specweb99
The Dynamic GET with Custom Ad Rotation scheme models the tracking of users and their preferences to provide customized ads for their viewing. In the SPECweb99 implementation, a user's ID number is passed as a Cookie along with the ID number of the last ad seen by that user. The user's User Personality record is retrieved and compared against demographic data for ads in the Custom Ad database, starting at the record after the last ad seen. When a suitable match is found, the ad data is returned in a Cookie.
In addition to the Cookies, the request contains a filename to return. Depending on the name of the file to return, it is either returned as is (just like the Standard Dynamic GET request) or it is scanned for a "template" string and returned with the template filled in with customized information.
The symbol '&' is the bit-wise AND operator. The back-slash, '\' is used as a line continuation character in strings. It is not actually part of the string.
Note: Errors in handling the User.Personality and Custom.Ad files should be reported by setting the AdId to a negative number in the CookieString. These errors are not shown in the pseudo-code, but should be handled appropriately. In addition, an error message may be sent using the Return HTML Page with Message = error_message format.
    Begin:
      Make substitutions in HTML return page for the following:
        Server_Software
        Remote_Addr
        Script_Name
        QueryString
      FileName = 'RootDir/QueryString'
      Parse Cookie string into MyUser and Last_Ad.
        The format of the cookie is as follows (the order of keys and
        values is fixed):
          my_cookie=user_id=[MyUser]&last_ad=[Last_ad]
      Calculate UserIndex into User.Personality file
        UserIndex = MyUser - 10000
      Find User.Personality record using UserIndex
      If no matching record is found
        CookieString = "found_cookie=Ad_Id=-1&Ad_weight=00&Expired=1"
        If FileName contains string "class1" or "class2"
          FileBuffer = CustomAdScan Subroutine (FileName, AdId)
          Return HTML Page with File=FileBuffer and Cookie=CookieString
        Else
          Return HTML Page with File=FileName and Cookie=CookieString
        Endif
      Endif
      Set Ad_index = Last_ad + 1
      If Ad_index > 359 then
        Ad_index = 0
      Endif
      Do For Each Ad in Custom.Ads starting where Ad_index == Ad_id
        Retrieve the record
        Parse Custom.Ads record into AdDemographics, Weightings,
          Minimum_Match_Value, Expiration_Date
        CombinedDemographics = ( AdDemographics & UserDemographics )
        Ad_weight = 0
        If ( CombinedDemographics & GENDER_MASK ) then
          Ad_weight = Ad_weight + Gender_wt
        Endif
        If ( CombinedDemographics & AGE_GROUP_MASK ) then
          Ad_weight = Ad_weight + Age_group_wt
        Endif
        If ( CombinedDemographics & REGION_MASK ) then
          Ad_weight = Ad_weight + Region_wt
        Endif
        If ( CombinedDemographics & INTEREST1_MASK ) then
          Ad_weight = Ad_weight + Interest1_wt
        Endif
        If ( CombinedDemographics & INTEREST2_MASK ) then
          Ad_weight = Ad_weight + Interest2_wt
        Endif
        If ( Ad_weight >= Minimum_Match_Value ) then
          If current_time > Expiration_Date then
            Expired = True (1)
          Else
            Expired = False (0)
          Endif
          CookieString = "found_cookie=Ad_id=<Ad_index>&Ad_weig\
          ht=<Ad_weight>&Expired=<Expired>"
          If FileName contains string "class1" or "class2"
            FileBuffer = CustomAdScan Subroutine (FileName, AdId)
            Return HTML Page with File=FileBuffer and Cookie=CookieString
          Else
            Return HTML Page with File=FileName and Cookie=CookieString
          Endif
        Endif
        Ad_index = Ad_index + 1
        If Ad_index > 359 then
          Ad_index = 0
        Endif
        Continue Processing Custom.Ads records until Ad_index = Last_ad
      Enddo
      CookieString = "found_cookie=Ad_id=<Ad_index>&Ad_weight=<Ad_weigh\
      t>&Expired=<Expired>"
      If FileName contains string "class1" or "class2"
        FileBuffer = CustomAdScan Subroutine (FileName, AdId)
        Return HTML Page with File=FileBuffer and Cookie=CookieString
      Else
        Return HTML Page with File=FileName and Cookie=CookieString
      Endif
    End

    Begin Subroutine CustomAdScan (FileName, AdId)
      Read File=FileName into FileBuffer
      Do until End of FileBuffer
        Find String <!WEB99CAD><IMG SRC="/file_set/dirNNNNN/classX_Y"><!/WEB99CAD>
        Replace string NNNNN with (Ad_Id / 36) padded to 5 digits
        Replace string X with ((Ad_Id % 36) / 9)
        Replace string Y with (Ad_Id % 9)
      Enddo
      Return FileBuffer
    End
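The NNNNN/X/Y substitution in the CustomAdScan subroutine maps an Ad_Id onto a file-set path. A minimal sketch of just that arithmetic (integer division throughout; the function name is illustrative):

```python
def ad_target_path(ad_id):
    """Map an Ad_Id to the substituted /file_set path used by CustomAdScan."""
    dir_num = ad_id // 36            # NNNNN, padded to 5 digits
    class_num = (ad_id % 36) // 9    # X
    file_num = ad_id % 9             # Y
    return f"/file_set/dir{dir_num:05d}/class{class_num}_{file_num}"

print(ad_target_path(120))  # -> /file_set/dir00003/class1_3
print(ad_target_path(359))  # -> /file_set/dir00009/class3_8
```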
Note 1: The entire file must be scanned on each Custom Ad Scan request, and the results of that scan used for the string replacement.
Note 2: The minimum search string to be matched is <!WEB99CAD><IMG SRC="/file_set/dir
The User.Personality file contains demographic information about each user. It is created/recreated by the Reset function of the Standard GET and contains randomly generated data. It is not altered during a test point. The file is written in ASCII for readability, but is relatively small, so can be buffered in memory in any format. The file may change during the test, so the dynamic code must check for this and update any in-memory structures. The file is in numerical order by User_id, starting at 0, with no holes in the sequence. The maximum User_id is determined at Reset time by the maxload input to the Reset command.
Each User.Personality record contains a User_id and a UserDemographics field. The printf format string is:
"%5d %8X\n", User_id, UserDemographics
        0 18880200
        1 24440100
        2 18808020
        3 24400401
        4 18840200
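Records in this ASCII format can be read back trivially; a minimal sketch (the function name is illustrative), parsing the "%5d %8X" layout:

```python
def parse_personality(line):
    """Parse one User.Personality record written with "%5d %8X"."""
    user_id, demographics = line.split()
    return int(user_id), int(demographics, 16)  # hex demographics field

uid, demo = parse_personality("    2 18808020")
print(uid, hex(demo))  # -> 2 0x18808020
```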
The Demographics structure is a 32-bit integer and contains the following fields:
             Unused  Gender    Age_Group       Region       Interest_1
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
    High   | 0 | 0 | M | F | Y | A | B | E | N | S | E | W | 0 | 1 | 2 | 3 |
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+

              (...Interest_1)                Interest_2
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
    Low    | 4 | 5 | 6 | 7 | 8 | 9 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
The Custom.Ads file contains 360 ads, each with a demographic profile, demographic field weightings and an expiration time. Like the User Personality file, it can be buffered in memory, however, the file may change during the test, so the dynamic code must check for this and update any in-memory structures. The file is in numerical order by Ad_id, starting at 0, with no holes in the sequence. The maximum Ad_id is 359.
Each Custom.Ads record contains an Ad_id, AdDemographics, Weightings, Minimum_Match_Value, and Expiration_Time field. The printf format string is:
"%5d %8X %8X %3d %10d\n", Ad_id, AdDemographics, Weightings, Minimum_Match_Value, Expiration_Time
        0 18820080  9048090  41  870365784
        1 18140200  5F5E100  46  870365784
        2 21408040  3887569  49  870365784
        3 12101008  1BC9DE4  43  838741584
        4 24480001   925A71  37  870365784
The AdDemographics is the same structure described for the User Personality file. The Weightings structure is a 32-bit integer containing the following data:
             Unused          Unused          Unused         Gender_wt
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
    High   | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 |
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+

            Age_Group_wt     Region_wt    Interest_1_wt   Interest_2_wt
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
    Low    | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 |
           +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
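Under one reading of the diagrams above, the five weights occupy the low five nibbles of the Weightings word, Gender_wt highest and Interest_2_wt lowest. The sketch below extracts them on that assumption (the nibble positions and the function name are assumptions, not taken from the benchmark kit):

```python
# ASSUMPTION: weights are packed into the low five 4-bit nibbles of
# Weightings, ordered Gender_wt (highest) .. Interest_2_wt (lowest).
def parse_weights(weightings):
    names = ["Gender_wt", "Age_Group_wt", "Region_wt",
             "Interest_1_wt", "Interest_2_wt"]
    return {name: (weightings >> shift) & 0xF
            for name, shift in zip(names, range(16, -4, -4))}

# Using the Weightings value 925A71 from the sample records above:
print(parse_weights(0x925A71))
```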
    GET /specweb99/specweb99-GET.dll?/file_set/dir00001/class2_3 HTTP/1.1
    Cookie: my_cookie=user_id=10012&last_ad=120

    GET /specweb99/cgi/specweb99.pl?/file_set/dir00001/class2_3 HTTP/1.1
    Cookie: my_cookie=user_id=10012&last_ad=120
The Dynamic POST request models user registration at an ISP site. In the implementation, the POST data and some other information is written to a single ASCII file, the PostLog.
It is the intent of SPEC that the implementation of the dynamic POST functionality be such that a post operation can be validated at any time during the benchmark run by any client issuing a dynamic request subsequent to the SUT's completion of the response to that POST request.
All POST requests contain a Cookie value that is written into the post log and also sent back to the requester with a Set-Cookie header.
    Begin:
      Make substitutions in HTML return page for the following:
        Server_Software
        Remote_Addr
        Script_Name
        QueryString
      Parse PostInput - a sample format is as follows (keys may be
        received in any order):
          urlroot=[urlroot]&dir=[Dir#]&class=[Class#]&num=[File#]&client=[Client#]
      If PostInput has incorrect format then
        Return HTML Page with Message = error_message
      Endif
      Parse Cookie string to get MyCookie.
        The format is as follows (the order of the keys and values is fixed):
          my_cookie=user_id=[MyCookie]&last_ad=[IgnoredField]
      Filename = [urlroot]/dir[5-digit Dir#]/class[Class#]_[File#]
        (for example, the POST input of
          urlroot=/specweb99/file_set&dir=00123&class=1&num=1&client=10003
        would make Filename = /specweb99/file_set/dir00123/class1_1)
      Get Current Timestamp
      Get Process or Thread ID
      Do_atomically (for example, using a file lock or other mutex):
        Increment PostLog RecordCount record and rewrite PostLog
        Append new PostLog Record to end of PostLog
          (refer to Post Log Format section to see required format)
      End Do_atomically
      If writing PostLog gets errors
        Return HTML Page with Message = error_message
      Endif
      Access file 'RootDir/Filename'
      If error found while accessing file then
        Return HTML Page with Message = error_message
      Else
        CookieString = "my_cookie=<MyCookie>"
        Return HTML Page with File='RootDir/FileName' and Cookie=CookieString
      Endif
    End
The PostLog has a fixed format that must be adhered to by the dynamic POST implementation. Differing from this format will cause the post-test validation to fail.
1st Line: RecordCount with a field-width of 10. The initial state of the PostLog should have this line, with the value of 0. The file is initialized by the dynamic command/Reset function, described in the Housekeeping Functions section.
Other Lines: All other PostLog records contain the following fields: RecordId, TimeStamp, Pid, Dir#, Class#, File#, Client#, FileName, Pid, MyCookie. The following printf format should be followed:
"%10d %10d %10d %5d %2d %2d %10d %-60.60s %10d %10d\n", RecordNum, TimeStamp, Pid, Dir#, Class#, File#, Client#, FileName, Pid, MyCookie
Sample PostLog: (NOTE: the '\' indicates line continuation and is not in the real file)
             3
             1  868560245       2155     0  1  1      10005 /www/docs/specw\
    eb99/file_set/dir00000/class1_1       2155      20005
             2  868560245       2155     0  1  4      10009 /www/docs/specw\
    eb99/file_set/dir00000/class1_4       2155      30009
             3  868560245       2154     0  2  4      10014 /www/docs/specw\
    eb99/file_set/dir00000/class2_4       2154      30014
In a POST, the blank line following the last header is required before the POST input is given.
    POST /specweb99/isapi/specweb99-POST.so HTTP/1.1
    Host: bbb116
    Content-Length: 61
    Cookie: my_cookie=10011

    urlroot=/specweb99/file_set/&dir=00000&class=0&num=0&client=1

    POST /specweb99/cgi/specweb99-POST.pl HTTP/1.1
    Host: server
    Content-Length: 61
    Cookie: my_cookie=10033

    urlroot=/specweb99/file_set/&dir=00000&class=0&num=0&client=1
The reported metric, SPECweb99, is the median of the results of 3 consecutive valid iterations of the benchmark, using one invocation of manager with the requested load in simultaneous connections. The manager script must be used to initiate the test from the "prime client", and the SPECweb99 client daemon must be running on each load generator in order to produce a valid result. The manager script will use the specperl supplied in the kit.
Each iteration will consist of a 5-minute ramp up period and a 20-minute measurement period followed by a 5-minute ramp down period. Furthermore, the start of the first iteration will be preceded by a warm up period of at least 20 minutes. A load generator will be considered conforming when it achieves an aggregate bit rate of at least 320,000 bits/second. A run is considered valid when at least 95% of the requested connections are conforming. The result value for each run is the number of conforming simultaneous connections.
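The validity rule above reduces to a simple count; a minimal sketch (function name illustrative, taking one measured aggregate bit rate per requested connection):

```python
def run_result(requested, per_connection_bit_rates):
    """A run is valid if at least 95% of the requested connections sustain
    an aggregate bit rate of at least 320,000 bits/second; the run's
    result value is the number of conforming connections."""
    conforming = sum(1 for rate in per_connection_bit_rates
                     if rate >= 320_000)
    valid = conforming >= 0.95 * requested
    return valid, conforming

valid, score = run_result(100, [330_000] * 97 + [250_000] * 3)
print(valid, score)  # -> True 97
```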
The metric SPECweb99 may not be associated with any estimated results. This includes adding, multiplying or dividing measured results to create a derived metric.
The report of results for the SPECweb99 benchmark is generated in ASCII, Postscript, PDF, and HTML formats by the provided SPEC tools. These tools may not be changed, except for portability reasons with prior SPEC approval. The tools perform error checking and will flag some error conditions as resulting in an "invalid run". However, these automatic checks are only there for debugging convenience, and do not relieve the benchmarker of the responsibility to check the results and follow the run and reporting rules.
Detailed information on the content and format of the result reports
is included in the SPECweb99 User's Guide.
SPEC encourages use of the SPECweb99 benchmark in academic and research environments. It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of licensees submitting to the SPEC web site. For example, a research environment may use early prototype hardware or software that simply cannot be expected to function reliably for the length of time required to complete a compliant data point, or may use research hardware and/or software components that are not generally available. Nevertheless, SPEC encourages researchers to obey as many of the run rules as practical, even for informal research. SPEC respectfully suggests that following the rules will improve the clarity, reproducibility, and comparability of research results.
Where the rules cannot be followed, the deviations from the rules must be disclosed. SPEC requires these non-compliant results be clearly distinguished from results officially submitted to SPEC or those that may be published as valid SPECweb99 results. For example, a research paper can use simultaneous connections or ops/second but may not refer to them as SPECweb99 results if the results are not compliant.
The system configuration information that is required to duplicate published performance results must be reported. The following list is not intended to be all-inclusive; nor is each performance neutral feature in the list required to be described. The rule of thumb is if it affects performance or the feature is required to duplicate the results, describe it.
The following SUT hardware components must be reported:
The following SUT software components must be reported:
A brief description of the network configuration used to achieve the benchmark results is required. The minimum information to be supplied is:
The following client hardware components must be reported:
The dates of general customer availability, by month and year, must be listed for the major components: hardware, HTTP server, and operating system. All system, hardware, and software features must be generally available on or before the date of publication, or within 3 months of the date of publication. When multiple components have different availability dates, the latest availability date must be listed.
Products are considered generally available if they are orderable by ordinary customers and ship within a reasonable time frame. This time frame is a function of the product size and classification, and common practice. The availability of support and documentation for the products must coincide with the release of the products.
Hardware products that are still supported by their original or primary vendor may be used if their original general availability date was within the last five years. The five-year limit is waived for hardware used in client systems.
Software products that are still supported by their original or primary vendor may be used if their original general availability date was within the last three years.
In the disclosure, the benchmarker must identify any component that is no longer orderable by ordinary customers.
If pre-release hardware or software is tested, then the test sponsor represents that the performance measured is generally representative of the performance to be expected on the same configuration of the released system. If the sponsor later finds that the released system's performance is more than 5% lower than that reported for the pre-release system, then the sponsor shall resubmit a corrected test result.
The reporting page must list the date the test was performed, month and year, the organization which performed the test and is reporting the results, and the SPEC license number of that organization.
This section is used to document:
The following additional information may be required to be provided for SPEC's results review:
In order to minimize disk space requirements, the submitter is only required to keep the last 25 minutes of the log file for the duration of the review period.
Once you have a compliant run and wish to submit it to SPEC for review, you will need to provide the following:
Once you have the submission ready, please e-mail it to subweb99@spec.org.
SPEC encourages the submission of results for review by the relevant subcommittee and subsequent publication on SPEC's web site. Licensees may publish compliant results independently; however, any SPEC member may request a full disclosure report for that result and the test sponsor must comply within 10 business days. Issues raised concerning a result's compliance to the run and reporting rules will be taken up by the relevant subcommittee regardless of whether or not the result was formally submitted to SPEC.
SPEC provides client driver software, which includes tools for running the benchmark and reporting its results. This software implements various checks for conformance with these run and reporting rules; therefore, the SPEC software must be used. Necessary substitution of equivalent functionality (e.g. file set generation) may be done only with prior approval from SPEC. Any such substitution must be reviewed and deemed "performance-neutral" by the OSSC.
Driver software includes C code (ANSI C) and Perl scripts (perl5). SPEC may provide pre-built versions of Perl 5.005_03 (i.e. specperl) and the driver code for some vendor platforms, or these may be recompiled from the provided source. SPEC requires the user to provide OS and web server software to support the RFC's as described in section 2.
Complete details on installing, building, and configuring the SPECweb99 benchmark can be found in the User's Guide included in the release distribution.