Standard Performance Evaluation Corporation

SPECsfs2008 FAQ

  1. What is SPECsfs2008 and how does this benchmark compare to other network file system (NFS/CIFS) benchmarks?
  2. Does this benchmark replace the SPEC SFS 3.0 suite?
  3. Can SPECsfs2008 results be compared to SFS 3.0 results?
  4. What improvements have been made to SPECsfs2008?
  5. How was the SPECsfs2008 workload determined?
  6. What is the metric for SPECsfs2008?
  7. Are the metrics for SPECsfs2008 different from the metric for SFS 3.0?
  8. How widespread are NFS and CIFS?
  9. What is the correlation between the TPC (Transaction Processing Performance Council) and SPEC (Standard Performance Evaluation Corporation) benchmarks, including SPECsfs2008?
  10. Is SPECsfs2008 a CPU-intensive or I/O-intensive benchmark?
  11. For what computing environment is SPECsfs2008 designed?
  12. Can users measure NFS performance for workloads other than the one provided within SPECsfs2008?
  13. To what extent is the server's measured performance within SPECsfs2008 affected by the client's performance?
  14. How does SPEC validate numbers that it publishes?
  15. Are the reported SPECsfs2008 configurations typical of systems sold by vendors?
  16. Do the SPECsfs2008 run and disclosure rules allow results for a clustered server?
  17. Why do so few published results approach SPEC's response-time threshold cutoff of 20 milliseconds?
  18. Why was the response-time threshold reduced from 40 ms for SFS 3.0 to 20 ms for SPECsfs2008?
  19. What resources are needed to run the SPECsfs2008 benchmark?
  20. What is the estimated time needed to set up and run SPECsfs2008?
  21. What shared resources does SPECsfs2008 use that might limit performance?
  22. SPEC's CPU2006 benchmark defines compiler optimization flags that can be used in testing. Does SPECsfs2008 set tuning parameters?
  23. Can a RAM disk be used within a SPECsfs2008 configuration?
  24. How will the choice of networks affect SPECsfs2008 results?
  25. Is SPECsfs2008 scalable with respect to CPU, cache, memory, disks, controllers and faster transport media?
  26. What is the price of a SPECsfs2008 license and when will it be available?
  27. How much is an upgrade from SFS 3.0 to SPECsfs2008?
  28. Can users get help in understanding how to run SPECsfs2008?

Running and troubleshooting the benchmark

  1. Do I need to measure NFS and CIFS?
  2. How do I get started running the SPECsfs2008 benchmark?
  3. I am running into problems setting up and running the benchmark. What can I do?
  4. I have read the SPECsfs2008 User's Guide. But I am still running into problems. What can I do next?
  5. How does one abort a run?
  6. For a valid run, which parameters are required to be unchanged?
  7. Is there a quick way to debug a testbed?
  8. When I specify 1000 NFS ops/sec in the sfs_nfs_rc file, the results report only 996 NFS ops/sec requested. Why is it less?
  9. The number of operations/second that I achieve is often slightly higher or slightly lower than the requested load. Is this a problem?

Tuning the Server

  1. What are a reasonable set of parameters for running the benchmark?
  2. When I request loads of 1000, 1300, 1600 NFSops, I get 938, 1278, and 1298 NFSops, respectively. Why do I not get the requested load?
  3. How do I increase the performance of our server?

Submission of Results

  1. We have a valid set of results. How do we submit these results to SPEC?

Question 1: What is SPECsfs2008 and how does this benchmark compare to other network file system (NFS/CIFS) benchmarks?

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corp.'s benchmark that measures CIFS and NFS file server throughput and response time. It differs from other file server benchmarks in that it provides a standardized method for comparing performance across different vendor platforms. The benchmark was written to be client-independent and vendor-neutral. Results are validated through peer review before publication on SPEC's public Web site http://www.spec.org/sfs2008/.

Question 2: Does this benchmark replace the SPEC SFS 3.0 suite?

Yes. Now that SPECsfs2008 is available, SFS 3.0 licenses are no longer being sold. Results from SFS 3.0 will no longer be accepted by SPEC for publication.

Question 3: Can SPECsfs2008 results be compared to SFS 3.0 results?

No. Although the benchmarks are similar in many ways, they cannot be compared. Because SPECsfs2008 uses a different file selection algorithm, its results can be compared only with other SPECsfs2008 results.

Question 4: What improvements have been made to SPECsfs2008?

In addition to general code improvements, SPECsfs2008 includes the following major enhancements:

  1. A workload to test servers accessible via the CIFS protocol.
  2. Support for Windows and Mac OS X clients.
  3. Enhancements to the NFS workload.
  4. Removal of the dependency on UNIX-specific commands, such as rsh and rcp.
  5. A more flexible reporting form that allows a wider array of modern system configurations to be accurately detailed.

Question 5: How was the SPECsfs2008 workload determined?

The SPECsfs2008 NFS and CIFS workloads are based primarily on data collected from tens of thousands of file servers built by member companies and deployed by customers in a variety of file-serving application environments. The bulk of the data was collected by mining databases that hold information received via automatic reporting systems embedded in member companies' products. To provide further detail, NFS/CIFS packet traces were collected from a number of customer systems and member companies' internal systems. The resulting workload in SPECsfs2008 more accurately represents a composite of the workloads seen in current file-serving environments.

Question 6: What is the metric for SPECsfs2008?

SPECsfs2008 has two performance measurement metrics: SPECsfs2008_nfs for NFS, and SPECsfs2008_CIFS for CIFS. Both metrics include a throughput measure (in operations per second) and an overall response time measure (the average response time per operation).

Question 7: Are the metrics for SPECsfs2008 different from the metric for SFS 3.0?

Yes. SPECsfs2008 retains metrics similar to those used in SFS 3.0, but it now also provides metrics for CIFS. It reports overall response time and peak throughput: the higher the peak throughput, the better, and the lower the overall response time, the better. The overall response time is an indicator of how quickly the system under test responds to NFS or CIFS operations over the entire range of the tested load. In real-world situations, servers are not run continuously at peak throughput, so the response time at peak throughput alone provides only minimal information. The overall response time is a measure of how the system will respond under an average load. Mathematically, the value is derived by dividing the area under the response-time-versus-throughput curve by the peak throughput.
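
As a sketch of the idea (the authoritative definition is in the SPECsfs2008 Run and Reporting Rules), let R(T) be the measured average response time at requested throughput T and let T_peak be the peak throughput reached in the run. In LaTeX notation, the overall response time is

    \mathrm{ORT} = \frac{1}{T_{\mathrm{peak}}} \int_{0}^{T_{\mathrm{peak}}} R(T)\, dT

that is, the area under the response-time-versus-throughput curve divided by the peak throughput. In practice the curve is known only at the reported data points, so the area is approximated from those points.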

Question 8: How widespread are NFS and CIFS?

NFS has been shipping on systems for more than twenty years and is available for most operating systems. CIFS is the dominant remote file system protocol for Windows systems.

Question 9: What is the correlation between the TPC (Transaction Processing Performance Council) and SPEC (Standard Performance Evaluation Corporation) benchmarks, including SPECsfs2008?

There is no correlation; the benchmarks present very different workloads on the systems under test and measure different aspects of system performance.

Question 10: Is SPECsfs2008 a CPU-intensive or I/O-intensive benchmark?

SPECsfs2008 is a system-level benchmark that heavily exercises CPU, mass storage and network components. The greatest emphasis is on I/O, especially as it relates to operating system and file system software. To obtain the best performance for a system running SPECsfs2008, the vendor will typically add hardware (memory, disk controllers, disks, network controllers, buffer cache, and so on) as needed to help alleviate I/O bottlenecks and to ensure that server CPUs are fully utilized.

Question 11: For what computing environment is SPECsfs2008 designed?

The benchmark was developed for load-generating clients running under UNIX or Windows. But because the load-generating clients execute the benchmark code, SPECsfs2008 can be used to evaluate the performance of any NFS or CIFS file server, regardless of the server's underlying environment.

Question 12: Can users measure NFS performance for workloads other than the one provided within SPECsfs2008?

Yes, users can measure their own workloads by making changes to the SPECsfs2008 benchmark mix parameters to reflect the new measurements. The SPECsfs2008 User's Guide details how this can be done. Workloads created by users cannot, however, be compared with SPECsfs2008 results, nor can they be published in any form, as specified within the SPECsfs2008 license.

Question 13: To what extent is the server's measured performance within SPECsfs2008 affected by the client's performance?

SPEC has written SPECsfs2008 to minimize the effect of client performance on SPECsfs2008 results.

Question 14: How does SPEC validate numbers that it publishes?

Results published on the SPEC Web site have been reviewed by SPEC members for compliance with the SPECsfs2008 run and disclosure rules, but there is no monitoring beyond that compliance check. The vendors that performed the tests and submitted the performance numbers have sole responsibility for the results. SPEC is not responsible for any measurement or publication errors.

Question 15: Are the reported SPECsfs2008 configurations typical of systems sold by vendors?

Yes and no. They are similar to large server configurations, but the workload is heavier than that found on smaller server configurations. SPEC has learned from experience that today's heavy workload is tomorrow's light workload. For some vendors, the configurations are typical of what they see in real customer environments, particularly those incorporating high-end servers. For other vendors, SPECsfs2008 configurations might not be typical.

Question 16: Do the SPECsfs2008 run and disclosure rules allow results for a clustered server?

Yes, cluster configurations are allowed as long as they conform strictly to the even distribution of all resources as defined by the SPECsfs2008 run and disclosure rules.

Question 17: Why do so few published results approach SPEC's response-time threshold cutoff of 20 milliseconds?

It is important to understand first that SPECsfs2008 run rules do not require that the throughput curve be carried out to 20 ms; they only state that the results cannot be reported for a response time higher than 20 ms. There are several reasons why results do not approach the threshold cutoff. Optimally configured servers often will achieve their maximum throughput at response times lower than the cutoff. Additionally, some vendors emphasize maximum throughput while others concentrate on fast response time. It does not indicate a problem with the results if the curve is not carried out to 20 ms, and those reviewing results should not try to predict what the throughput curve might be past the reported point.

Question 18: Why was the response-time threshold reduced from 40 ms for SFS 3.0 to 20 ms for SPECsfs2008?

The lower response-time threshold reflects advances in server technologies since the release of SFS 3.0.

Question 19: What resources are needed to run the SPECsfs2008 benchmark?

In addition to a server, a test bed includes several clients and an appropriate number of networks. Ideally, the server should have enough memory, disks and network hardware to saturate the CPU. The test bed requires at least one network. A minimum of 256 MB of memory is required for each client, although in most cases 512 MB is needed. To facilitate the accuracy of reported vendor results, SPECsfs2008 includes its own complete NFS and CIFS implementations. Examples of typical load-generating configurations can be found on the SPEC Web site: http://www.spec.org/sfs2008/.

Question 20: What is the estimated time needed to set up and run SPECsfs2008?

Hardware setup and software installation time depend on the size of the server and the complexity of the test bed. Many servers require large and complex test beds. The SPECsfs2008 software installs relatively quickly. A SPECsfs2008 submission from a vendor includes at least 10 data points, with each data point taking from 30 to 90 minutes to complete, so a full set of measurements typically takes roughly 5 to 15 hours of run time.

Question 21: What shared resources does SPECsfs2008 use that might limit performance?

Shared resources that might limit performance include CPU, memory, disk controllers, disks, network controllers, network concentrators, network switches, clients, etc.

Question 22: SPEC's CPU2006 benchmark defines compiler optimization flags that can be used in testing. Does SPECsfs2008 set tuning parameters?

When submitting results for SPEC review, vendors are required to supply a description of all server tuning parameters within the disclosure section of the reporting page.

Question 23: Can a RAM disk be used within a SPECsfs2008 configuration?

SPEC enforces strict storage rules for stability. Generally, RAM disks do not meet these rules, since they often cannot survive cascading failure-recovery requirements unless an uninterruptible power supply (UPS) with long survival times is used.

Question 24: How will the choice of networks affect SPECsfs2008 results?

Different link types and even different implementations of the same link type might affect the measured performance -- for better or worse -- of a particular server. Consequently, the results measured by clients in these situations might vary as well.

Question 25: Is SPECsfs2008 scalable with respect to CPU, cache, memory, disks, controllers and faster transport media?

Yes, like SFS 3.0, the new benchmark is scalable as users migrate to faster technologies.

Question 26: What is the price of a SPECsfs2008 license and when will it be available?

SPECsfs2008 is available now on CD-ROM for $1600. Contact the SPEC office:

  Standard Performance Evaluation Corporation (SPEC)
  6585 Merchant Place, Suite 100
  Warrenton, VA 20187, USA
  Phone: 540-349-7878
  Fax: 540-349-5992
  E-Mail: info@spec.org

Question 27: How much is an upgrade from SFS 3.0 to SPECsfs2008?

The SPECsfs2008 benchmark is a major new release. The upgrade is $700 for those who purchased SFS 3.0 licenses within 90 days prior to the SPECsfs2008 release. Any purchases after that will be at the full price. Upgrades are available through the SPEC office.

Question 28: Can users get help in understanding how to run SPECsfs2008?

The majority of questions should be answered in the SPECsfs2008 User's Guide. There is also useful information on the SPEC Web site: http://www.spec.org/sfs2008/.

Running and troubleshooting the benchmark

Question 29: Do I need to measure NFS and CIFS?

No. NFS and CIFS are separate workloads and you only need to measure and disclose the ones you want.

Question 30: How do I get started running the SPECsfs2008 benchmark?

Please read the SPECsfs2008 User's Guide in its entirety.

Question 31: I am running into problems setting up and running the benchmark. What can I do?

The most common problem is that the file server's file systems are not correctly mounted on the clients. Most problems relating to the SPECsfs2008 benchmark can be resolved by referring to the appropriate sections of the User's Guide, including this FAQ.

Question 32: I have read the SPECsfs2008 User's Guide. But I am still running into problems. What can I do next?

Looking at the sfslog.* and sfscxxx.* files can give you an idea as to what may have gone wrong. In addition, you can check the Troubleshooting SPECsfs2008 web page on the SPEC website. And, as a last resort, you can contact SPEC at support@spec.org. It is assumed that such calls/emails are from people who have read the SPECsfs2008 User's Guide completely, and have met all the prerequisites for setting up and running the benchmark.

Question 33: How does one abort a run?

The benchmark can be aborted by simply stopping the SfsManager. This kills all SFS-related processes on all clients and on the prime client. The processes are sfscifs, sfsnfs3, sfs_syncd and sfs_prime.

Question 34: For a valid run, which parameters are required to be unchanged?

This information is provided in the SPECsfs2008 Run and Reporting Rules and in the sfs_nfs_rc and sfs_cifs_rc files, and it is enforced by the benchmark. If invalid parameter values are selected, the benchmark reports an invalid run.

Question 35: Is there a quick way to debug a testbed?

Try the following steps:

  1. Read the SPECsfs2008 User's Guide.
  2. Ping the server from the client.
  3. Try mounting the server's file systems or shares from the client using the client's real NFS or CIFS implementation.
  4. Ping from the prime client to the other clients, and vice versa.
  5. Run the benchmark with one client and one file system.

Question 36: When I specify 1000 NFS ops/sec in the sfs_nfs_rc file, the results report only 996 NFS ops/sec requested. Why is it less?

The sfs_nfs_rc file specifies the total number of NFS ops/sec across all of the clients used. Because the benchmark only allows specifying an even number of NFS ops/sec for each client, the actual requested ops/sec may be less due to rounding down. For example, 1000 NFS ops/sec requested over 6 clients results in each client generating 166 NFS ops/sec, for an aggregate of 996 NFS ops/sec.
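
The effect of splitting the load can be illustrated with a small Python sketch. This is illustrative only and not part of the benchmark code; it assumes each client is simply assigned the rounded-down whole number of ops/sec (the benchmark's actual rounding rule may differ slightly, but it matches the example above):

    # Illustrative sketch: how a requested load can shrink slightly once it
    # is split evenly across load-generating clients (assumption: each client
    # gets the same whole number of ops/sec, rounded down).
    def per_client_load(total_ops: int, num_clients: int) -> tuple[int, int]:
        ops_each = total_ops // num_clients        # whole ops/sec per client
        return ops_each, ops_each * num_clients    # aggregate actually requested

    each, aggregate = per_client_load(1000, 6)
    print(each, aggregate)   # prints "166 996", matching the example above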

Question 37: The number of operations/second that I achieve is often slightly higher or slightly lower than the requested load. Is this a problem?

No. The benchmark generates operations using random selection and dynamic feedback to pace itself correctly. This results in small differences between the requested and achieved loads.

Tuning the Server

Question 38: What are a reasonable set of parameters for running the benchmark?

Study existing results pages that have configuration information similar to your system configuration.

Question 39: When I request loads of 1000, 1300, 1600 NFSops, I get 938, 1278, and 1298 NFSops, respectively. Why do I not get the requested load?

This may happen when one has reached the server limit for a particular configuration. One needs to determine the bottleneck, and possibly tune and/or enhance the server configuration.

Question 40: How do I increase the performance of our server?

One may need to add, as necessary, one or more of the following: processors, memory, disks, controllers, etc.

Submission of Results

Question 41: We have a valid set of results. How do we submit these results to SPEC?

See the Submission and Review Process section, which also contains documentation for the new submission tool.