In Search of Credible Composites



Editor's Forum

Composite performance numbers are somewhat like film treatments of novels: a lot of decisions must be made in the name of condensation and mass appeal, and rarely are those decisions universally applauded.

In performance benchmarking, the risks go beyond the subjective judgments about what is and isn't important in the grand scheme of things. A composite constructed a certain way could give an advantage to one vendor's systems over another's. But just as consumers like to see blockbuster novels come to the big screen, computer users like to see the one number that sums everything up.
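To make that risk concrete, consider a small hypothetical sketch. The systems, tests, and numbers below are invented purely for illustration and come from no GPC benchmark; the point is only that the same raw results can rank two systems differently depending on nothing more than whether the composite is an arithmetic or a geometric mean.

    # Illustrative only: made-up results for two hypothetical systems on two
    # hypothetical tests. The numbers are not drawn from any real benchmark.
    from math import prod

    results = {
        "System A": [10.0, 100.0],   # e.g., frames per second on test 1, test 2
        "System B": [30.0, 60.0],
    }

    for name, scores in results.items():
        arithmetic = sum(scores) / len(scores)
        geometric = prod(scores) ** (1.0 / len(scores))
        print(f"{name}: arithmetic={arithmetic:.1f}, geometric={geometric:.1f}")

    # Output:
    # System A: arithmetic=55.0, geometric=31.6
    # System B: arithmetic=45.0, geometric=42.4
    # System A "wins" under an arithmetic composite; System B under a geometric one.

Add test weights to the mix and the stakes only rise: whoever chooses the formula and the weights is, in effect, choosing which systems the composite flatters.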

Of course, it's not that simple. The increasingly complex world of computer graphics performance can't be summed up by a "one-size-fits-all" philosophy. Benchmarking organizations such as the GPC Group try to balance the need for accurate performance measurement with the public demand for "quotable" numbers. So, while the GPC Group publishes composite numbers, it encourages users to look at the individual reports when comparing systems. Ideally, users would benchmark their own applications.

It took the Picture-Level Benchmark (PLB) project group more than three years to come up with its composite numbers. The XPC project group took more than a year to develop Xmark93. After more than a year of deliberation, the OPC project group decided on a single composite number for each Viewperf viewset.

In all three situations, the arrival of a composite number was met with gnashing of teeth and more than a few sighs of "there's got to be a better way." Members of the project groups no doubt feel the same kind of regret as film editors who have just reduced a hundred hours of raw footage into the two-hour movie that is supposed to capture the novel perfectly.