Benchmarking Breeds Controversy
Editor's Forum
By its very nature, graphics performance benchmarking is controversial,
especially when you are doing it across different hardware architectures
and APIs. How could there not be controversy? You are taking different
systems with different performance assets and trying to apply a single
criterion to measure performance. And not just any kind of performance,
but graphics performance, arguably one of the most difficult characteristics
of a computer system to measure.
The situation becomes even more heated when you try to take a series
of performance numbers and boil them down to a single number that is
supposed to be the final word on performance measurement. In truth,
everyone knows what a composite number really is: the performance
equivalent of a sound bite (I'll leave it to the reader to supply the
bite/byte pun).
The reason more than a dozen prominent vendors and researchers on the
GPC Group take on this seemingly thankless task is simple: standardized
performance measurement is desperately needed in the industry, by both
vendors and users. But just because it's a good cause doesn't mean
everyone is going to agree on how to go about it. Good intentions
don't necessarily lead to unanimity when it comes down to the dirty
details.
Certainly the GPC Group has had its share of controversy. Compromises
are reached, but the controversy never really goes away; it's endemic
to performance benchmarking. As the benchmarking results in this
publication show, plenty of progress has been made. And no doubt there
will be plenty of progress to report over the coming months. But, as
always, it's likely to be ushered in by controversy.