Benchmark (computing)


In computing, a benchmark is the act of running a computer program, a set of programs, or other operations in order to assess the relative performance of an object, normally by subjecting it to a number of standard tests and trials.
The term benchmark is also commonly applied to the elaborately designed benchmarking programs themselves.
Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating-point performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems.
Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures.
Test suites, by contrast, are intended to assess the correctness of software rather than its performance.

Purpose

As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth.
Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this with specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, such as a hard disk or networking device.
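As a minimal sketch of the synthetic approach, the program below imposes an invented, fixed floating-point workload on the CPU and times it; the workload and iteration count are chosen purely for illustration and correspond to no real benchmark.

    #include <stdio.h>
    #include <time.h>

    /* Invented synthetic workload: one multiply and one add per pass,
       imitating no particular application. */
    static double synthetic_workload(long iterations)
    {
        double acc = 1.0;
        for (long i = 0; i < iterations; i++) {
            acc = acc * 1.000001 + 0.5;
            if (acc > 1e6)
                acc -= 1e6;          /* keep the value bounded */
        }
        return acc;
    }

    int main(void)
    {
        const long n = 100000000L;   /* 100 million passes */
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        double result = synthetic_workload(n);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        /* Printing the result keeps the compiler from deleting the loop. */
        printf("result=%f  %.3f s  %.1f MFLOP/s\n",
               result, secs, 2.0 * n / secs / 1e6);
        return 0;
    }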
Benchmarks are particularly important in CPU design, giving processor architects the ability to measure and make tradeoffs in microarchitectural decisions. For example, if a benchmark extracts the key algorithms of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller snippet on a cycle-accurate simulator can give clues on how to improve performance.
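For illustration, the extracted "key algorithm" is frequently just the application's hottest loop. The dot-product kernel below is a hypothetical example of such a snippet: self-contained and small enough to run on a cycle-accurate simulator in reasonable time.

    /* Hypothetical extracted kernel: the hot inner loop of some larger,
       imagined application, isolated so it can be studied on its own. */
    double dot(const double *x, const double *y, long n)
    {
        double sum = 0.0;
        for (long i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }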
Prior to 2000, computer and microprocessor architects used SPEC to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.
Computer manufacturers are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when RISC and VLIW architectures emphasized the importance of compiler technology as it related to performance. Benchmarks are now regularly used by compiler companies to improve not only their own benchmark scores, but real application performance.
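The flavor of such substitutions can be shown with a hedged example: replacing per-element division by a loop-invariant divisor with multiplication by its reciprocal is mathematically equivalent up to rounding, yet far cheaper on most hardware. This is an illustrative transformation, not the specific one applied to any historical benchmark.

    /* Before: one floating-point divide per element. */
    void scale_div(double *v, long n, double d)
    {
        for (long i = 0; i < n; i++)
            v[i] = v[i] / d;
    }

    /* After: the divide is hoisted out as a reciprocal, leaving one
       multiply per element -- equivalent up to rounding, and much
       faster where divides cost far more than multiplies. */
    void scale_mul(double *v, long n, double d)
    {
        double r = 1.0 / d;
        for (long i = 0; i < n; i++)
            v[i] = v[i] * r;
    }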
CPUs that have many execution units — such as a superscalar CPU, a VLIW CPU, or a reconfigurable computing CPU — typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU.
Given the large number of benchmarks available, a manufacturer can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark.
Manufacturers commonly report only those benchmarks that show their products in the best light. They have also been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing.
Ideally benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.

Challenges

Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions. Interpretation of benchmarking data is also extraordinarily difficult. Much of the difficulty lies in designing benchmarks that exhibit seven vital characteristics:

Relevance: Benchmarks should measure the features that actually matter to the intended use.

Representativeness: Benchmark performance metrics should be broadly accepted by industry and academia.

Equity: All systems should be fairly compared.

Repeatability: Benchmark results can be verified; repeated runs should produce consistent results (a minimal repeatability check is sketched after this list).

Cost-effectiveness: Benchmark tests are economical.

Scalability: Benchmark tests should work across systems with a range of resources, from low-end to high-end.

Transparency: Benchmark metrics should be easy to understand.
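Repeatability, in particular, is usually checked empirically by running the same measurement several times and reporting its spread. A minimal sketch, assuming a placeholder workload() standing in for the code under test:

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    /* Placeholder for the code under test. */
    static void workload(void)
    {
        volatile double acc = 0.0;
        for (long i = 0; i < 10000000L; i++)
            acc += (double)i * 0.5;
    }

    int main(void)
    {
        enum { TRIALS = 10 };
        double times[TRIALS], mean = 0.0, var = 0.0;

        for (int t = 0; t < TRIALS; t++) {
            struct timespec a, b;
            clock_gettime(CLOCK_MONOTONIC, &a);
            workload();
            clock_gettime(CLOCK_MONOTONIC, &b);
            times[t] = (b.tv_sec - a.tv_sec)
                     + (b.tv_nsec - a.tv_nsec) / 1e9;
            mean += times[t];
        }
        mean /= TRIALS;
        for (int t = 0; t < TRIALS; t++)
            var += (times[t] - mean) * (times[t] - mean);
        /* A standard deviation that is large relative to the mean means
           the measurement is too noisy to compare systems with. */
        printf("mean %.4f s  stddev %.4f s\n",
               mean, sqrt(var / (TRIALS - 1)));
        return 0;
    }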

Types of benchmark

* Real program
** word processing software
** CAD tool software
** user's application software
* Component benchmark / microbenchmark
** a core routine: a relatively small and specific piece of code
** measures the performance of a computer's basic components
** may be used for automatic detection of a computer's hardware parameters, such as number of registers, cache size, and memory latency (see the first sketch after this list)
* Kernel
** contains key code, normally abstracted from an actual program
** popular kernels: the Livermore loops and the LINPACK benchmark
** results are represented in MFLOP/s
* Synthetic benchmark
** procedure for programming a synthetic benchmark (see the second sketch after this list):
*** take statistics of all types of operations from many application programs
*** determine the proportion of each operation
*** write a program based on those proportions
** examples: Whetstone and Dhrystone, the first general-purpose industry-standard computer benchmarks; they do not necessarily obtain high scores on modern pipelined computers
* I/O benchmarks
* Database benchmarks
** measure the throughput and response times of database management systems
* Parallel benchmarks
** used on machines with multiple cores and/or processors, or on systems consisting of multiple machines
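To illustrate the automatic hardware-parameter detection mentioned under microbenchmarks above: a pointer-chasing loop over working sets of increasing size shows a jump in time per access roughly where each cache level is exhausted. This is a simplified sketch (no control of alignment, TLBs, or frequency scaling), not a production tool.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static volatile size_t sink;    /* keeps the chased pointer live */

    /* Time one dependent load per step over a working set of `bytes`
       bytes. A jump in ns/access as the working set outgrows a cache
       level hints at that cache's size. */
    static double latency_ns(size_t bytes)
    {
        size_t n = bytes / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));

        /* Build a random single-cycle permutation (Sattolo's algorithm)
           so hardware prefetchers cannot predict the next address. */
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        const long steps = 10000000L;
        size_t p = 0;
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (long s = 0; s < steps; s++)
            p = next[p];             /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &b);
        sink = p;

        free(next);
        double secs = (b.tv_sec - a.tv_sec)
                    + (b.tv_nsec - a.tv_nsec) / 1e9;
        return secs / steps * 1e9;
    }

    int main(void)
    {
        for (size_t kb = 4; kb <= 64 * 1024; kb *= 2)
            printf("%8zu KiB: %6.2f ns/access\n",
                   kb, latency_ns(kb * 1024));
        return 0;
    }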
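The synthetic-benchmark construction procedure above can likewise be sketched in code. The 6:3:1 add/multiply/divide mix below is invented for illustration; a real synthetic benchmark would derive the proportions from profiles of many applications.

    #include <stdio.h>
    #include <time.h>

    /* Invented operation mix, standing in for proportions measured by
       profiling many application programs: 60% add, 30% multiply,
       10% divide. */
    #define ADDS 6
    #define MULS 3
    #define DIVS 1

    int main(void)
    {
        const long rounds = 10000000L;
        double a = 1.5;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long r = 0; r < rounds; r++) {
            for (int i = 0; i < ADDS; i++) a = a + 0.25;
            for (int i = 0; i < MULS; i++) a = a * 1.0000001;
            for (int i = 0; i < DIVS; i++) a = a / 0.999999;
            if (a > 1e9)
                a = 1.5;             /* keep the value in range */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        long ops = rounds * (ADDS + MULS + DIVS);
        /* Printing `a` keeps the compiler from deleting the loops. */
        printf("a=%f  %.1f Mops/s\n", a, ops / secs / 1e6);
        return 0;
    }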

Common benchmarks

Industry standard (audited and verifiable)