Fillrate


The term pixel fillrate refers to the number of pixels a video card can render to the screen and write to video memory in one second; texture fillrate, analogously, refers to the number of texture map elements (texels) a GPU can map to pixels in one second. Pixel fillrates are given in megapixels or gigapixels per second and are obtained by multiplying the number of raster output units (ROPs) by the clock frequency of the graphics processing unit (GPU). Texture fillrate is obtained by multiplying the number of texture mapping units (TMUs) by the GPU clock frequency, and is given in megatexels or gigatexels per second. However, there is no full agreement on how to calculate and report fillrates; another possible method is to multiply the number of pixel pipelines by the clock frequency.
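The multiplications above can be sketched as follows. The unit counts and clock speed are hypothetical values chosen for illustration, not the specifications of any real card:

```python
# Sketch: theoretical fillrate from unit counts and clock frequency.

def pixel_fillrate(rops: int, clock_hz: float) -> float:
    """Theoretical pixel fillrate in pixels per second (ROPs x clock)."""
    return rops * clock_hz

def texture_fillrate(tmus: int, clock_hz: float) -> float:
    """Theoretical texture fillrate in texels per second (TMUs x clock)."""
    return tmus * clock_hz

# Hypothetical GPU: 32 ROPs, 128 TMUs, 1.5 GHz core clock.
clock = 1.5e9
print(pixel_fillrate(32, clock) / 1e9)     # 48.0 gigapixels per second
print(texture_fillrate(128, clock) / 1e9)  # 192.0 gigatexels per second
```

Note that some GPUs clock their shader or texture units separately from the core clock, in which case the appropriate clock domain must be used for each product.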
The results of these multiplications correspond to a theoretical number; the actual fillrate depends on many other factors. In the past, fillrate was used as an indicator of performance by video card manufacturers such as ATI and NVIDIA, but its importance as a measurement of performance has declined as the bottleneck in graphics applications has shifted. For example, today the number and speed of unified shader processing units receive more attention.
Scene complexity can be increased by overdraw, which happens when "an object is drawn to the frame buffer, and then another object is drawn on top of it, covering it up. The time spent drawing the first object was wasted because it isn't visible." When a sequence of scenes is extremely complex, the frame rate for the sequence may drop. When designing graphics-intensive applications, one can determine whether the application is fillrate-limited by seeing whether the frame rate increases dramatically when the application runs at a lower resolution or in a smaller window.
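The effect of overdraw on fillrate demand can be illustrated with a simple back-of-the-envelope estimate. The resolution, frame rate, and overdraw factor below are assumed example values:

```python
# Sketch: pixel fillrate a scene demands, given an average overdraw
# factor (how many times each screen pixel is written per frame).

def required_fillrate(width: int, height: int, fps: float, overdraw: float) -> float:
    """Pixels per second the GPU must write to sustain the target frame rate."""
    return width * height * fps * overdraw

# 1920x1080 at 60 fps, assuming each pixel is written 3 times on average:
demand = required_fillrate(1920, 1080, 60, 3.0)
print(demand / 1e9)  # ~0.37 gigapixels per second
```

Halving the resolution quarters the demand, which is why a large frame-rate jump at lower resolution is a telltale sign that an application is fillrate-limited rather than, say, CPU-limited.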