RIAA equalization


RIAA equalization is a specification for the recording and playback of phonograph records, established by the Recording Industry Association of America. The purposes of the equalization are to permit greater recording times, to improve sound quality, and to reduce the groove damage that would otherwise arise during playback.
The RIAA equalization curve was intended to operate as a de facto global industry standard for records from 1954 onward, but exactly when the change actually took place is difficult to determine.
Before then, especially from 1940, each record company applied its own equalization; over 100 combinations of turnover and rolloff frequencies were in use, the main ones being Columbia-78, Decca-U.S., European, Victor-78, Associated, BBC, NAB, Orthacoustic, World, Columbia LP, FFRR-78 and microgroove, and AES. The obvious consequence was that different reproduction results were obtained if the recording and playback filtering were not matched.

The RIAA curve

RIAA equalization is a form of pre-emphasis on recording and de-emphasis on playback. A recording is made with the low frequencies reduced and the high frequencies boosted, and on playback, the opposite occurs. The net result is a flat frequency response, but with attenuation of high-frequency noise such as hiss and clicks that arise from the recording medium. Reducing the low frequencies also limits the excursions the cutter needs to make when cutting a groove. Groove width is thus reduced, allowing more grooves to fit into a given surface area, permitting longer recording times. This also reduces physical stresses on the stylus, which might otherwise cause distortion or groove damage during playback.
A potential drawback of the system is that rumble from the playback turntable's drive mechanism is amplified by the low-frequency boost that occurs on playback. Players must, therefore, be designed to limit rumble, more so than if RIAA equalization did not occur.
RIAA playback equalization is not a simple low-pass filter. It defines transition points in three places: 75 μs, 318 μs and 3180 μs, which correspond to 2122 Hz, 500 Hz and 50 Hz.
Mathematically, the pre-emphasis transfer function is expressed as follows, where T1 = 3180 μs, T2 = 318 μs, and T3 = 75 μs:

H(s) = ((1 + T1·s)(1 + T3·s)) / (1 + T2·s)

The playback (de-emphasis) transfer function is its reciprocal, with poles at 3180 μs and 75 μs and a zero at 318 μs.
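Since the curve is fully determined by these three time constants, its shape is easy to verify numerically. The short Python sketch below (an illustration, not part of any standard) evaluates the pre-emphasis gain on the jω axis, normalized to 0 dB at 1 kHz as published RIAA tables conventionally are:

```python
import math

# RIAA time constants (seconds): 50 Hz, 500 Hz, and 2122 Hz corners.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def h(f):
    """Pre-emphasis transfer function evaluated at s = j*2*pi*f."""
    s = 1j * 2 * math.pi * f
    return (1 + T1 * s) * (1 + T3 * s) / (1 + T2 * s)

ref = abs(h(1000.0))  # normalize to 0 dB at 1 kHz
for f in (20, 50, 500, 1000, 2122, 10000, 20000):
    print(f"{f:>6} Hz: {20 * math.log10(abs(h(f)) / ref):+7.2f} dB")
# About -19.3 dB at 20 Hz, -2.6 dB at 500 Hz, +13.7 dB at 10 kHz, +19.6 dB at 20 kHz.
```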
Implementing this characteristic is not especially difficult, but is more involved than a simple amplifier. In the past, almost all hi-fi preamplifiers, integrated amplifiers, and receivers had a built-in phono preamplifier with the RIAA characteristic, but it is often omitted in modern designs, due to the gradual obsolescence of vinyl records. Add-on phono preamplifiers with the RIAA equalization curve are available; these adapt a magnetic phono cartridge to an unbalanced −10 dBV consumer line-level RCA input. Some modern turntables feature built-in preamplification to the RIAA standard. Special preamplifiers are also available for the various equalization curves used on pre-1954 records.
Digital audio editors often feature the ability to equalize audio samples using standard and custom equalization curves, removing the need for a dedicated hardware preamplifier when capturing audio with a computer. However, this can add an extra step in processing a sample, and may amplify audio quality deficiencies of the sound card being used to capture the signal.
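As a sketch of how such a software implementation might work, the fragment below derives a digital de-emphasis filter from the analog time constants using the bilinear transform, here with SciPy and an assumed 96 kHz capture rate; the riaa_deemphasize helper is illustrative, not the method of any particular editor. The bilinear transform warps the corner frequencies slightly at low sample rates, which production code would normally compensate for.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 96000.0  # assumed capture sample rate

# Analog RIAA de-emphasis: zero at 318 us, poles at 3180 us and 75 us.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6
b_analog = [T2, 1.0]                    # (1 + T2*s)
a_analog = [T1 * T3, T1 + T3, 1.0]      # (1 + T1*s)(1 + T3*s)

# Map the analog prototype to a digital IIR filter.
b, a = bilinear(b_analog, a_analog, fs)

def riaa_deemphasize(x):
    """Apply RIAA de-emphasis to a flat capture, unity gain at 1 kHz."""
    y = lfilter(b, a, x)
    # Normalize so program level at 1 kHz is preserved.
    z = np.exp(1j * 2 * np.pi * 1000.0 / fs)
    g = abs(np.polyval(b, z) / np.polyval(a, z))
    return y / g

# Example: de-emphasize one second of captured audio (placeholder signal).
capture = np.random.randn(int(fs))
flat = riaa_deemphasize(capture)
```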

History

Origins of pre-emphasis

Equalization practice for electrical recordings dates to the beginning of the art. In 1926, Joseph P. Maxwell and Henry C. Harrison from Bell Telephone Laboratories disclosed that the recording pattern of the Western Electric "rubber line" magnetic disc cutter had a constant-velocity characteristic. This meant that as frequency increased in the treble, recording amplitude decreased. Conversely in the bass, as frequency decreased, recording amplitude increased. Therefore, attenuating the bass frequencies was necessary below about 250 Hz, the bass turnover point, in the amplified microphone signal fed to the recording head. Otherwise, bass modulation became excessive and overcutting took place, with the cutter getting into the next record groove. When played back electrically with a magnetic pickup having a smooth response in the bass region, a complementary boost in amplitude at the bass turnover point was necessary. G. H. Miller in 1934 reported that when complementary boost at the turnover point was used in radio broadcasts of records, the reproduction was more realistic and many of the musical instruments stood out in their true form.
West in 1930 and later P. G. H. Voight showed that the early Wente-style condenser microphones contributed to a 4- to 6-dB midrange brilliance or pre-emphasis in the recording chain. This meant that the electrical recording characteristics of Western Electric licensees such as Columbia Records and Victor Talking Machine Company had a higher amplitude in the midrange region. Brilliance such as this compensated for dullness in many early magnetic pickups having drooping midrange and treble response. As a result, this practice was the empirical beginning of using pre-emphasis above 1,000 Hz in 78 and 33 rpm records, some 29 years before the RIAA curve.
Over the years, a variety of record equalization practices emerged, with no industry standard. For example, in Europe, for many years recordings required playback with a bass turnover setting of 250 to 300 Hz and a treble rolloff at 10,000 Hz ranging from 0 to −5 dB, or more. In the United States, practices varied and a tendency arose to use higher bass turnover frequencies, such as 500 Hz, as well as a greater treble rolloff such as −8.5 dB, and more. The purpose was to record higher modulation levels on the record.

Standardization

Evidence from the early technical literature concerning electrical recording suggests that serious efforts to standardize recording characteristics within an industry did not occur until 1942–1949. Before this time, electrical recording technology from company to company was considered a proprietary art all the way back to the 1925 Western Electric licensed method first used by Columbia and Victor. For example, what Brunswick-Balke-Collender did was different from the practices of Victor.
Broadcasters were faced with having to adapt daily to the varied recording characteristics of many sources: various makers of "home recordings" readily available to the public, European recordings, lateral cut transcriptions, and vertical cut transcriptions. Efforts were started in 1942 to standardize within the National Association of Broadcasters, later known as the National Association of Radio and Television Broadcasters. The NAB, among other items, issued recording standards in 1942 and 1949 for laterally and vertically cut records, principally transcriptions. A number of 78 rpm record producers, as well as early LP makers, also cut their records to the NAB lateral standard.
The lateral-cut NAB curve was remarkably similar to the NBC Orthacoustic curve, which evolved from practices within the National Broadcasting Company since the mid-1930s. Empirically, and not by any formula, the bass end of the audio spectrum below 100 Hz could be boosted somewhat to override system hum and turntable rumble noises. Likewise at the treble end beginning at 1,000 Hz, if audio frequencies were boosted by 16 dB at 10,000 Hz the delicate sibilant sounds of speech and high overtones of musical instruments could be heard despite the high background noise of shellac discs. When the record was played back using a complementary inverse curve, signal-to-noise ratio was improved and the programming sounded more lifelike.
In a related area, around 1940 treble pre-emphasis similar to that used in the NBC Orthacoustic recording curve was first employed by Edwin Howard Armstrong in his system of frequency modulation radio broadcasting. FM radio receivers using Armstrong circuits and treble de-emphasis would render high-quality, wide-range audio output with low noise levels.
After the Columbia LP was released in June 1948, its developers published technical information about the 33 rpm, microgroove, long-playing record. Columbia disclosed a recording characteristic showing that it was like the NAB curve in the treble, but had more bass boost or pre-emphasis below about 150 Hz. The authors disclosed electrical network characteristics for the Columbia LP curve. Nevertheless, the curve was not yet based on mathematical formulae, at least not explicitly.
In 1951, at the beginning of the post-World War II boom in high fidelity, the Audio Engineering Society developed a standard playback curve, intended for use by hi-fi amplifier manufacturers. The reasoning was that if records were engineered to sound good on hi-fi amplifiers using the AES curve, a common standard would effectively follow. This curve was defined by the transition frequencies of audio filters and had a pole at 2.5 kHz and a zero at 400 Hz.
RCA Victor and Columbia were in a "market war" over which recorded format was going to win: the Columbia LP versus the RCA Victor 45 rpm disc. Besides being a battle of disc size and record speed, there was a technical difference in the recording characteristics: RCA Victor was using "New Orthophonic", whereas Columbia was using its own LP curve.
Ultimately, the New Orthophonic curve was disclosed in a publication by R. C. Moyer of RCA Victor in 1953; additional background on this evolution can be found in another article by the same author, published in 1957. He traced the RCA Victor characteristics from the Western Electric "rubber line" recorder of 1925 up to the early 1950s, laying claim to long-held recording practices and explaining the reasons for major changes in the intervening years. The RCA Victor New Orthophonic curve was within the tolerances of the NAB/NARTB, Columbia LP, and AES curves. It eventually became the technical predecessor to the RIAA curve.
Between 1953 and 1956, several standards bodies around the world adopted the same playback curve—identical to the RCA Victor New Orthophonic curve—which became standard throughout the national and international record markets. However, although these standards were all identical, no universal name was used. One of the standards was called simply "RIAA", and it is likely that this name was eventually adopted because it was memorable.
Some niche record cutters may still have been using EQ curves other than the RIAA curve well into the 1970s. As a result, some audio manufacturers today produce phono equalizers with selectable EQ curves, including options for Columbia LP, Decca, CCIR, and TELDEC's Direct Metal Mastering.

The Enhanced RIAA curve

The official RIAA standard defines three time constants, with pre-emphasis rising indefinitely above 75 μs; in practice, an indefinite rise is not possible. When the RIAA equalization standard was written, the inherent bandwidth limitations of the recording equipment and cutting amplifier imposed their own ultimate upper limit on the pre-emphasis characteristic, so no official upper limit was included in the RIAA definition.
Modern systems have far wider potential bandwidth. An essential feature of all cutting amplifiers, including the Neumann cutting amplifiers, is a forcibly imposed high-frequency roll-off above the audio band. This implies two or more time constants in addition to those defined by the RIAA curve. These are not standardized anywhere, but are set by the maker of the cutting amplifier and associated electronics.
The so-called "Enhanced RIAA" curve or "eRIAA" curve attempts to provide complementary correction for these unofficial time constants upon playback.

Background

In 1995, a nonacademic source erroneously suggested that Neumann cutting amplifiers applied a single high-frequency pole at 3.18 μs and that a complementary zero should therefore be included upon playback. However, no such pole exists.
For example, the RIAA pre-emphasis in the popular Neumann SAB 74B equalizer reaches a maximum at 100 kHz, and in addition to this, the circuit also applies a second-order roll-off at 49.9 kHz, implemented by a Butterworth active filter, plus an additional pole at 482 kHz. This cannot be compensated for by a simple zero even if it were necessary, and in any case, other amplifiers will differ. Correction upon playback is not, in fact, required, as it is taken into account at the cutting stage, when manual equalization is applied while monitoring initial cuts on a standard RIAA playback system. Nevertheless, the use of the erroneous zero remains a subject of some debate among amateur enthusiasts.
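For perspective on the magnitude of the disputed correction, a 3.18 μs time constant corresponds to a corner near 50 kHz, so the boost such a zero would apply within the audio band amounts to only fractions of a decibel. A one-line check in plain Python:

```python
import math

T = 3.18e-6  # the disputed "eRIAA" zero; corner at 1/(2*pi*T), about 50 kHz
for f in (10e3, 20e3):
    gain_db = 20 * math.log10(abs(1 + 1j * 2 * math.pi * f * T))
    print(f"{f / 1e3:.0f} kHz: {gain_db:+.2f} dB")
# About +0.17 dB at 10 kHz and +0.64 dB at 20 kHz.
```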
Many common phono preamplifier designs using negative-feedback equalization include an unintentional zero at high frequencies, similar to the zero proposed by the 1995 source mentioned above. This was illustrated, for example, in the seminal 1980 work on RIAA playback equalization by Lipshitz and Jung, though it was noted there as unwanted.
Some phono preamplifiers include additional circuitry to correct this and ensure that the output follows the RIAA curve accurately. In most, however, this is omitted.

IEC RIAA curve

In 1976, an alternative version of the replay curve was proposed by the International Electrotechnical Commission, differing from the RIAA replay curve only in the addition of a pole at 7950 μs (about 20 Hz). The justification was to reduce the subsonic output of the phono amplifier caused by disc warp and turntable rumble.
This so-called IEC amendment to the RIAA curve is not universally seen as desirable, as it introduces considerable amplitude errors and, of more concern, phase errors into the low-frequency response during playback. The simple first-order roll-off also provides only very mild reduction of rumble, and many manufacturers consider that turntable, arm, and cartridge combinations should be of sufficient quality for such problems not to arise.
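The mildness of that roll-off is easy to quantify. Assuming the amendment is realized as a first-order high-pass with the 7950 μs time constant (corner near 20 Hz), the attenuation at typical warp and rumble frequencies works out as follows:

```python
import math

T = 7950e-6  # IEC time constant; corner at 1/(2*pi*T), about 20.02 Hz
for f in (4, 8, 16):  # representative disc-warp / rumble frequencies in Hz
    wT = 2 * math.pi * f * T
    atten_db = 20 * math.log10(wT / math.sqrt(1 + wT * wT))  # |sT / (1 + sT)|
    print(f"{f:>2} Hz: {atten_db:+.1f} dB")
# About -14.2 dB at 4 Hz, -8.6 dB at 8 Hz, and -4.1 dB at 16 Hz.
```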
Some manufacturers follow the IEC standard, others do not, and the rest make the IEC-RIAA option user-selectable; the matter was still being debated some 35 years later. The IEC amendment was finally withdrawn in June 2009.

TELDEC/DIN curve

Telefunken and Decca founded a joint record company, Teldec, which used a characteristic that was also proposed for the German DIN standards in July 1957. Incidentally, this proposal defined exactly the same characteristic as the intermediate CCIR Recommendation No. 208 of 1956, which remained valid until about mid-1959. The DIN proposal was nevertheless adopted in April 1959, at a time when the RIAA characteristic was already well established, and it remained in effect until November 1962, when DIN finally adopted the RIAA characteristic. The extent to which the Teldec characteristic was actually used is unclear.
The time constants of the Teldec characteristic are 3180 μs, 318 μs, and 50 μs, differing from the corresponding RIAA values only in the third. Although the Teldec characteristic is close to the RIAA characteristic, it is different enough that recordings cut with the former sound a little dull when played back with the latter.
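The size of that dullness can be estimated directly from the mismatched third time constant: a record cut with 50 μs pre-emphasis but replayed through 75 μs RIAA de-emphasis loses treble by the ratio of the two corner terms. A quick check in Python:

```python
import math

def corner(f, T):
    """Magnitude of the (1 + j*2*pi*f*T) corner term."""
    return abs(1 + 1j * 2 * math.pi * f * T)

# Error from playing a 50 us (Teldec) cut through 75 us (RIAA) de-emphasis.
for f in (5e3, 10e3, 20e3):
    err_db = 20 * math.log10(corner(f, 50e-6) / corner(f, 75e-6))
    print(f"{f / 1e3:>4.0f} kHz: {err_db:+.1f} dB")
# Roughly -2.8 dB at 5 kHz, -3.3 dB at 10 kHz, and -3.5 dB at 20 kHz.
```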