Hanbury Brown and Twiss effect
In physics, the Hanbury Brown and Twiss effect is any of a variety of correlation and anti-correlation effects in the intensities received by two detectors from a beam of particles. HBT effects can generally be attributed to the wave–particle duality of the beam, and the results of a given experiment depend on whether the beam is composed of fermions or bosons. Devices which use the effect are commonly called intensity interferometers and were originally used in astronomy, although they are also heavily used in the field of quantum optics.
History
In 1954, Robert Hanbury Brown and Richard Q. Twiss introduced the intensity interferometer concept to radio astronomy for measuring the tiny angular size of stars, suggesting that it might work with visible light as well. Soon after, they successfully tested that suggestion: in 1956 they published an in-lab experimental mockup using blue light from a mercury-vapor lamp, and later in the same year they applied the technique to measuring the size of Sirius. In the latter experiment, two photomultiplier tubes, separated by a few meters, were aimed at the star using crude telescopes, and a correlation was observed between the two fluctuating intensities. Just as in the radio studies, the correlation dropped away as they increased the separation, and they used this information to determine the apparent angular size of Sirius. The quantum-theoretical description of such experiments was later developed by Roy J. Glauber, who was awarded the 2005 Nobel Prize in Physics "for his contribution to the quantum theory of optical coherence".
This result was met with much skepticism in the physics community. The radio astronomy result was justified by Maxwell's equations, but there were concerns that the effect should break down at optical wavelengths, since the light would be quantised into a relatively small number of photons that induce discrete photoelectrons in the detectors. Many physicists worried that the correlation was inconsistent with the laws of thermodynamics. Some even claimed that the effect violated the uncertainty principle. Hanbury Brown and Twiss resolved the dispute in a neat series of articles that demonstrated, first, that wave transmission in quantum optics had exactly the same mathematical form as Maxwell's equations, albeit with an additional noise term due to quantisation at the detector, and second, that according to Maxwell's equations, intensity interferometry should work. Others, such as Edward Mills Purcell, immediately supported the technique, pointing out that the clumping of bosons was simply a manifestation of an effect already known in statistical mechanics. After a number of experiments, the whole physics community agreed that the observed effect was real.
The original experiment used the fact that two bosons tend to arrive at two separate detectors at the same time. Morgan and Mandel used a thermal photon source to create a dim beam of photons and observed the tendency of the photons to arrive at the same time on a single detector. Both of these effects used the wave nature of light to create a correlation in arrival time. If a single photon beam is split into two beams, the particle nature of light requires that each photon be observed at only a single detector, so an anti-correlation between the detectors is expected; this was observed in 1977 by H. Jeff Kimble. Finally, bosons have a tendency to clump together, giving rise to Bose–Einstein correlations, while fermions, due to the Pauli exclusion principle, tend to spread apart, leading to Fermi–Dirac correlations. Bose–Einstein correlations have been observed between pions, kaons and photons, and Fermi–Dirac correlations between protons, neutrons and electrons. For a general introduction to this field, see the textbook on Bose–Einstein correlations by Richard M. Weiner. A difference in the repulsion of Bose–Einstein condensates in the "trap-and-free-fall" analogue of the HBT effect affects such comparisons.
Also, in the field of particle physics, Goldhaber et al. performed an experiment in 1959 at Berkeley and found an unexpected angular correlation among identical pions, discovering the ρ0 resonance by means of its decay ρ0 → π+π−. From then on, the HBT technique started to be used by the heavy-ion community to determine the space–time dimensions of the particle emission source in heavy-ion collisions. For recent developments in this field, see for example the review article by Lisa.
Wave mechanics
The HBT effect can, in fact, be predicted solely by treating the incident electromagnetic radiation as a classical wave. Suppose we have a monochromatic wave with frequency $\omega$ incident on two detectors, with an amplitude $E(t)$ that varies on timescales slower than the wave period $2\pi/\omega$. Since the detectors are separated, say the second detector receives the signal delayed by a time $\tau$, or equivalently, shifted in phase by $\phi = \omega\tau$; that is,

$$E_1(t) = E(t)\sin(\omega t),$$
$$E_2(t) = E(t-\tau)\sin(\omega t - \phi).$$

The intensity recorded by each detector is the square of the wave amplitude, averaged over a timescale that is long compared to the wave period but short compared to the fluctuations in $E(t)$:

$$i_1(t) = \overline{E_1(t)^2} = \overline{E(t)^2\sin^2(\omega t)} = \tfrac{1}{2}E(t)^2,$$
$$i_2(t) = \overline{E_2(t)^2} = \overline{E(t-\tau)^2\sin^2(\omega t - \phi)} = \tfrac{1}{2}E(t-\tau)^2,$$

where the overline indicates this time averaging. For wave frequencies above a few terahertz, such a time averaging is unavoidable, since detectors such as photodiodes and photomultiplier tubes cannot produce photocurrents that vary on such short timescales.

The correlation function $\langle i_1 i_2\rangle(\tau)$ of these time-averaged intensities can then be computed:

$$\langle i_1 i_2\rangle(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_0^T i_1(t)\,i_2(t)\,\mathrm{d}t = \lim_{T\to\infty}\frac{1}{T}\int_0^T \tfrac{1}{4}E(t)^2 E(t-\tau)^2\,\mathrm{d}t.$$

Most modern schemes actually measure the correlation in intensity fluctuations at the two detectors, but it is not too difficult to see that if the intensities are correlated, then the fluctuations $\Delta i = i - \langle i\rangle$, where $\langle i\rangle$ is the average intensity, ought to be correlated, since

$$\langle \Delta i_1\,\Delta i_2\rangle = \big\langle (i_1 - \langle i_1\rangle)(i_2 - \langle i_2\rangle)\big\rangle = \langle i_1 i_2\rangle - \langle i_1\rangle\langle i_2\rangle.$$

In the particular case that $E(t)$ consists mainly of a steady field $E_0$ with a small sinusoidally varying component $\delta E\,\sin(\Omega t)$, the time-averaged intensities are

$$i_1(t) = \tfrac{1}{2}E_0^2\left[1 + 2\frac{\delta E}{E_0}\sin(\Omega t)\right] + \mathcal{O}(\delta E^2),$$
$$i_2(t) = \tfrac{1}{2}E_0^2\left[1 + 2\frac{\delta E}{E_0}\sin(\Omega t - \Phi)\right] + \mathcal{O}(\delta E^2),$$

with $\Phi = \Omega\tau$, and $\mathcal{O}(\delta E^2)$ indicates terms proportional to $\delta E^2$, which are small and may be ignored.

The correlation function of these two intensities is then

$$\langle \Delta i_1\,\Delta i_2\rangle(\tau) = \lim_{T\to\infty}\frac{\delta E^2 E_0^2}{T}\int_0^T \sin(\Omega t)\sin(\Omega t - \Phi)\,\mathrm{d}t = \tfrac{1}{2}\,\delta E^2 E_0^2\cos(\Omega\tau),$$

showing a sinusoidal dependence on the delay $\tau$ between the two detectors.
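This final result is easy to check numerically. The following is a minimal Python sketch, under assumed illustrative parameters (the field strength, modulation depth, modulation frequency and sampling grid below are choices made for the example, not values from the text): it builds the two time-averaged intensities for a steady field with a small sinusoidal component, forms their fluctuations, and compares the averaged product with the predicted value $\tfrac{1}{2}\,\delta E^2 E_0^2\cos(\Omega\tau)$.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not values from the text):
# a steady field E0 with a small, slow sinusoidal modulation of depth dE and
# angular frequency Omega, sampled over many modulation periods.
E0, dE, Omega = 1.0, 0.05, 2 * np.pi * 1.0e3      # modulation frequency in rad/s
t = np.linspace(0.0, 1.0, 2_000_000)              # 1 s window ~ 1000 modulation periods

def averaged_intensity(delay):
    """Time-averaged intensity i(t) = (1/2) E(t - delay)^2 at a detector."""
    E = E0 + dE * np.sin(Omega * (t - delay))
    return 0.5 * E ** 2

i1 = averaged_intensity(0.0)                      # first detector, no delay
for tau in (0.0, 0.25e-3, 0.5e-3):                # delays of the second detector, in s
    i2 = averaged_intensity(tau)
    di1, di2 = i1 - i1.mean(), i2 - i2.mean()     # intensity fluctuations
    measured = np.mean(di1 * di2)                 # numerical <di1 di2>
    predicted = 0.5 * dE**2 * E0**2 * np.cos(Omega * tau)
    print(f"tau = {tau:.2e} s   measured = {measured:+.3e}   predicted = {predicted:+.3e}")
```

For these example values the measured and predicted correlations agree to within the small terms of order $\delta E^2$ neglected in the derivation.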
Quantum interpretation
The above discussion makes it clear that the Hanbury Brown and Twiss effect can be entirely described by classical optics. The quantum description of the effect is less intuitive: if one supposes that a thermal or chaotic light source such as a star randomly emits photons, then it is not obvious how the photons "know" that they should arrive at a detector in a correlated way. A simple argument suggested by Ugo Fano captures the essence of the quantum explanation. Consider two points $a$ and $b$ in a source that emit photons detected by two detectors $A$ and $B$, as in the diagram. A joint detection takes place either when the photon emitted by $a$ is detected by $A$ and the photon emitted by $b$ is detected by $B$, or when $a$'s photon is detected by $B$ and $b$'s by $A$. The quantum mechanical probability amplitudes for these two possibilities are denoted by $\langle A|a\rangle\langle B|b\rangle$ and $\langle A|b\rangle\langle B|a\rangle$, respectively. If the photons are indistinguishable, the two amplitudes interfere constructively to give a joint detection probability greater than that for two independent events. The sum over all possible pairs $a$, $b$ in the source washes out the interference unless the distance between the detectors is sufficiently small.
Fano's explanation nicely illustrates the necessity of considering two-particle amplitudes, which are not as intuitive as the more familiar single-particle amplitudes used to interpret most interference effects. This may help to explain why some physicists in the 1950s had difficulty accepting the Hanbury Brown and Twiss result. But the quantum approach is more than just a fancy way to reproduce the classical result: if the photons are replaced by identical fermions such as electrons, the antisymmetry of wave functions under exchange of particles renders the interference destructive, leading to zero joint detection probability for small detector separations. This effect is referred to as antibunching of fermions. The above treatment also explains photon antibunching: if the source consists of a single atom, which can only emit one photon at a time, simultaneous detection in two closely spaced detectors is clearly impossible. Antibunching, whether of bosons or of fermions, has no classical wave analog.
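Fano's two-amplitude argument can also be illustrated with a short numerical sketch. In the Python example below, the one-dimensional geometry, wavelength, and source and detector separations are all assumptions made purely for illustration; the joint detection probability is computed as $|\langle A|a\rangle\langle B|b\rangle \pm \langle A|b\rangle\langle B|a\rangle|^2$, with the plus sign for bosons and the minus sign for fermions. At zero detector separation the bosonic probability is enhanced (bunching) while the fermionic probability vanishes (antibunching), and the relative phase of the two amplitudes, and hence the correlation, varies as the separation is increased.

```python
import numpy as np

# Illustrative one-dimensional geometry (all values are assumptions for this sketch):
# two source points a, b separated by d_source, and two detectors A, B a distance L
# away, separated by d_det. Amplitudes are spherical waves: a phase and a 1/r factor.
wavelength = 500e-9                  # metres (assumed visible light)
k = 2 * np.pi / wavelength           # wavenumber
L = 10.0                             # source-to-detector distance, metres
d_source = 1e-3                      # separation of the source points, metres

def amplitude(x_src, x_det):
    """Single-photon amplitude for emission at x_src and detection at x_det."""
    r = np.sqrt(L**2 + (x_det - x_src) ** 2)
    return np.exp(1j * k * r) / r

def joint_probability(d_det, sign):
    """|<A|a><B|b> + sign * <A|b><B|a>|^2 : sign=+1 for bosons, -1 for fermions."""
    xa, xb = -d_source / 2, +d_source / 2        # source points a and b
    xA, xB = -d_det / 2, +d_det / 2              # detectors A and B
    amp = (amplitude(xa, xA) * amplitude(xb, xB)
           + sign * amplitude(xb, xA) * amplitude(xa, xB))
    return np.abs(amp) ** 2

# The fringe scale is roughly wavelength * L / d_source = 5 mm for these numbers.
for d in (0.0, 1e-3, 2.5e-3, 5e-3):              # detector separations, metres
    p_boson = joint_probability(d, +1)           # amplitudes add: bunching
    p_fermion = joint_probability(d, -1)         # amplitudes subtract: antibunching
    print(f"d = {d*1e3:4.1f} mm   boson ~ {p_boson:.3e}   fermion ~ {p_fermion:.3e}")
```

In a real chaotic source the sum over many pairs of emission points washes out this oscillation once the detector separation exceeds the fringe scale, which is what allows the correlation length to be used as a measure of the source's angular size.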
From the point of view of quantum optics, the HBT effect was important because it led physicists to apply quantum electrodynamics to new situations, many of which had never been experimentally studied and in which classical and quantum predictions differ.