Auralization


Auralization is a procedure for modelling and simulating the experience of acoustic phenomena rendered as a soundfield in a virtualized space. It is useful for configuring the soundscape of architectural structures, concert venues, and public spaces, as well as for creating coherent sound environments within virtual immersion systems.

History

The English term auralization was first used by Kleiner et al. in a 1991 article in the Journal of the Audio Engineering Society (AES).
Increasing computational power had enabled the development of the first acoustic simulation software towards the end of the 1960s.

Principles

Auralizations are experienced through systems that render virtual acoustic models. An acoustic event is recorded 'dry' (without reverberation) and projected into a model of an acoustic space, the characteristics of which are captured by sampling its impulse response.
Once the impulse response has been determined, the soundfield in the target environment is simulated by convolving the dry recording with it:
the result is heard as the sound would be if emitted in that acoustic space.
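The convolution step can be sketched as follows; this is a minimal illustration using NumPy, with a toy two-tap impulse response standing in for a measured room IR (both the signal and the IR are assumed mono and at the same sample rate):

```python
import numpy as np

def auralize(dry_signal: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a 'dry' recording with a room impulse response (IR)."""
    wet = np.convolve(dry_signal, impulse_response)
    # Normalize to avoid clipping when writing the result to an audio file.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: a unit click through a two-tap "room"
# (direct sound plus one echo at half amplitude, two samples later).
dry = np.array([1.0, 0.0, 0.0])
ir = np.array([1.0, 0.0, 0.5])
wet = auralize(dry, ir)
```

In practice the impulse response would be a measured or simulated array tens of thousands of samples long, and the convolution would typically be performed in the frequency domain (e.g. with `scipy.signal.fftconvolve`) for efficiency.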

Binaurality

For auralizations to be perceived as realistic, it is critical to emulate human hearing with respect to the position and orientation of the listener's head relative to the sound sources.
For the impulse-response data to be convolved convincingly, the acoustic events are captured either with a dummy head, using a microphone on each side to record an emulation of sound arriving at the locations of the human ears, or with an ambisonic microphone array whose output is mixed down to a binaural signal.
Head-related transfer function (HRTF) datasets can simplify the process: a monaural IR is measured or simulated and the audio content is convolved with it; then, when rendering the experience, the transfer function corresponding to the orientation of the listener's head is applied to simulate the corresponding spatial emanation of sound.
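The binaural rendering described above can be sketched as a pair of convolutions, one per ear. This is a hedged illustration: the two impulse responses here are hypothetical two-sample stand-ins for real dummy-head or HRTF-filtered IRs, chosen so the right ear receives the sound later and quieter, mimicking interaural time and level differences:

```python
import numpy as np

def binaural_auralize(dry: np.ndarray,
                      ir_left: np.ndarray,
                      ir_right: np.ndarray) -> np.ndarray:
    """Render a binaural signal by convolving a mono source with a
    left/right impulse-response pair (e.g. from a dummy head)."""
    left = np.convolve(dry, ir_left)
    right = np.convolve(dry, ir_right)
    return np.stack([left, right])  # shape (2, n): a stereo pair for headphones

# Hypothetical IR pair (same length so the channels align):
# the right ear hears the source one sample later, at reduced level.
dry = np.array([1.0, 0.5])
ir_l = np.array([1.0, 0.0])
ir_r = np.array([0.0, 0.8])
stereo = binaural_auralize(dry, ir_l, ir_r)
```

In a real system one such IR pair exists per source direction, and a head tracker selects (or interpolates) the pair matching the listener's current head orientation before convolving.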