The visual system provides high-resolution, three-dimensional images of
distant objects, but it has several limitations. Vision requires a direct line
of sight and sufficient ambient light to illuminate the target. Furthermore,
our eyes can view only a small fraction of the environment at one time.
Fortunately, our auditory system compensates for these inadequacies of vision.
Sound diffracts around obstacles far more readily than light, so we can hear
sources that are hidden from view. Hearing does not require illumination, so
we can hear in the dark.
The ability to identify the location of a sound source is called
auditory localization. Technologies that reproduce the sounds of a real
environment through an artificially created one, by exploiting human
auditory localization, are known as virtual auditory displays (VADs) or,
sometimes, 3D sound technology. 3D sound technologies are developed for
two-channel or multi-channel audio systems to provide a full, immersive sound
field with directional cues that mimic the listener's own auditory
localization cues.
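As an illustration of how a two-channel VAD synthesizes such directional cues, the sketch below convolves a mono signal with a pair of head-related impulse responses (HRIRs), one per ear. The HRIRs here are toy assumptions (a direct, louder path to the near ear and a delayed, attenuated path to the far ear), not measured data, and the function name is hypothetical:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the direction the HRIR pair was measured for
    by filtering the signal separately for each ear."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

# Toy example: a source on the listener's left reaches the right ear
# later (interaural time difference) and quieter (interaural level
# difference) than the left ear.
fs = 44100
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz tone
hrir_l = np.zeros(64); hrir_l[0] = 1.0    # direct path: no delay, full level
hrir_r = np.zeros(64); hrir_r[20] = 0.5   # shadowed path: delayed, attenuated
stereo = render_binaural(mono, hrir_l, hrir_r)
```

Playing `stereo` over headphones would yield a source displaced toward the left; real systems use dense sets of measured HRIRs rather than hand-built impulses.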
We are intensively studying these 3D sound technologies, including the
characteristics of head-related transfer functions (HRTFs), HRTF
customization, HRTF interpolation, and two-channel and multi-channel audio
systems.
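One of the topics above, HRTF interpolation, estimates a response for a direction that was never measured from neighboring measured directions. A minimal sketch of linear interpolation between two HRIRs measured at nearby azimuths (the function and azimuth values are illustrative assumptions):

```python
import numpy as np

def interp_hrir(hrir_a, hrir_b, az_a, az_b, az):
    """Linearly blend two HRIRs measured at azimuths az_a and az_b
    to approximate the HRIR at an intermediate azimuth az."""
    w = (az - az_a) / (az_b - az_a)  # 0 at az_a, 1 at az_b
    return (1 - w) * hrir_a + w * hrir_b

# Toy HRIRs measured at 0 and 10 degrees; query the midpoint at 5 degrees.
hrir_a = np.zeros(64); hrir_a[0] = 1.0
hrir_b = np.zeros(64); hrir_b[0] = 0.5
mid = interp_hrir(hrir_a, hrir_b, 0.0, 10.0, 5.0)
```

Naive time-domain blending smears interaural delays when the two HRIRs are misaligned, which is why practical systems often interpolate minimum-phase components and delays separately; the sketch only shows the basic idea.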