More and more effort is being made to characterize the soundscapes that surround us so that we can design more attractive and immersive experiences. This review article focuses on the challenges and opportunities surrounding the perception of sound, with a particular focus on the perception of spatial sound in a virtual reality (VR) urban landscape. We review how research on temporal aspects has recently been extended to the evaluation of spatial factors when designing soundscapes. In particular, we analyze the key findings on the human capacity to locate and distinguish spatial sound signals for different technical configurations.
We highlight studies carried out in real and virtual environments to evaluate the perception of spatial sound. We conclude this review by highlighting the opportunities offered by virtual reality technology and the outstanding questions for designers of virtual soundscapes, especially in light of advances in spatial sound stimulation.

[Figure caption fragment: If the participant reached the correct speaker, the sound stopped, promoting a sense of agency over the hearing change. (C, D) Absolute errors for the naming (black) and reaching (gray) groups when the target sound came from the left (plugged) side (C) and the right side (D) of the space.]
They demonstrated that adding only the sound of vegetation had a marginal effect, but that adding the sound of water fountains, or a combination of the visual and sound aspects of water fountains, had a large positive effect on both the pleasantness and eventfulness ratings of the soundscape.

We analyzed the literature on the spatial perception of sound, in particular studies that use fixed loudspeaker arrays or headphone presentation. This setup allows total control over the visual environment (in this case, an empty room), over cues to the position of the sound (here, a visible array of virtual speakers), and over feedback (here, a blinking speaker shown after erroneous answers). Absolute errors (calculated in each trial as the difference in degrees between the real and reported azimuthal position of the sound) were entered into an analysis of variance (ANOVA) with TARGET POSITION (17 positions, from −80° to +80°) and LISTENING CONDITION (binaural, monaural) as within-participant variables, and GROUP (reaching, naming) as a between-participants variable. Compared to the static listening condition, active listening improved sound localization even over a very short period, confirming the importance of head movements in relearning the correspondences between sound and space.
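As a minimal illustration of the error measure described above, the per-trial absolute error is simply the unsigned difference, in degrees, between the real and reported azimuthal positions. This is an assumed sketch for clarity, not the authors' own analysis code:

```python
def abs_azimuth_error(real_deg: float, reported_deg: float) -> float:
    """Per-trial absolute localization error in degrees.

    Targets here span -80 to +80 degrees of azimuth, so no circular
    wrap-around correction is needed for this range.
    """
    return abs(real_deg - reported_deg)

# Example: target at -80 degrees (far left), listener reports -60 degrees.
print(abs_azimuth_error(-80, -60))  # -> 20
```

These per-trial errors would then be averaged per participant and condition before entering the ANOVA described above.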
These findings highlight the benefits of active listening approaches for relearning the correspondences between sound and space. According to the Environmental Noise Directive (END), unwanted sound, which had been a passively accepted aspect of Western societies since the Industrial Revolution, now had to be actively managed, even outside the workplace, to improve the well-being of citizens. As shown in Table 1, assumptions about the listener, the number of virtual objects producing sound, and the amount of training received are also crucial. These authors created a virtual reproduction of a Neapolitan square (Piazza Vittoria) using 360° images (presented through a virtual reality headset) and sound recordings. All of these are examples of "active listening", a way of listening in which people can freely move their heads and bodies to interact with sound sources, which constitutes a distinctive feature of everyday hearing conditions.
Equally important is understanding the positive role of virtual reality training, which to date has not been discussed in real-world evaluations of the soundscape.