Immersive and Interactive Audio
Immersive audio systems are ubiquitous, ranging from macro-systems installed in cinemas, theatres and concert halls to micro-systems for domestic and in-car entertainment, VR/AR and mobile platforms. New developments in human-computer interaction, in particular head and body motion tracking and artificial intelligence, are paving the way for adaptive and intelligent audio technologies that promote audio personalisation and heightened levels of immersion.
Our research looks at how interactive audio systems can use non-tactile data, such as listener location, orientation, gestural control and even biometric feedback like heart rate or skin conductance, to intelligently adjust immersive sound output. This work complements our spatial audio research on binaural measurement, analysis and modelling, as well as Ambisonic decoding for immersive and interactive technologies. Such systems offer new creative possibilities for a diverse range of applications, from virtual and augmented reality through to automotive audio and beyond.
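As a minimal illustration of how head-tracking data can drive spatial audio rendering (a hypothetical sketch, not the lab's implementation), a first-order Ambisonic sound field can be counter-rotated by the listener's head yaw so that the scene stays fixed in the world while the head turns. The channel ordering (ACN: W, Y, Z, X) and function name below are assumptions for illustration:

```python
import numpy as np

def rotate_foa_yaw(foa, yaw_rad):
    """Counter-rotate a first-order Ambisonic signal (ACN order: W, Y, Z, X)
    by the listener's head yaw, so the rendered scene stays world-locked.

    foa: array of shape (4, n_samples)
    yaw_rad: head yaw in radians (positive = listener turns counter-clockwise)
    """
    w, y, z, x = foa
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # Only the horizontal components rotate; W (omni) and Z (height) pass through.
    x_rot = c * x + s * y
    y_rot = -s * x + c * y
    return np.stack([w, y_rot, z, x_rot])

# Example: a plane wave encoded at 90 degrees azimuth (directly to the left,
# SN3D: W=1, Y=sin(phi), Z=0, X=cos(phi)); after a 90-degree head turn to the
# left, the source should appear dead ahead (X = 1, Y = 0).
phi = np.pi / 2
foa = np.array([[1.0], [np.sin(phi)], [0.0], [np.cos(phi)]])
rotated = rotate_foa_yaw(foa, np.pi / 2)
```

In a real-time system the yaw angle would be refreshed from the tracker every audio block, with the rotation applied before binaural or loudspeaker decoding.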
Research lead: Gavin Kearney.
Noise in our everyday environment can have a significant impact on our health, wellbeing, and productivity. Considerable effort is made to minimise these effects when designing buildings, developing major infrastructure projects, or planning urban spaces.
Our research in Environmental Soundscapes aims to develop auralisation solutions as part of the environmental acoustic design and evaluation process. We are developing methods for modelling and simulating sound propagation in large, open outdoor spaces, and for assessing more rigorously how these rendered soundscapes are perceived. Two areas of more recent work are Virtual Reality capture and rendering of sound environments, and non-invasive biometric assessment of listeners' responses to these immersive scenes. Other research projects include automated identification of urban soundscapes, objective soundscape assessment, acoustic identification of vehicles and the development of biotelemetry systems.
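A common starting point for modelling propagation in large, open outdoor spaces is spherical spreading combined with a frequency-dependent air-absorption term. The sketch below is illustrative only (the function name and coefficient values are assumptions, not the lab's models):

```python
import math

def spl_at_distance(spl_ref, r_ref, r, absorption_db_per_m=0.0):
    """Sound pressure level (dB) at distance r from a point source,
    given a reference level spl_ref measured at distance r_ref.

    Combines spherical spreading loss (-6 dB per doubling of distance)
    with a linear atmospheric-absorption term (dB per metre), which in
    practice would be evaluated per frequency band.
    """
    spreading = 20.0 * math.log10(r / r_ref)
    absorption = absorption_db_per_m * (r - r_ref)
    return spl_ref - spreading - absorption

# Example: 80 dB at 1 m, heard at 100 m with 0.005 dB/m absorption
# -> 80 - 40 (spreading) - 0.495 (absorption), roughly 39.5 dB.
level = spl_at_distance(80.0, 1.0, 100.0, absorption_db_per_m=0.005)
```

Full outdoor auralisation also has to account for ground reflection, barriers, wind and temperature gradients, which is where the more detailed simulation methods described above come in.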
Research lead: Damian Murphy.
The complexities of human voice production and perception, in both speaking and singing, are at the centre of our work in this area. Applying engineering techniques including algorithm design, MRI and electrolaryngography, we investigate voice production, analysis and perception.
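Voice analysis of the kind described typically begins with an estimate of the fundamental frequency (F0) of phonation. A minimal autocorrelation-based sketch (the function name and search-range defaults are illustrative, not the lab's tools):

```python
import numpy as np

def estimate_f0(signal, fs, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) from a voiced frame using
    the peak of the autocorrelation within a plausible pitch range."""
    x = signal - np.mean(signal)
    # Keep non-negative lags only.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(fs / fmax)          # shortest period of interest
    hi = int(fs / fmin)          # longest period of interest
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return fs / lag

# Example: a synthetic voiced tone at 120 Hz.
fs = 16000
t = np.arange(int(0.2 * fs)) / fs
tone = np.sin(2 * np.pi * 120.0 * t)
f0 = estimate_f0(tone, fs)
```

Real voice signals would be analysed frame by frame with voicing detection and interpolation around the peak; electrolaryngograph waveforms offer a more direct route to the same quantity.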
We place an emphasis on interdisciplinary approaches and collaboration to maximise impact in the areas of solo and ensemble singing performance in real and virtual environments, infant language development, vocal tract modelling for speech, and understanding and improving the health and wellbeing benefits of singing. We also host the York Centre for Singing Science.
Health and Wellbeing
Across the University of York, a key research theme is understanding how our research activities can have an impact on people’s health and wellbeing. For the AudioLab, this includes understanding the positive impact that singing, particularly in communal settings, can have on physical and mental health. The theme also connects to our work investigating environmental sound, including the impact of noise on sleep patterns and the emotional effects of exposure to environmental soundscapes.