Research Themes

Immersive and Interactive Audio

Immersive audio systems are ubiquitous, ranging from macro-systems installed in cinemas, theatres and concert halls to micro-systems for domestic and in-car entertainment, VR/AR and mobile platforms. New developments in human-computer interaction, in particular head and body motion tracking and artificial intelligence, are paving the way for adaptive and intelligent audio technologies that promote audio personalisation and heightened levels of immersion.

Our research investigates how interactive audio systems can use non-tactile data, such as listener location, orientation and gestural control, as well as biometric feedback such as heart rate or skin conductance, to intelligently adjust immersive sound output. This synergises with our spatial audio research on binaural measurement, analysis and modelling, as well as Ambisonic decoding for immersive and interactive technologies. Such systems offer new creative possibilities for a diverse range of applications, from virtual and augmented reality through to automotive audio and beyond.
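As a simple illustration of how listener orientation can drive an immersive renderer, the sketch below counter-rotates a first-order Ambisonic (B-format) frame by the tracked head yaw, so that the rendered scene stays fixed in the world as the listener turns. This is a minimal, hypothetical example (ACN channel ordering, yaw only), not the AudioLab's implementation; full systems rotate higher-order sound fields in three axes.

```python
import numpy as np

def rotate_foa_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate a first-order Ambisonic frame by the listener's
    head yaw so the scene stays fixed in world space.
    Yaw is positive for a leftward (counter-clockwise) head turn."""
    c, s = np.cos(-head_yaw_rad), np.sin(-head_yaw_rad)
    # A yaw rotation mixes only the horizontal components X and Y;
    # W (omnidirectional) and Z (height) are unaffected.
    x_r = c * x - s * y
    y_r = s * x + c * y
    return w, x_r, y_r, z

# A source straight ahead: X = cos(0) = 1, Y = sin(0) = 0.
w, x, y, z = 1.0, 1.0, 0.0, 0.0
# Listener turns 90 degrees left: the source should now be heard
# to the listener's right (X' = 0, Y' = -1).
w2, x2, y2, z2 = rotate_foa_yaw(w, x, y, z, np.pi / 2)
```

In a real-time system this rotation runs per audio block, fed by the latest tracker pose, before binaural decoding.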

Research lead: Gavin Kearney.

Environmental Soundscapes

Noise in our everyday environment can have a significant impact on our health, wellbeing, and productivity. Considerable effort is made to minimise these effects when designing buildings, developing major infrastructure projects or planning urban spaces.

Our research in Environmental Soundscapes aims to develop auralisation solutions as part of the environmental acoustic design and evaluation process. We are developing methods for modelling and simulating sound propagation in large, open, outdoor spaces, and for assessing more carefully how listeners perceive these rendered soundscapes. Virtual Reality capture and rendering of sound environments, together with non-invasive biometric assessment of these immersive scenes, are two areas of more recent work. Other research projects include automated identification of urban soundscapes, objective soundscape assessment, acoustic identification of vehicles and the development of biotelemetry systems.
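To give a feel for the kind of calculation underlying outdoor propagation modelling, the sketch below applies the standard free-field point-source relation: level falls by 6 dB per doubling of distance from spherical spreading, with a simple linear term approximating atmospheric absorption. This is a deliberately simplified model for illustration; real environmental predictions must also account for ground reflection, barriers, terrain and meteorological effects.

```python
import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m, atm_absorption_db_per_m=0.0):
    """Free-field sound pressure level of a point source at distance r_m,
    given a reference level measured at r_ref_m.
    Spherical spreading: -20 * log10(r / r_ref) dB.
    Atmospheric absorption: an optional linear dB-per-metre term."""
    spreading_loss = 20.0 * math.log10(r_m / r_ref_m)
    absorption_loss = atm_absorption_db_per_m * (r_m - r_ref_m)
    return spl_ref_db - spreading_loss - absorption_loss

# 80 dB measured at 10 m drops by ~6 dB at 20 m (spreading only).
level = spl_at_distance(80.0, 10.0, 20.0)
```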

Research lead: Damian Murphy.

Voice Science

The complexities of human voice production and perception, applied to speaking and singing, are at the centre of our work in this area. Applying engineering solutions and techniques, including algorithm design, MRI, and electrolaryngography, we investigate voice production, analysis and perception.

We place an emphasis on interdisciplinary approaches and collaboration to maximise impact in the areas of solo and ensemble singing performance in real and virtual environments, infant language development, vocal tract modelling for speech, and understanding and improving the health and wellbeing benefits of singing. We also host the York Centre for Singing Science.

Research leads: Helena Daffern, Damian Murphy.

Health and Wellbeing

Across the University of York a key research theme is understanding how our research activities can have an impact on people’s health and wellbeing. For the AudioLab this includes understanding the positive impact that singing, particularly in communal settings, can have on physical and mental health. This theme is also relevant to the AudioLab's work on environmental sound, including the impact of noise on sleep patterns and the emotional effects of exposure to environmental soundscapes.

Research leads: Helena Daffern, Damian Murphy.

Networked Music

Network music systems allow remote performers to connect over the internet and play music together in real time using low-latency streaming methods. In this context, immersive audio is used to simulate shared virtual acoustic spaces, or ‘online rooms’, in which networked performers can play music together. Each musician hears themselves and the other performers as if the ensemble were playing together in a real acoustic space. Effectively simulating the aural experience of playing together in a real space can improve the experience of networked musicians, and may also improve the objective quality of musical performance achieved using remote content-creation workflows. The project ‘Networked Musical Interactions in Shared Virtual Acoustic Spaces’ focuses on the evaluation of immersive audio rendering in the context of network music experiences, particularly practical use-case evaluations of systems used in current real-world applications. This evaluation informs the design of immersive audio and network music systems, providing the knowledge required to optimise performer experience and the quality of delivered musical performances.

Research leads: Helena Daffern, Gavin Kearney.