Sound Interactions In The Metaverse

PhD Training Programme

The metaverse has been heralded as the third wave of media convergence, where the real and the virtual, the physical and the digital, meet. Now a household term, it encompasses all aspects of extended reality (XR) and its supporting technologies (e.g. 5G, AI and machine learning). Alongside entertainment, the metaverse is set to significantly impact global industries including healthcare, education and telecommunications.

Centred around the development of immersive and interactive technologies, this PhD training programme will draw on York’s network of audio and related metaverse expertise, and on collaborative interdisciplinary working. It will encompass a breadth of topics rooted in audio engineering, communications, computer science, gaming, music, arts, education, psychology, linguistics and health sciences.

SIM (Sound Interactions in the Metaverse) is an exciting opportunity for students to work in a collegiate research environment with world-leading academics, companies and third sector organisations to conduct challenge-led research on sound-related metaverse technologies driven by the needs of stakeholders. We are looking for a diverse group of motivated candidates interested in building cross-disciplinary skills and expertise that link levels of research across technical development and testing, content design, application and user experience, and evaluation of broader impact on society.

Why apply to SIM?

The University of York is committed to conducting research that positively impacts people’s lives, and to providing equality of opportunity. York currently holds the Athena Swan Bronze Award for gender equality in higher education.

The SIM programme will train a cohort of PhD researchers in cross-disciplinary skills and expertise to lead the next generation of sound related metaverse technologies that are mutually beneficial to academia, industry and society. An initial cohort of five PhD students will be recruited in 2023, and will grow to eleven in 2024.

You will receive training in research methods, recording studio techniques for XR, immersive audio systems, responsible innovation and co-design, and how to engage users and the public with your work. In addition, you will develop a personal training plan with your supervisors, with access to existing taught modules across the University to address any topical skills gaps as required.

You will also have opportunities to collaborate with other students and researchers, create XR experiences, and share your work through:

  • The annual SIM symposium, showcasing student research alongside industry and academic speakers; 
  • The annual Music Technology Symposium hosted in the School of Arts and Creative Technologies;
  • Student-led events such as research seminars, reading groups, film and gaming evenings, and workshops.

SIM will give you the opportunity to work directly with industry and third sector organisations. Placements, research exchanges, and other forms of knowledge exchange will enable you to make a positive impact and to collaborate with researchers and practitioners. You can find a list of current partners on the AudioLab website.

These studentships cover the tuition fee at the home rate (£4,596 in 2022/23) and a stipend at the standard research council rate (£17,668 in 2022/23) for a period of up to 3.5 years.

Call for applications for projects for September 2023

The project descriptions below outline projects developed from challenges identified by our partners and indicate the overall topic of each PhD. Each project will be refined by the student to suit their skills and interests, in conjunction with the supervisor and partner:

PhD 1: Treating Auditory Hypersensitivity in Autistic Children using Virtual Ambisonics 

Partner: SONICAL 

Supervisors: Dan Johnston, School of Physics, Electronics, and Technology; Clare Fenton, COMIC (Child Oriented Mental Health innovation Collaboration) 

Project Description:

Currently in the UK, approximately 1 in 100 children is diagnosed with autism spectrum disorder (ASD). These children often have difficulties in processing auditory sensory information and can experience hypersensitivity to common everyday sounds, manifesting in negative behavioural responses. Without early and targeted therapy, this can negatively impact a child’s social development and, ultimately, their mental health, wellbeing and quality of life.

The aim of this project is to research and develop a system that uses hearable technology to monitor the correlation between real-time auditory input and biometric data, with the purpose of detecting a listener’s emotional response to sounds in their local environment. By measuring the electrodermal activity associated with stress, it will be possible to identify specific auditory stimuli that cause emotional distress, which can help inform psychological interventions and manage a child’s auditory environment. Furthermore, the system has potential application as an intelligent noise-control technique designed to manage real-world soundscapes.
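One way to picture the core idea, correlating sound events with a biometric stress signal, is a windowed correlation between a sound-level stream and an electrodermal activity (EDA) stream. The sketch below is purely illustrative and makes simplifying assumptions (the two streams are synchronised and equally sampled, and a Pearson correlation threshold stands in for a real emotional-response model); all names are hypothetical.

```python
import numpy as np

def flag_distress_events(audio_level_db, eda_microsiemens, window=50, threshold=0.7):
    """Flag windows where rising sound level co-occurs with rising
    electrodermal activity (EDA), a crude proxy for a stress response.

    Both inputs are equal-length 1-D arrays sampled at the same rate;
    `window` is the analysis window length in samples.  Returns the start
    indices of flagged windows.
    """
    flags = []
    for start in range(0, len(audio_level_db) - window + 1, window):
        a = audio_level_db[start:start + window]
        e = eda_microsiemens[start:start + window]
        # A constant signal has zero variance, so correlation is undefined;
        # treat such windows as containing no event.
        if np.std(a) == 0 or np.std(e) == 0:
            continue
        r = np.corrcoef(a, e)[0, 1]  # Pearson correlation within the window
        if r >= threshold:
            flags.append(start)
    return flags
```

In practice, a real system would need to account for the latency between an auditory stimulus and the EDA response, and would replace the simple threshold with a trained classifier.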

If you have any questions about this project please contact

Apply for this project at PhD in MT

PhD 2: Effect of Contextual Sound Perception on Metaverse User Experience

Industry Partner: EA Games (SEED research group)

PhD Supervisor: Alena Denisova, Computer Science; second supervisor tbc. 


Sound provides contextual information and sets the atmosphere for users of VR simulations, video games and the metaverse. Sounds are typically created by audio engineers or generated procedurally. Evaluating how users perceive and experience sounds, and assessing the quality of a generated sound (particularly if it is procedurally generated) and its suitability for a given context, event or virtual environment, is challenging due to the subjective nature of this assessment and the lack of benchmarks for evaluating such experiences. This project therefore seeks to address the challenge of evaluating sound perception in the metaverse, with a focus on how audio can be used to enhance the user experience, including but not limited to emotional engagement, immersion and presence. The goal is to grow our understanding of how sound perception affects the user experience in the metaverse, and to develop and test new techniques and methods to assess sound perception in the metaverse.

If you have any questions about this project please contact

Apply for this project at PhD in CS

PhD 3: Real-time Acoustics for Metaverse Experiences

Industry Partner: Sony Interactive Entertainment

PhD Supervisor: Damian Murphy; second supervisor tbc.


Engaging metaverse experiences require significant realism in the acoustics of the virtual environments. Whether creating a plausible virtual reality world or matching real-world acoustics in an augmented reality scene, realistic acoustic rendering is critical to immersion and sense of presence. Modern game engines are starting to provide solutions for geometric acoustic modelling using approximation methods such as the image-source method and ray tracing. Methods based on wave-based acoustic modelling are more physically accurate, but carry a significant computational overhead and are difficult to achieve for real-time interactive audio. This PhD project looks at developing appropriate strategies for combining these techniques in real time, as well as optimisations that can be achieved by exploiting the perceptual limitations of human spatial hearing.
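To illustrate the image-source approximation mentioned above: a specular reflection off a flat surface can be modelled by mirroring the source across the surface and treating the mirrored "image" as a second direct path. The minimal sketch below computes delay and 1/r distance attenuation for the direct path and a single first-order floor reflection; it assumes an infinite rigid plane and ignores frequency-dependent absorption, and all function names are hypothetical.

```python
import math

def image_source_first_order(src, lis, floor_z=0.0, c=343.0):
    """Direct path plus one first-order reflection off a rigid floor at
    z = floor_z, via the image-source method.

    src and lis are (x, y, z) positions in metres; c is the speed of
    sound in m/s.  Returns ((delay_s, gain), (delay_s, gain)) for the
    direct and reflected paths, with gain as simple 1/r attenuation.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Mirror the source across the reflecting plane z = floor_z.
    image = (src[0], src[1], 2.0 * floor_z - src[2])

    d_direct = dist(src, lis)
    d_reflect = dist(image, lis)  # image-to-listener distance = full reflected path
    return (d_direct / c, 1.0 / d_direct), (d_reflect / c, 1.0 / d_reflect)
```

Higher-order reflections repeat the mirroring recursively, which is why the method's cost grows quickly with reflection order; this is one motivation for the hybrid real-time strategies the project will investigate.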
The project is well suited to a student with strong acoustics knowledge and programming skills, looking to explore acoustic rendering using 3D game engines. The project is supported by Sony Interactive Entertainment, and will include an opportunity to develop the research through a placement at their office in London.

Skills required:
Strong knowledge of acoustics and audio propagation effects, and familiarity with game development using 3D game engines such as Unity or Unreal.


If you have any questions about this project please contact

Apply for this project at PhD in MT

Who should apply?

Candidates must have (or expect to obtain) a minimum of a UK upper second-class honours degree (2.1) or equivalent in Computer Science, Electronic Engineering, Music Technology or a related subject. Prior research or industry experience would also be an advantage. Candidates are expected to have a strong interest and experience in sound and audio technology, but may have formal training and qualifications from disciplines not directly associated with audio engineering or metaverse technologies. 

Successful applicants will be chosen based on their potential to do excellent research and to contribute to our goal of developing interactive sound technologies for a future metaverse that is beneficial to society. We especially welcome applications from candidates belonging to groups that are currently under-represented in metaverse-related industries; these include (but are not limited to) women, individuals from under-represented ethnicities, members of the LGBTQ+ community, people from low-income backgrounds, and people with physical disabilities.

The successful applicant should have a strong interest in sound, music and immersive audio technology, and good programming skills. These projects are highly multi-disciplinary in nature, and we welcome applicants from a broad range of core research backgrounds and interests, extending from audio signal processing and machine learning to user experience design, human-computer interaction, and relevant creative practice.

How to apply

When applying, please select ‘CDT in Sound Interactions in the Metaverse’. You do not need to provide a research proposal; just enter the name of the project you wish to apply for. The links to apply for each project can be found under the descriptions above.

You can contact us at

We expect strong competition for studentship places and encourage those interested to submit applications as early as possible. The deadline for applications is 12:00 noon (GMT) on Monday 20th March 2023.

In the meantime, if you have any questions, please contact Prof. Helena Daffern or Prof. Gavin Kearney.
