The Open Binaural Renderer for Eclipsa Audio


Eclipsa Audio and IAMF

Eclipsa Audio is based on the Immersive Audio Model and Formats (IAMF) specification (1), which is developed by Google, Samsung, and other key contributors within the Alliance for Open Media (AOM), of which the University of York is a part. It is released under AOM’s royalty-free license. This open-source approach represents more than just technological innovation – it broadens opportunities for creators, enabling immersive audio to be integrated into projects of all scales, including those that may not have access to traditional commercial toolchains. Eclipsa Audio unifies existing paradigms within a flexible, codec-agnostic container, enabling productions that combine channel beds and Ambisonic elements. This approach ensures compatibility with existing standards while extending functionality and maintaining high audio quality across diverse delivery platforms.

Open Binaural Renderer

The Open Binaural Renderer (OBR) is an open-source, community-driven project that originated from the University of York’s AudioLab. Developed specifically for the Eclipsa Audio / IAMF immersive audio format, OBR can deliver a high-quality spatial audio experience over headphones. Unlike proprietary “black box” systems, OBR is built on a foundation of scientific rigour and creative practice, allowing developers, researchers, and creators to understand, evaluate, and contribute to its evolution.

The Challenge: Why Binaural Rendering is Difficult

The perception of binaural audio is multidimensional and complex. By default, binaural rendering introduces spectral changes to the original signal, altering its timbre and compromising a plausible translation of the mix from a multichannel loudspeaker playback system. Non-personalised solutions degrade the accuracy and precision of binaural reproduction, leading to a collapse of externalisation, increased front-back confusion, an excessively wide sound stage, and a loss of distance perception (4). Renderers that introduce a fixed virtual listening room response often suffer from the room divergence problem (5).
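At its core, static binaural rendering convolves each source signal with a pair of head-related impulse responses (HRIRs), and that filtering is exactly where the spectral changes described above originate. The sketch below is a minimal illustration in Python with synthetic, hypothetical HRIRs; it is not a description of OBR's actual signal chain.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono source by convolving it with a left/right HRIR pair.

    The HRIRs encode the direction-dependent spectral and interaural cues;
    this filtering is the source of the timbral changes discussed above.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Synthetic stand-ins for measured HRIRs: for a source on the listener's
# left, sound arrives earlier and louder at the left ear.
fs = 48000
hrir_l = np.zeros(64)
hrir_l[0] = 1.0      # direct arrival, full level
hrir_r = np.zeros(64)
hrir_r[12] = 0.5     # ~0.25 ms later and 6 dB quieter at the far ear

mono = np.random.default_rng(0).standard_normal(fs)  # 1 s of noise
binaural = render_binaural(mono, hrir_l, hrir_r)     # shape (2, fs + 63)
```

Real HRIR sets are measured (or personalised) per direction, which is why non-personalised filters shift both timbre and localisation cues away from what an individual listener expects.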

Our Approach: Rigorous, Iterative Improvement

Taking these inherent problems of binaural reproduction into account, we are committed to systematic improvement through rigorous testing and creative refinement of the OBR rendering algorithms. As part of our recent pilot study, we have established a comprehensive evaluation framework combining objective measurements with perceptual testing. The study compared the initial OBR implementation against leading binaural renderers. The objective testing included an investigation of parameters like frequency response and spatial cues, while the listening tests helped to uncover listener preferences across diverse content types – from orchestral recordings to podcasts. The results revealed both technical opportunities (like addressing direct and reflected sound interference issues) and creative insights (such as the content-dependent nature of listening preferences). These findings directly informed the next iteration of OBR, and although the full report is currently in the peer-review process, you can listen to the binaural audio rendered using the refined version of OBR below.
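Spatial cues of the kind measured in the objective tests, such as interaural level and time differences, can be computed directly from the rendered ear signals. The following is a hedged sketch in Python using a toy binaural pair rather than real renderer output; it is not the evaluation framework used in the study.

```python
import numpy as np

def broadband_ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB (left relative to right)."""
    e_left = np.sum(left.astype(np.float64) ** 2)
    e_right = np.sum(right.astype(np.float64) ** 2)
    return 10.0 * np.log10((e_left + eps) / (e_right + eps))

def itd_samples(left, right):
    """Interaural time difference as the lag of the cross-correlation peak.

    Negative values mean the left channel leads the right.
    """
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# Toy binaural pair: the right channel is the left delayed by 10 samples
# and attenuated by 6 dB, mimicking a source off to the listener's left.
rng = np.random.default_rng(1)
left = rng.standard_normal(4800)
right = 0.5 * np.concatenate([np.zeros(10), left[:-10]])

ild = broadband_ild_db(left, right)  # close to +6 dB
itd = itd_samples(left, right)       # -10 samples (left leads)
```

In a real evaluation these cues would be computed per frequency band and compared against a loudspeaker-based reference; the broadband versions above only sketch the idea.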

Audio Samples


Additionally, you can explore the Stellar track in the tutorial below, which provides an in-depth dive into immersive composition and music production using Eclipsa Audio plugins.

Because OBR is an open-source project, its improvement will benefit the entire community. The testing frameworks are documented and repeatable, providing tools for ongoing evaluation as the renderer evolves. Real-world feedback from creators informs future refinements. This iterative approach – measure, listen, refine, validate – means that the OBR doesn’t just exist in a finished form. It’s a living project that improves systematically over time, shaped by both scientific rigour and creative practice.

References

  1. https://aomediacodec.github.io/iamf/
  2. https://github.com/google/obr/
  3. On the Design of the Binaural Rendering Library for Eclipsa Audio Immersive Audio Container, Tomasz Rudzki, Gavin Kearney, Jan Skoglund, 158th Convention of the Audio Engineering Society, Warsaw, Poland, 2025
  4. Challenges in binaural conversion of multichannel electroacoustic compositions, Jakob Gille, Proceedings of the 22nd Sound and Music Computing Conference (SMC2025), Graz, July 2025
  5. Towards determining thresholds for room divergence: A pilot study on perceived externalization, Sebastià V. Amengual Garí, Henrik G. Hassager, Florian Klein, Johannes M. Arend, Philip W. Robinson, Immersive and 3D Audio: from Architecture to Automotive (I3DA), 2021
  • https://aomediacodec.github.io/iamf – IAMF specification
  • https://github.com/AOMediaCodec/iamf-tools – encoding / decoding
  • https://github.com/AOMediaCodec/libiamf – reference decoder
  • https://github.com/google/obr – Open Binaural Renderer
  • https://github.com/trsonic/obr-plugin – Open Binaural Renderer VST wrapper
  • https://github.com/google/eclipsa-audio-plugin – Eclipsa Audio Plugins source code
  • https://www.eclipsaapp.com – compiled Eclipsa Audio Plugins
  • Encoding to IAMF using ffmpeg

The Team

Katia Sochaczewska joined AudioLab in December 2024 to undertake the evaluation of the OBR renderer. Her work focuses on establishing and refining objective and perceptual testing frameworks, creating spatial audio test content, and managing the listening tests. She also supports the iterative refinement of the renderer itself.

Tomasz Rudzki joined AudioLab in 2018 to pursue his PhD under the supervision of Prof. Gavin Kearney. While his work at the lab initially focused on the evaluation of low-bitrate compression of Ambisonics, it later gravitated towards perceptually optimised binaural rendering methods. The research conducted during that period laid the foundations for the Open Binaural Renderer (OBR). Tomasz remains the main contributor to the OBR project, which is supported by Google.

Gavin Kearney is a Professor of Audio Engineering and leads the Immersive Audio research team at York AudioLab. He provides strategic and technical direction for the OBR evaluation and refinement project. His vision was essential in establishing the renderer development, and he continues to steer its evaluation strategy and alignment with wider immersive-audio research at York and within the Eclipsa Audio ecosystem.

The AudioLab team is supported by members of the Google Eclipsa Audio team, including Jan Skoglund, Felicia Lim, Yero Yeh and Jani Huoponen.