
SONICOM

From OpenVerse Wiki
=== SONICOM Project ===
{| class="wikitable" style="margin:auto"
|-
! CORDIS Reference !! Start date !! End date !! Coordinator
|-
| https://cordis.europa.eu/project/id/101017743 || 01/01/2021 || 30/06/2026 || IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE / UK
|}

=== Project description ===
Sound is an integral part of the human experience. As one of the most important ways of sensing and interacting with our environment, sound plays a major role in shaping how the world is perceived. In virtual or augmented reality (VR/AR), simulating spatially correct audio is vital to delivering an immersive virtual experience. However, acoustic VR/AR presents many challenges. Using the power of artificial intelligence, the EU-funded SONICOM project aims to deliver the next milestone in immersive audio simulation. The goal is to design the next generation of 3D audio technologies, provide tailored audio solutions and significantly improve how we interact with the virtual world.
=== Project outputs ===
==== Technological assets ====
{| class="wikitable sortable"
! Title !! Type of Asset !! Link / DOI !! Description
|-
| The SONICOM HRTF Dataset || Dataset || https://doi.org/10.17743/jaes.2022.0066 || Dataset of Head-Related Transfer Functions for artificial intelligence-driven immersive audio.
|-
| PAN-AR || Dataset || https://doi.org/10.1145/3678299.3678332 || A multimodal dataset featuring higher-order ambisonics room impulse responses and spherical pictures.
|-
| NumCalc || Open-Source Software || https://doi.org/10.1016/j.enganabound.2024.01.008 || An open-source Boundary Element Method (BEM) code for solving acoustic scattering problems.
|-
| Auditory modelling toolbox (AMT) || Software || https://ecosystem.sonicom.eu/tools/1 || Toolbox to facilitate reproducible research in auditory modelling.
|-
| Frambi || Software Framework || https://doi.org/10.61782/fa.2023.0494 || A flexible software framework tailored for auditory modelling based on Bayesian inference.
|}
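
==== Example: applying an HRTF pair ====
Head-related transfer functions, such as those in the SONICOM HRTF Dataset, capture how a sound is filtered by the listener's head and ears before reaching each eardrum; convolving a mono signal with a left- and right-ear impulse response (HRIR) pair spatialises it for headphone playback. The sketch below illustrates the idea with synthetic stand-in HRIRs; real measurements are typically distributed as SOFA files, whose loading is not covered here.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialise a mono signal by convolving it with a
    left/right head-related impulse response (HRIR) pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, len(mono)+len(hrir)-1)

# Illustrative synthetic HRIRs: exponentially decaying noise, with the
# right ear delayed to mimic a crude interaural time difference.
# Real HRIRs come from measured datasets such as the SONICOM HRTF Dataset.
rng = np.random.default_rng(0)
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 32.0)
hrir_r = np.roll(hrir_l, 8)
signal = rng.standard_normal(1024)

stereo = binaural_render(signal, hrir_l, hrir_r)
print(stereo.shape)  # (2, 1279)
```

In practice the HRIR pair is selected (or interpolated) for the desired source direction from a measured grid of directions; the convolution step itself is unchanged.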

Latest revision as of 13:34, 22 April 2026
