DIDYMOS-XR

=== DIDYMOS-XR Project ===
{| class='wikitable' style='margin:auto'
|-
! CORDIS Reference !! Start date !! End date !! Coordinator !! Project website
|-
| https://cordis.europa.eu/project/id/101092875 || 01/01/2023 || 31/12/2025 || JOANNEUM / Austria || https://didymos-xr.eu/
|}
=== Project description ===
The digital transformation and the availability of more diverse and cost-effective means of 3D capture have led to the creation of digital twins of physical environments as well. Based on such digital twins, various applications could be built using real-time data from real-world environments, serving as a blueprint for smart cities and for improving performance and efficiency across industries. Currently, creating high-fidelity digital twins is costly, updating them requires manual intervention, and integrating data from heterogeneous sensors is challenging. The EU-funded DIDYMOS-XR project will implement technology to create improved large-scale digital twins that stay synchronised with the real world. To this end, DIDYMOS-XR will investigate and develop methods for data reconstruction and mapping from heterogeneous inputs, including static and mobile sensors, as well as AI-based data fusion, scene understanding and rendering.
