PRESENCE Project
Project description
The concept of presence can be understood as a synthesis of interrelated psychophysical ingredients in which multiple perceptual dimensions intervene. A better understanding of how specific aspects, such as plausibility (the illusion that virtual events are really happening), co-presence (the illusion of being with others), or place illusion (the feeling of being there), impact XR experiences is key to improving their quality. The availability and performance of current technologies do not yet deliver high levels of presence in XR, which is essential to get us closer than ever to the perennial VR dream: to be anywhere, doing anything, together with others, from any place.

PRESENCE will impact multiple dimensions of presence in physical-digital worlds, addressing three main challenges: i) how to create realistic visual interactions among remote humans, delivering high-end holoportation based on live volumetric capturing, compression and optimization techniques under heterogeneous computation and network conditions; ii) how to provide realistic touch among remote users and synthetic objects, developing novel haptic systems and enabling spatial multi-device synchronisation in multi-user scenarios; and iii) how to produce realistic social interactions among avatars and agents, generating AI virtual humans that represent actual users or AI agents.

PRESENCE will ensure the future uptake of research results through a threefold evaluation method: 1) each technology will be evaluated independently to understand its impact on the illusion of presence; 2) each component will be evaluated by the integration team, which will provide scientific and technical feedback to facilitate its use in each project iteration and beyond the project scope, towards technology transfer and exploitation; and 3) all components will be integrated into two demonstrators (professional and social setups), following a human-centred design approach and ultimately evaluating the user experience.
Project outputs
Publications
| Domain | Type of output | Title | DOI URL |
| --- | --- | --- | --- |
| AI, Machine Learning & Data Science | Conference proceedings | I Hear, See, Speak & Do: Bringing Multimodal Information Processing to Intelligent Virtual Agents for Natural Human-AI Communication | https://doi.org/10.1109/VRW66409.2025.00469 |
| Computer Vision, 3D Modeling & Rendering | Conference proceedings | A flexible toolkit for real-time action recognition of virtual humans in XR/AR environments | https://doi.org/10.5281/ZENODO.15974094 |
| Computer Vision, 3D Modeling & Rendering | Conference proceedings | LiveSkeleton: High-Quality Real-Time Human Tracking and Pose Estimation | https://doi.org/10.1109/ISM63611.2024.00054 |
| Ethics, Society, Arts & Culture | Peer reviewed articles | Immersive documentary journalism: exploring the impact of 360° virtual reality compared with a 2D screen display on the responses of people toward undocumented young migrants to Spain | https://doi.org/10.3389/FCOMM.2024.1474524 |
| Extended Reality (VR/AR/MR) & HCI | Conference proceedings | Methodological Reflections on Early-Stage Requirement Gathering and Prioritization For Immersive Extended Reality Applications | https://doi.org/10.5753/IMXW.2025.7940 |
| Extended Reality (VR/AR/MR) & HCI | Conference proceedings | A Toolkit for Creating Intelligent Virtual Humans in Extended Reality | https://doi.org/10.1109/VRW66409.2025.00149 |
| Extended Reality (VR/AR/MR) & HCI | Peer reviewed articles | Confusing virtual reality with reality – An experimental study | https://doi.org/10.1016/J.ISCI.2025.112655 |
| Robotics, Manufacturing & Industry 4.0 | Peer reviewed articles | The Role of Sensorimotor Contingencies and Eye Scanpath Entropy in Presence in Virtual Reality: a Reinforcement Learning Paradigm | https://doi.org/10.1109/TVCG.2025.3547241 |
Technological assets
| Title | Type of Asset | Link / DOI | Description |
| --- | --- | --- | --- |
| LiveSkeleton | Software / Algorithm | https://doi.org/10.1109/ISM63611.2024.00054 | A system providing high-quality real-time human tracking and pose estimation. |
| Flexible toolkit for real-time action recognition | Toolkit / Software | https://doi.org/10.5281/ZENODO.15974094 | A flexible, reusable toolkit for the real-time action recognition of virtual humans in XR/AR environments. |