== MASTER Project ==
{| class="wikitable" style="margin:auto"
|-
! CORDIS Reference !! Start date !! End date !! Coordinator !! Project website
|-
| https://cordis.europa.eu/project/id/101093079 || 01/01/2023 || 30/06/2026 || PANEPISTIMIO PATRON || https://www.master-xr.eu/project/
|}
=== Project description ===
The transition to Industry 4.0 requires the adoption of new robotic and automation tools in manufacturing processes. Adopting such tools in turn requires that workers understand how to use them and take full advantage of them. At the same time, extended reality (XR) technologies have reached sufficient maturity to enter the domain of industrial applications and, among other things, support the training of operators. The EU-funded MASTER project will use XR tools to address these training challenges and create new training material that helps operators learn and adapt to the new automation tools, boosting the XR ecosystem's capacity for training in robotics in manufacturing. The project will launch two Open Calls: the first will invite selected companies to enhance the XR platform with the necessary training tools and features, while the second will test the platform and tools first-hand by creating training material.
=== Project outputs ===
==== Publications ====
{| class="wikitable sortable"
! Domain !! Type of output !! Title !! DOI URL
|-
| Computer Vision, 3D Modeling & Rendering || Peer-reviewed article || Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction || https://doi.org/10.3390/s23115126
|-
| Extended Reality (VR/AR/MR) & HCI || Peer-reviewed article || A review of machine learning in scanpath analysis for passive gaze-based interaction || https://doi.org/10.3389/frai.2024.1391745
|-
| Healthcare, Medicine & Accessibility || Conference proceedings || Interactive Fixation-to-AOI Mapping for Mobile Eye Tracking Data based on Few-Shot Image Classification || https://doi.org/10.1145/3581754.3584179
|-
| Healthcare, Medicine & Accessibility || Conference proceedings || IMETA: An Interactive Mobile Eye Tracking Annotation Method for Semi-automatic Fixation-to-AOI mapping || https://doi.org/10.1145/3581754.3584125
|}
==== Technological assets ====
{| class="wikitable sortable"
! Title !! Type of Asset !! Link / DOI !! Description
|-
| IMETA || Software Tool || https://doi.org/10.1145/3581754.3584125 || An interactive mobile eye-tracking annotation method for semi-automatic fixation-to-AOI mapping.
|}
''Latest revision as of 12:48, 22 April 2026''