Research Project
SOCIAL PIA: Cooperative Social PIA model for Cybernetics Avatars (Moonshot Research and Development Program)
Type
UPC Project
Start Date
01/04/2023
End Date
30/11/2025
Project Code
JPMJMS2011-85

Staff
-
Bo, Valerio
Researcher
-
Santamaria, Angel
Researcher
-
Grosch, Patrick John
Researcher
-
Puig-Pey, Ana Maria
Researcher
-
Garrell, Anaís
Researcher
-
Dalmasso, Marc
PhD Student
-
Hriscu, Lavinia Beatrice
PhD Student
-
Domínguez, José Enrique
PhD Student
-
Bejarano, Edison Jair
PhD Student
-
Herrero, Fernando
Support
-
Gil, Oscar
Member
-
Laplaza, Javier
Member
Project Description
SOCIAL-PIA (Cooperative Social PIA, Perception-Intention-Action, model for Cybernetics Avatars) is a project funded by JST (Japan Science and Technology Agency) under Moonshot Goal 1, “Realization of a society in which human beings can be free from limitations of body, brain, space and time by 2050” (Prof. Norihiro Hagita), and within the Moonshot project “The Realization of an Avatar-Symbiotic Society where Everyone can Perform Active Roles without Constraint” (Prof. Hiroshi Ishiguro).
The aim of the project is to develop cooperation between humans (both the operator and the end-user or bystander) and Cybernetic Avatars (CAs) through the PIA (Perception-Intention-Action) paradigm. The project conducts research on human intention and how it relates to the Situation Awareness and Decision-Making processes of the PIA cycle, with two goals in mind: to assist the operator in handling interactive/cooperative tasks, and to improve the autonomous interaction/cooperation between CAs and end-users/bystanders. The PIA paradigm makes it possible to anticipate human actions, to act proactively, or to follow a pre-established plan. The project will be demonstrated in interactive and cooperative telepresence tasks, studying the differences between European and Japanese cultures.
Project Publications
Journal Publications
-
M. Dalmasso, J.E. Domínguez, I.J. Torres, P. Jiménez, A. Garrell Zulueta and A. Sanfeliu. Shared task representation for human–robot collaborative navigation: The collaborative search case. International Journal of Social Robotics, 16: 145-171, 2024.
-
J.E. Domínguez, N.A. Rodríguez and A. Sanfeliu. Perception–intention–action cycle in human–robot collaborative tasks: The collaborative lightweight object transportation use-case. International Journal of Social Robotics, 2024, to appear.
-
O. Gil and A. Sanfeliu. Human-robot collaborative minimum time search through sub-priors in ant colony optimization. IEEE Robotics and Automation Letters, 9(11): 10216-10223, 2024.
-
J. Laplaza, F. Moreno-Noguer and A. Sanfeliu. Enhancing robotic collaborative tasks through contextual human motion prediction and intention inference. International Journal of Social Robotics: 1-20, 2024, to appear.
Conference Publications
-
F. Gebelli, L.B. Hriscu, R. Ros, S. Lemaignan, A. Sanfeliu and A. Garrell Zulueta. Personalised explainable robots using LLMs, 2025 ACM/IEEE International Conference on Human-Robot Interaction, 2025, Melbourne, Australia, pp. 1304-1308.
-
J.E. Domínguez and A. Sanfeliu. Voice command recognition for explicit intent elicitation in collaborative object transportation tasks: a ROS-based implementation, 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, Boulder, CO, USA, pp. 412-416.
-
M. Dalmasso, V. Sanchez-Anguix, A. Garrell Zulueta, P. Jiménez and A. Sanfeliu. Exploring preferences in human-robot navigation plan proposal representation, 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, Boulder, CO, USA, pp. 369-373.
-
J.E. Domínguez and A. Sanfeliu. Exploring transformers and visual transformers for force prediction in human-robot collaborative transportation tasks, 2024 IEEE International Conference on Robotics and Automation, 2024, Yokohama (Japan), pp. 3191-3197.
-
J.E. Domínguez and A. Sanfeliu. Anticipation and proactivity. Unraveling both concepts in human-robot interaction through a handover example, 33rd IEEE International Symposium on Robot and Human Interactive Communication, 2024, Pasadena, California, USA, pp. 957-962, IEEE.
-
J.E. Domínguez and A. Sanfeliu. Force and velocity prediction in human-robot collaborative transportation tasks through video retentive networks, 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2024, Abu Dhabi, UAE, pp. 9307-9313.
-
J. Laplaza, J.J. Oliver, A. Sanfeliu and A. Garrell Zulueta. Body gestures recognition for social human-robot interaction, 7th Iberian Robotics Conference, 2024, Madrid, Spain.
-
S.H. Seo, D.J. Rea, K. Kochigami, T. Kanda, J.E. Young, Y. Nakano, A. Sanfeliu and H. Ishiguro. Symbiotic society with avatars (SSA): Toward empowering social interactions beyond space and time, 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024, Boulder, CO, USA, pp. 1352-1354.
-
E. Repiso, A. Garrell Zulueta and A. Sanfeliu. Real-life experiment metrics for evaluating human-robot collaborative navigation tasks, 32nd IEEE International Symposium on Robot and Human Interactive Communication, 2023, Busan, Korea, pp. 660-667.
-
O. Gil and A. Sanfeliu. Human motion trajectory prediction using the Social Force Model for real-time and low computational cost applications, 6th Iberian Robotics Conference, 2023, Coimbra, Portugal, pp. 235-247, Springer.
-
J.E. Domínguez and A. Sanfeliu. Inference vs. explicitness. Do we really need the perfect predictor? The human-robot collaborative object transportation case, 32nd IEEE International Symposium on Robot and Human Interactive Communication, 2023, Busan, Korea, pp. 1866-1871.
-
J.E. Domínguez and A. Sanfeliu. Improving human-robot interaction effectiveness in human-robot collaborative object transportation using force prediction, 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2023, Detroit, MI, USA, pp. 7839-7845.