Publication
Trust, acceptance and social cues in human-robot interaction (SCRITA) (Editorial)
Journal Article (2024)
Journal
International Journal of Social Robotics
Pages
1047-1048
Volume
16
Doc link
http://dx.doi.org/10.1007/s12369-024-01154-w
Authors
-
Rossi, Alessandra
-
Holthaus, Patrick
-
Moros, Silvia
-
Lakatos, Gabriella
-
Andriella, Antonio
-
Scheunemann, Marcus
-
Van Maris, Anouk
Abstract
Trust is a fundamental aspect that helps to foster effective collaboration between people and robots. It is imperative that people can trust robots not to create a hazardous situation, such as starting a fire when trying to make a cup of tea or giving the wrong medicine to a vulnerable person. Likewise, people should be able to trust robots not to create an unsafe situation, such as leaving the door open unattended or providing personal information to strangers - and potentially to thieves. Trust, however, is a complex feeling, and it can be affected by several factors that depend on the human, the robot and the context of the interaction. A lack of trust might hinder a robot’s assistance or lead to a loss of interest in robots after the novelty effect fades, while unreasonable over-trust in a robot’s capabilities could even have fatal consequences. It is therefore important to design and develop mechanisms that both increase and mitigate people’s trust in service and assistive robots. A positive and balanced trust, indeed, is fundamental for building a high-quality interaction. Similarly, socially aware robots are perceived more positively by people in social contexts and situations. Social robotics systems, therefore, should integrate people’s direct and indirect modes of communication. Moreover, robots should be capable of self-adapting to satisfy people’s needs (e.g. personality, emotions, preferences, habits) and of incorporating reactive and predictive meta-cognition models to reason about the situational context (e.g. their own erroneous behaviours) and provide socially acceptable behaviours.
This special issue comprises 24 manuscripts. The following collection of papers covers a wide range of topics of interest, identifying some of the principal points for exploring the role of trust in social robotics in order to effectively design and develop socially acceptable and trustworthy robots. The contributions address different aspects of people’s acceptance and trust of robots in various human-centred environments, such as educational, assistive and collaborative scenarios. Some works introduce new notions of acceptance and trust for autonomous artificial agents, such as tolerance and distrust, also drawing on interdisciplinary perspectives from sociology, psychology and philosophy. Some papers focus on defining the factors that affect people’s trust in robots, such as society’s general attitudes, perceptions, prejudices and expectations, as well as cognitive and emotional effects. Other contributions, instead, investigate how to recover from a loss of trust, for example after different types of errors, or how to enhance trust in robots, for instance by personalising gaze, navigation, workload and interaction based on individuals’ characteristics (e.g., personality traits). Another group of works proposes models for measuring and evaluating trust during a human-robot interaction. Finally, some authors in this special issue provide a necessary focus on moral considerations, ethics, the definition of existing and new policies, and the integration of robotics and AI into the EU’s policy plans.
Categories
robots
Scientific reference
A. Rossi, P. Holthaus, S. Moros, G. Lakatos, A. Andriella, M. Scheunemann and A. Van Maris. Trust, acceptance and social cues in human-robot interaction (SCRITA) (Editorial). International Journal of Social Robotics, 16: 1047-1048, 2024.