
Clinical and Experimental Psychology

Short Communication - (2022) Volume 8, Issue 2

Insights from Psychology and Philosophy on Addressing Joint Action Issues in HRI

Carla Steve*
 
*Correspondence: Carla Steve, Editorial Board office, Clinical and Experimental Psychology, Belgium, Email:


Abstract

The development of increasingly proficient robots for participating in joint actions with humans has paralleled the tremendous increase of research in Human-Robot Interactions (HRI) over the last few decades. However, ensuring fluid interactions and maintaining human motivation through the various levels of collaborative activity has proven to be a substantial difficulty. The current study presents several viewpoints derived from psychology and philosophy illustrating the crucial importance of communication in human interactions after analysing current literature on joint action in HRI, leading to a more specific explanation of these problems. We suggest that communicative cues can help with coordination, prediction, and motivation in the context of collaborative action, from individual recognition to the expression of commitment and social expectations.

Keywords

HRI, Joint action, Communication, Mutual recognition, Philosophy, Psychology

Introduction

In the coming decades, human societies will see a widespread deployment of robotic agents in both public and private social interactions. Social robotics is currently designing and constructing robotic agents for use in a variety of settings, including gaming, education, and therapy. For these advancements to continue, social robotics must create robots that can interact with humans and participate in shared activities that demand high degrees of coordination. This requirement explains the rapid growth of the discipline of Human-Robot Interaction (HRI), which aims to create various methods for enabling robots to interact socially [1, 2, 3]. This approach to social robotics focuses on equipping robots with mechanisms based on the human psychological principles that underpin cooperative activities. Theory of mind, emotion recognition, and human-aware navigation are some of the systems that roboticists have attempted to build. Despite these advancements, HRI research reveals that equipping robots with social abilities might sometimes, counterintuitively, degrade user experience and impede human-robot interaction [4]. Human-robot interactions may be undermined, for example, by the novelty effect, or by the gap between what people expect of the robot (especially when they have had no prior contact with robots and their expectations are shaped by popular culture) and the robot's actual competence, which can make the robot seem deceptive. Furthermore, some of the robot's traits or behaviours, such as head-nodding, may cause attributions of mind, resulting in a sense of strangeness or unfamiliarity, which might affect humans' trust in the robot.
These negative consequences may be exacerbated by the implementation of specific social capacities and behaviours, particularly when they are implemented in isolation [5]. Specialization frequently requires researchers to "dissect" particular processes or circumstances, so focusing on some socio-cognitive mechanisms in isolation usually goes hand in hand with expertise in the topic. This appears to be necessary in order to acquire such expertise, but it also raises serious concerns about social robotics and its ambition to equip robots with the ability to collaborate with people. Is it possible that we are developing expertise at the expense of seeing the "big picture"? Is there something we are missing when we focus on specific socio-cognitive abilities? Is it possible to experiment with alternative tactics to ensure fluency when interacting with the robot? Are there any general approaches to influencing human attitudes during HRI?

HRI and collaborative action: (some) contemporary issues

The concept of joint action encompasses a large range of social exchanges and encounters. Broadly defined, a joint action is any type of social interaction in which two or more agents coordinate their actions in order to achieve a common goal. However, the precise definition of joint action has been debated in philosophy and psychology. At a minimum, joint actions require that the parties coordinate "their actions in place and time to effect environmental change."

Prerequisites for collaborative action

Leaving aside the debate over the concept of joint action, we want to concentrate on the processes that allow joint actions to be carried out. Coordination, planning, and motivational alignment appear to be three interconnected mechanisms that are required for collaborative action, each of which is supported by other processes. There has been a lot of theoretical and empirical research into these processes, from sharing a common ground to anticipating a partner's actions through emergent coordination [6]. Most importantly, cooperative activities require individuals to anchor their goals in the real world and generate specific coordinated actions. Entrainment or rhythmic synchronisation, perception-action matching, perception of joint affordances, emotion understanding, joint attention, rational co-efficiency, or action modelling are some of the mechanisms that can be used to achieve this coordination. Partners can use so-called 'coordination smoothers' to increase coordination by reducing the temporal variability of their activities, or adapt their actions even further when they have a simpler sub-task to complete. Intentional coordination, also known as planned coordination, requires the partners to: (i) represent their own and others' actions, as well as the consequences of these actions, (ii) represent the plan's hierarchy of sub-goals and sub-tasks, (iii) generate predictions of their joint actions, and (iv) monitor progress toward the joint goal in order to possibly compensate or assist others in achieving their contributions. Indeed, it is common to plan numerous parts of a collaborative action, such as the representation of the goal and the entire plan, and even the sequence of activities to be performed. Different techniques, such as high-level processes like theory of mind, team reasoning, or verbal negotiation, could be used to generate these types of representations [7, 8].
Task co-representations, which allow individuals to represent the intricacies of each other's task, are an example of the mechanisms involved in planning a collaborative action. Several studies have shown that people are prone to representing other people's actions, even when doing so is detrimental to their own task performance. This ability emerges early in development; for example, during a collaborative activity, 5-year-old children can incorporate their partner's contribution into their own action plan, as shown by the establishment of a joint Simon effect. Individuals can use these representations to make predictions about each other's activities, which helps partners adjust and coordinate their actions. Interestingly, individuals can also aid their own and others' predictions by communicating relevant and reliable information for cooperative action. The goal is to make behaviours more transparent and predictable, allowing for more fluid and successful interactive decision-making. Sensorimotor communication is an intriguing example of these techniques. Actors may exaggerate their movements or kinematic features to make their actions more understandable to their partners, according to several studies. Aside from such implicit communicative devices, participants in collaborative activity frequently negotiate sub-tasks, sub-goals, or strategies on the fly through various explicit exchanges in order to proceed with the collective task.
Finally, communicative mechanisms improve coordination and prediction by providing relevant information about the specific course of action, as well as information that influences the partner's motivational forces, such as motivating the other to stay engaged in the joint task or meeting others' expectations [9].

Attempts and challenges in HRI collaboration

The development of pleasant robots, or robots whose appearance imitates human physical attributes, can improve users' experience and therefore engagement, according to a widely held belief in social robotics. According to this viewpoint, social robotics should attempt to create robots whose appearance and conduct can elicit positive emotions in humans (e.g. curiosity or likability). Robotic emotional expressions can be implemented in a variety of ways, all of which have the potential to act as communicative cues. For example, while many works feature anthropomorphic faces with facial expressions, others use posture, bodily movements, or pacing patterns. Eye gazes, gestures, and movement speed can all be used to elicit pro-social attitudes and emotional states [10]. Furthermore, through kinematic signalling, some of the robot's gestures or readable motions can improve joint action prediction. Other research has demonstrated that stereotyped motions, as well as straight lines and extra gestures, are important determinants of readable robot behaviour. In a similar vein, other ways of anticipating motions have been investigated. Communication is unquestionably bidirectional, requiring not only the production of signals and the provision of information, but also the comprehension and interpretation of the signals of others. In this regard, considerable effort has been made to equip robots with the ability to recognise various human social cues, such as gazes, gestures, and facial emotions.

Now, how might these efforts be directed toward creating more efficient social robots capable of cooperating? The aforementioned skills, such as recognising human movements and facial expressions, or producing eye gazes, are geared toward lowering various forms of uncertainty and, as a result, increasing the willingness to communicate by supplying various pieces of information that become common knowledge. Attempts to build a better mutual understanding between humans and robots have not always been effective, despite great advances in the equipment of robotic agents with socio-cognitive capabilities. Recent HRI studies have shown that building social robots with social "ingredients" does not always increase human interaction or experience with the robot.

Obstacles and difficulties in the field of social robots

In terms of human motivation, empirical research examining users' experiences with robots has identified numerous variables that may impair interactions between human and robotic agents. Humans are not always inclined to interact with robotic agents, according to many HRI studies. The well-known Uncanny Valley effect is a frequently mentioned issue: a psychological phenomenon in which humans feel uneasy or repulsed when they see a robot or artefact that looks or behaves almost, but not quite, like a person. Other studies, however, suggest that the effect may be linked to certain nonverbal behaviours or, more broadly, to any cue from which the existence of a mind in the robot can be inferred, implying that not only appearance but also the implementation of certain actions and skills in robots may have drawbacks for HRI. Negative feelings and discomfort experienced while observing or interacting with a robot can deter humans from engaging in social interactions with robots. This could be especially crucial in situations where the human-robot relationship must be maintained, such as when robotic companions provide elder care, teaching and childcare, or therapy. This social uncanniness may reduce the motivation to engage with robots, and while this does not necessarily imply abandoning joint action, it can exacerbate negative features that may harm the HRI. Implicit biases and a lack of trust are two other potentially negative attitudes toward robots. The former has been examined in a series of experiments using implicit association tests, in which reaction times are measured based on associations between a target (a robot or a human) and positive or negative traits.

References


Citation: Steve C. Insights from Psychology and Philosophy on Addressing Joint Action Issues in HRI. Clin Exp Psychol, 2022, 8(2), 012-013.

Received: 10-Feb-2022, Manuscript No. CEP-22-54490; Editor assigned: 12-Feb-2022, Pre QC No. CEP-22-54490(PQ); Reviewed: 18-Feb-2022, QC No. CEP-22-54490(Q); Revised: 20-Mar-2022, Manuscript No. CEP-22-54490(R); Published: 28-Feb-2022, DOI: 10.35248/2471-2701.22.8(2).297

Copyright: 2022 Steve C. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.