Traditionally, humans alone have been accountable for managing social interactions and activities. However, rapid advances in robotics have led to robots being deployed in increasingly complex roles, and they are becoming an integral part of everyday and professional life. As robots grow more sophisticated, the possibilities for human–robot collaboration expand, allowing machines to take on tasks that free up human resources while continuing to evolve in response to a variety of human needs. The rapid integration of robots into diverse domains, including education (Belpaeme et al., 2018; Mubin et al., 2013), law enforcement (Szocik & Abylkasymova, 2022), healthcare (Joseph et al., 2018; Pepito et al., 2020), and even prison services, as seen in Korean prisons where robots patrol and monitor inmate behavior (Bloss, 2012), underscores the need for in-depth research on human–robot interactions. Humanoid robots are increasingly employed in elderly care for physical exercise (Görer et al., 2017) and as companions for hospitalized children, enhancing their emotional well-being (Shibata et al., 2001). This technological leap, which places robots in previously human-exclusive environments, raises crucial questions about how human behavior and social dynamics evolve in these new contexts.
The increasing ubiquity of robots in daily life leads to varied interactions, from short-term engagements such as aiding in purchasing processes (Donepudi, 2020) and information provision (Okafuji et al., 2022) to complex roles such as teaching children in preschools (Conti et al., 2020). Robots have expanded their supporting roles to include daily assistance (Rincon et al., 2018), neurorehabilitation training (Matarić et al., 2015), education (Xu et al., 2014), advice on lifestyle choices (Powers & Kiesler, 2006; Herse et al., 2018; Rossi et al., 2018; Ogawa et al., 2009), achievement of fitness goals (Kidd, 2008), and energy conservation (Ham & Midden, 2014).
In the future, robots are set to assume an increasing number of authority roles in areas such as education, law enforcement, and healthcare. This shift raises a critical question: how much will society accept machines as figures of authority? Exploring this transition is crucial, particularly for understanding the psychological and social implications of human–robot interactions within these new paradigms. As robots increasingly enter spaces traditionally dominated by human authority, their influence on our decision-making, daily habits, and social interactions warrants careful examination and understanding.
Authority and obedience in human-robot interaction
For a long time, the concept of obedience was considered primarily a component of human–human relations. However, as robotics has progressed, enabling robots to assume roles that involve authority, there is a need to expand this sphere to include human–robot relations. Consequently, various psychological challenges have emerged, including issues concerning trust in robotic authority and psychological resistance to accepting robots in roles traditionally held by humans (Groom & Nass, 2007; Maj, Sawicki, & Samson, 2023; Rantanen et al., 2018).
In an experiment by Saunderson and Nejat (2021), the robot could act either as a peer to the participant or as an authority figure controlling the distribution of monetary rewards or penalties based on task performance. The tasks, related to attention and memory, required the robot to persuade the participant to change their initial response. Results indicated that when the robot (NAO) acted as a peer, it was more effective in eliciting obedience to its instructions than when it assumed an authoritative role. This suggests that authoritative robots elicit negative reactions and reduce willingness to follow their instructions, whereas instructions from non-authoritative robots, not perceived as superior to the participants, are more likely to be accepted and obeyed.
Despite this, it seems that we succumb to robots even when they push us to behave in ways that we find embarrassing. A study by Menne (2017) focused specifically on this theme, analyzing reactions to commands from robots (NAO) such as “say something really insulting to me” or “imitate an ape with your hands, feet, and sounds”. After executing the commands, participants reported increased feelings of shame, and their reaction times were longer compared to receiving the same commands from humans. It also turns out that a robot's physical presence in the interaction significantly increases the likelihood that humans will carry out unusual commands, such as throwing a book into a trash can, compared to when the robot is only shown in a video recording (Bainbridge et al., 2011). A similar embarrassing task was used in research by Schneeberger et al. (2019), who focused on the extent of human obedience to virtual agents compared to human instructors. The participants were instructed by an embodied virtual agent or a human instructor via video chat to complete up to 18 increasingly stressful and embarrassing tasks (including putting a condom on a banana, galloping like a horse, or dancing the chicken dance). The study found that the level of obedience to the virtual agent was equivalent to that of the human instructor, with approximately 45% of participants completing all 18 tasks. Furthermore, performing these embarrassing tasks elicited comparable levels of stress and shame in participants, irrespective of who supervised the performance of the tasks.
We are also inclined to follow robots' directions even when their commands are firm or aggressive. In one experiment by Agrawal and Williams (2018), a PR2 robot was positioned at a building exit, acting as a guard. The robot utilized various verbal and non-verbal cues to convey its instructions, such as arm and torso movements and changes in tone of voice, aimed at emphasizing its authority. The results showed that approximately 60% of the participants complied with the robot's instructions, despite having no prior indication of the legitimacy of the robot's authority. Notably, participants who adhered to the robot's commands perceived it as less aggressive than those who did not comply. The findings suggest that obedience to the robot was driven more by trust in the robot than by the perception of its aggression or authority.
These diverse studies collectively suggest that human reactions to robotic authority are complex and influenced by multiple factors, such as the robot's physical embodiment and the nature of its role in the interaction. Understanding these dynamics is crucial as we navigate the increasing integration of robots into our social and professional spheres.
Obedience to a robot in the Milgram paradigm
In an experiment conducted in the 1960s by Stanley Milgram, the nature of obedience to authority was explored, demonstrating how individuals can be driven to perform actions against their moral beliefs under authoritative influence (Milgram, 1963). Conducted at Yale University with 40 men from the New Haven area, the experiment simulated a learning study in which participants, labeled as “teachers”, were persuaded to administer “electric shocks” to a “learner” (a confederate of the experimenters) for incorrect answers. Using a fake shock generator with increasing voltages, the participants were encouraged by an authority figure (an experimenter in a lab coat) to escalate the shocks to dangerous levels. Despite growing internal resistance and moral conflict, many participants complied with the authority's commands, raising profound questions about human behavior under authority. Milgram expanded this research with various modifications, such as changing the setting and informing participants of the “learner's” cardiac issues, but found that obedience levels remained high. This landmark study highlighted the unsettling ease with which individuals could be compelled to act against their moral convictions when influenced by perceived authority.
In subsequent years, Stanley Milgram (see Milgram, 1974; and the review of studies by Doliński and Grzyb, 2017) expanded the scope of his original experiment, introducing various modifications. Recently, Tomasz Grzyb, Konrad Maj, and Dariusz Doliński (2023) replicated the Milgram experiment using a robot. The experiment was faithfully recreated in a control variant in which the authority was a human, and in an experimental variant in which the human was replaced by a robot. The results revealed no differences in obedience between the conditions regardless of whether a human or a robot served as the authority figure: 90% of the participants were willing to administer electric shocks to the learner, whom they believed to be a real test subject, up to the end of the shock generator's scale.
It is worth mentioning, however, that classic studies based on the Milgram paradigm, involving experiments with the administration of electric shocks, seem to be characterized by a low level of situational realism (Doliński & Grzyb, 2017). Therefore, researchers have been seeking an alternative experimental scenario based on Milgram's principles, one more relatable to real-life situations, especially in the work environment (Haring et al., 2019; Haring et al., 2021). Taking a high-level view of the entire experimental procedure, it essentially examines whether people are willing to do various things that they do not want to do under the influence of a specific authority who, when the subject hesitates, applies verbal pressure of increasing intensity. In this approach, other types of tasks can also be used to create different situational contexts, including more ecologically valid ones, adapted to a given target group or specific sociocultural conditions.
Such a task was created by Haring et al. (2019), who asked subjects to identify hostile targets in synthetic-aperture radar (SAR) images, a challenging exercise due to the low resolution and the similarity of targets to non-target objects. They were coached by either a human or one of two types of robots (high or low in human-like appearance), who, similarly to Milgram's experiments, encouraged them to continue practicing the task beyond their initial desire to stop. Participants' compliance was measured by the duration for which they continued the task after the coach's prompt and by the total number of images processed. Results showed that participants persisted with the task significantly longer with human coaches, averaging 27.6 minutes, as opposed to 9.7 minutes with high human-like robots and 11.4 minutes with low human-like robots. Correspondingly, the number of images processed was higher in the human-coached condition, with an average of 120.6 images, compared to 31.1 images for high human-like robots and 44.9 images for low human-like robots. These results highlight a distinct preference for human authority in compliance tasks, even when the task involves the repetitive and challenging identification of targets in radar images. The effectiveness of human authority was confirmed by another study (Haring et al., 2021) comparing human obedience to commands from humanoid and non-humanoid robots versus human coaches. A clear disparity was observed: in Study 1, civilian participants complied with human coaches for an average of 21.5 minutes, markedly longer than the less than 10 minutes for robot coaches. Study 2, involving military cadets, echoed these findings, with compliance to human coaches lasting about 27.6 minutes, compared to roughly 7.8 to 11.4 minutes for robots. These results underscore a greater readiness to follow human instructions, highlighting the relatively limited authority and impact of robots in similar roles.
Aroyo et al. (2018) devised another experiment in the Milgram paradigm, focused on humans' willingness to comply with morally challenging requests. Participants interacted with a robot mimicking the appearance of Professor Hiroshi Ishiguro (a well-known professor in Japan), assessing its teaching capabilities. The highly realistic robot issued 14 incrementally morally difficult requests. If a request was initially unmet, it was repeated with increasing insistence. Participants' reactions were categorized as either negative (e.g., silence or refusal) or positive (e.g., agreement or action initiation). The results demonstrated that participants acknowledged the robot's authority and complied with commands, even those they deemed immoral.
An interesting example is the proposal by researchers from the University of Manitoba, who developed a "tedious task": monotonous work involving changing file extensions on a computer from "jpg" to "png" (Cormier et al., 2013; Geiskkovitch, Seo, & Young, 2015; Geiskkovitch, Cormier, Seo, & Young, 2016). This scheme involves no teaching process or electric shocks, but there is the pressure of authority compelling the subject to perform an unwanted task. In the first study based on this idea (Cormier et al., 2013), two experimental variants were introduced: one with a small humanoid robot (NAO) as the experimenter, and a control one in which the experimenter with authority was a human. Both the human and the robot were given the pseudonym "Jim" in the experiment. The task began with an initial set of ten files to change, and as the experiment progressed, each subsequent set given to the participant consisted of an increasing number of files. The sets contained 10, 50, 100, 500, 1000, and 5000 files, respectively, and the participants were not informed in advance about the total number of file sets, in order to increase the feeling of monotony. If the participant showed signs of reluctance to continue the task, the robot issued verbal encouragements to continue, modeled after the prods used in Milgram's experiment (1963). The time limit for the experiment was 80 minutes in total, after which the experiment was terminated. The results showed that the robot was recognized as an authority by 46% of participants, compared with 86% for the human (obedience measured as completing the file extension-changing task within the time limit).
In another study, conducted by Geiskkovitch, Seo, and Young (2015), three types of robot experimenters were utilized: a small humanoid robot (NAO), a non-humanoid disc-shaped robot (Roomba), and a robot resembling a computer server capable of emitting sounds and using LED lights. The robots differed primarily in their physical embodiment. As in the previous experiments, participants were introduced to the robot experimenter and allocated 80 minutes to complete their task, under the remote observation of a researcher. The study's key findings revealed that among 32 participants, 44% obeyed the robot experimenters. When comparing autonomous and remote-controlled conditions, participants in the autonomous scenario exhibited less propensity to protest. While the robot's physical embodiment did not significantly affect overall obedience levels, the depth of protests varied notably, with the server/machine eliciting the most intense protests, suggesting it was perceived as having greater authority. In a subsequent study by Geiskkovitch, Cormier, Seo, and Young (2016), a condition was introduced in which a human acted as the authority figure, alongside robotic devices similar to those used in the earlier study. Participants displayed a higher degree of obedience toward the human experimenter compared to robotic experimenters: 86% for the human, 46% for the humanoid robot, 38% for the non-humanoid disc-shaped robot, and 50% for the enhanced computer server. Consequently, a considerable percentage of participants also showed obedience to machines, albeit less than 50%. Both studies concluded that while the robots' embodiment and autonomy did not directly influence obedience, the perceived level of authority played a significant role.
This may seem counterintuitive; however, it suggests that recognizing a robot as an authority could raise participants' expectations of the robot's ethical sensitivity, potentially prompting them to express dissent and demand higher standards of accountability.
These studies collectively reveal that while robots can be perceived as authoritative in human-machine interactions, human authority tends to elicit stronger compliance, especially in monotonous and challenging tasks.