Article

Beyond Explicit Acknowledgment: Brain Response Evidence of Human Skepticism towards Robotic Emotions

Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Robotics 2024, 13(5), 67; https://doi.org/10.3390/robotics13050067
Submission received: 11 March 2024 / Revised: 23 April 2024 / Accepted: 24 April 2024 / Published: 28 April 2024
(This article belongs to the Special Issue Social Robots for the Human Well-Being)

Abstract

Using the N400 component of event-related brain potentials, a neurophysiological marker associated with processing incongruity, we examined brain responses to sentences spoken by a robot that had no arms or legs. Statements concerning physically impossible actions (e.g., knitting) elicited significant N400 responses, reflecting that participants perceived these statements as incongruent with the robot's physical condition. However, this effect was attenuated for participants who indicated that the robot could have hidden limbs, showing that expectations modify the way an agent's utterances are interpreted. When it came to statements relating to emotional capabilities, a distinct pattern emerged. Although participants acknowledged that the robot could have emotions, significant N400 responses were observed for statements about the robot's emotional experiences (e.g., feeling happy). This effect was not modified by participants' beliefs, suggesting a cognitive challenge in accepting robots as capable of experiencing emotions. Our findings thus point to a boundary in human acceptance of artificial social agents: while physical attributes may be negotiable based on expectations, emotional expressions are more difficult to establish as credible. By elucidating the cognitive mechanisms at play, our study informs the design of social robots that are capable of more effective communication to better support social connectivity and human well-being.

1. Introduction

In the growing field of human–robot interaction, the study of Artificial Social Agents (ASAs) represents a frontier where technology meets human social behavior. Exemplified by social robots, ASAs are not just computational tools but computer-controlled intelligent entities capable of autonomous behavior. They are designed with the ability to recognize human emotions and to synthesize knowledge and experiences, enabling them to engage in meaningful interactions with humans [1]. The present study explores nuances of human reactions to artificial as opposed to biological agents, aiming to uncover the subtle ways our social cognitive mechanisms adapt to this new category of social beings.
The evolution of human social interactions has culminated in a sophisticated set of rules that governs our social conduct [2]. These rules, which are critical for shaping our interactions, apply not only among humans but also to other ’social agents’, including artificial ones [3,4,5]. The successful integration of ASAs into human society, therefore, hinges on the alignment of their design with these social cognitive mechanisms [6,7,8].
At the heart of human social interactions lies an innate need for connection [9], influenced by internal motivations and external cues [10]. The human tendency to anthropomorphize, i.e., attributing human-like qualities to non-human entities, is a testament to these social inclinations [11]. In light of this, it is widely recognized that an ASA’s degree of human-like behavior and appearance increases its acceptance into human social groups [7,12]. Studies have demonstrated that humanoid robots can provoke physiological and emotional reactions, such as increased skin conductance in response to touching a humanoid’s ’intimate’ areas, akin to human–human interactions [13]. Interactions with anthropomorphized entities have also been shown to elicit stronger emotional connections, as seen in augmented joy and sympathy [14]. Moreover, humanoid robots are subjected to higher moral scrutiny compared to inanimate objects [15], and brain response studies suggest that reactions to humanoid robots can mirror those seen in human–human interactions [16,17].
Yet, a closer examination of the role of anthropomorphic traits in the acceptance of a social agent reveals a more intricate picture. Ref. [4] demonstrated that humanoid robots equipped with human-like functions—such as gripping objects, but with a two-fingered instead of a five-fingered hand, which, if analogous to a human capability, would be deemed less efficient—tend to provoke more hostile reactions from humans than robots endowed with clearly non-human features, like the ability to hover objects. Their results further suggest that the negative response is less about the action's outcome, which remained consistently effective across scenarios, and more about the suboptimal method of execution according to the human body standard. Ref. [18] confirmed that human group dynamics are more accurately mirrored in interactions with anthropomorphic robots. However, she also noted that anthropomorphic robots could be favored over humans when these robots were seen as part of the ingroup and humans were viewed as part of an outgroup. Finally, in the context of aggression, although humans perceive aggression towards both humanoid robots and humans as equally immoral, the victims' response to such aggression is judged differently: retaliatory actions are deemed moral when performed by humans but not when performed by robots [19]. Collectively, these findings challenge the assumption that a simple correlation exists between the human-like appearance of ASAs and their integration into human social settings. While a human-like appearance and behavior can facilitate acceptance, they can also provoke negative responses. The social categorization of robots can depend on perceived group affiliations, and not just on their anthropomorphic features. Critically, the finding by [19] also hints at a strong anthropocentric viewpoint, where non-human agents are denied the status typically reserved for humans. This distinction, emphasizing attributes traditionally considered exclusively human—such as moral reasoning, consciousness, and emotions—invites reflections on our willingness to acknowledge certain anthropomorphic qualities in non-human agents.
Against this background, our research proposes a nuanced examination of ASA design, positing that specific design choices, particularly in anthropomorphism, may inadvertently alienate human users. Although anthropomorphism can enhance relatability, it is crucial to recognize its limits. For example, programming robots to express emotions—a domain inherently human but outside a robot's genuine experience—may create dissonance among users, leading to discomfort. Our prediction is that such an overreach into the human domain will elicit a typical human brain response to incongruity. To empirically test this, we investigated whether the discrepancy between a robot's capabilities and its actual discourse triggers such a response. We measured this reaction through brain electrical activity, specifically observing the N400 component of event-related potentials (ERPs), a neurophysiological marker associated with integrating the meaning of a word into the semantic context established by preceding linguistic and non-linguistic information [20,21,22,23].
The N400 emerges as a negative deflection within the brain's ERPs, predominantly elicited by semantic or contextual incongruence [24]. Notably, this deflection exhibits a marginally higher amplitude over the right hemisphere compared to the left and is observed with greater prominence at central and parietal electrode locations [21,24]. Typically, the N400 tends to onset around 400 ms following the presentation of a critical word that renders an utterance or sentence incongruous with respect to semantic expectations established by the prior context: for instance, in the sentence "He spread the warm bread with socks.", the word 'socks', which creates a semantic incongruence, would likely elicit an enhanced N400 compared to a contextually expected element such as 'butter' [21]. The N400 effect is not only associated with language incongruity but also with action incongruity [25,26], and it has also been observed using pictures that were semantically incongruous with a prior object name [27,28]. Refs. [20,29,30] further revealed that semantic integration in language processing extends beyond the sentence level to encompass broader discourse context, showing that listeners use ongoing narrative information to interpret spoken words, thereby impacting the N400 effect regardless of the incongruity's origin.
In our research, we drew upon the protocol used by van Berkum et al. [30] to investigate whether humans exhibit a typical N400 response to incongruities when listening to a robot expressing experiences beyond its capabilities, namely emotions (e.g., "This morning, I finished an exam first, I was happy."), compared with a congruent control condition (e.g., "This morning, I finished an exam first, I was quick."). In order to validate the applicability of the N400 paradigm in such a context, we first examined a more obvious discrepancy: the mismatch between a robot's discourse and its physical capabilities, i.e., an armless robot discussing knitting (Gigandet et al., 2023). This pilot study tested the paradigm's boundaries, including the creation of well-balanced linguistic stimuli for both congruent and incongruent discourse versions. In a second step, we then applied the paradigm to explore the subtler discrepancy between a robot's discourse and the emotional capacities it claims to have.

2. Pilot Study

The social robot Buddy from Blue Frog Robotics, which has no arms or legs (see Figure 1), served as the ASA. In addition to the experimental condition that contrasted congruent and incongruent utterances, a control condition was employed in which participants were exposed only to the robot's head, ensuring that none of the sentences conflicted with the robot's visible physical appearance. In the experimental condition (full body), we predicted an effect of sentence congruency on the ERPs, with a higher amplitude of the negative-going N400 component to incongruent sentences. In the control condition (head only), we did not expect any difference between the two sentence types, since the robot's missing limbs were not visible. In addition to the ERP measures, participants rated a set of statements probing their impressions of the robot's perceived capabilities, including two items asking whether they considered it plausible that the robot might be concealing hidden arms or legs despite its outward design. We investigated this possibility because robots in popular culture are often depicted as having unexpected abilities and features. For instance, in the movie Wall-E [31], the character 'Eve' exhibits such advanced, hidden features, as does 'Optimus Prime' in the Transformers franchise [32]. This particular aspect was designed to explore whether such beliefs could alter the perception of potential incongruencies between the physical characteristics of the robot and its utterances.

2.1. Method

2.1.1. Participants

A total of 56 healthy right-handed French native speakers (37 women, 17 men, 1 nonbinary) between the ages of 18 and 58 (M = 24.25, Mdn = 23) volunteered to participate in the experiment. Thirty-two participants were assigned to the condition in which the robot was presented in its entirety and 24 participants to the condition in which only the robot’s head was visible.
The sample population was free of neurological or psychiatric conditions and was not receiving neuroleptic medication for medical purposes. Individuals with eye conditions such as myopia were allowed to participate and were permitted to keep their glasses on. The Edinburgh handedness inventory [33] was conducted before placing the EEG cap to confirm the participants’ handedness. Participants were paid €15 for their participation.

2.1.2. Materials, Method, and Analyses

Robot Videos

The robot Buddy, which we named 'Lou', is depicted in Figure 1. A series of 13 videos were recorded, showing the robot speaking (i.e., making mouth movements) for durations ranging from 2 s to 8 s in 0.5 s increments (2 s, 2.5 s, 3 s, etc.). This was to ensure that the audio of the sentences, recorded separately, could be synchronized with the robot's mouth movements. The videos were captured using an iPhone 13 Pro mounted on a tripod, against a neutral black background in a room, with the robot placed on a table. They were recorded in 4K resolution at 50 frames per second, enabling the creation of two types of video with a single crop: videos showing the entire robot (BODY condition) and videos focusing solely on the robot's head (HEAD condition). Figure 1 gives screenshots of videos from each of the two conditions.

EEG Recordings

EEG signals were recorded at an initial sampling rate of 2048 Hz using a 64-channel Biosemi ActiveTwo system. A conductive gel was applied to each site to ensure good electrode-scalp contact, with electrode offsets kept within the −20 mV to 20 mV range during the experiment. To track artifacts, three additional electrodes were placed, two close to the mastoids and one below the left eye. The continuous signal was then filtered in the 0.5–30 Hz band and downsampled to 200 Hz. Independent Component Analysis (ICA) using the AMICA algorithm [34] allowed artifacts such as eye blinks and muscular and cardiac activity to be identified and removed. Finally, the cleaned continuous data were segmented into epochs starting 150 ms before and extending to 1200 ms after the onset of each target word, using the 150 ms pre-stimulus interval as a baseline for ERP analysis. The different data processing steps were carried out using the EEGLAB version 2023.0 [35] and MNE-Python version 1.4.0 (2023-05-10) [36] toolboxes.
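For readers who wish to set up a comparable pipeline, the following minimal sketch outlines the main preprocessing steps in MNE-Python. File names, trigger codes, and component choices are assumptions made for illustration; in particular, AMICA itself is an EEGLAB plugin, so the sketch substitutes MNE's built-in Infomax ICA.

    import mne
    from mne.preprocessing import ICA

    # Load one participant's Biosemi ActiveTwo recording (hypothetical file name)
    raw = mne.io.read_raw_bdf("sub-01.bdf", preload=True)
    raw.filter(l_freq=0.5, h_freq=30.0)      # 0.5-30 Hz band-pass
    raw.resample(200)                        # downsample to 200 Hz

    # ICA-based artifact removal; the study used AMICA in EEGLAB,
    # here MNE's Infomax variant stands in for illustration
    ica = ICA(n_components=30, method="infomax", random_state=97)
    ica.fit(raw)
    ica.exclude = [0, 1]                     # components marked as blinks/muscle by inspection
    raw = ica.apply(raw)

    # Epoch from -150 ms to 1200 ms around target-word onset,
    # using the pre-stimulus interval as baseline
    events = mne.find_events(raw)            # triggers sent at target-word onset
    event_id = {"congruent": 11, "incongruent": 12}   # assumed trigger codes
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.15, tmax=1.2, baseline=(-0.15, 0.0),
                        preload=True)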

Sentence Material

We designed 60 sentences in French, covering a range of everyday topics, from simple gestures such as picking up an object or petting a cat to hobbies such as gardening or running (see Appendix A Table A1). For each sentence, we built two alternative endings. The first type of ending comprised a target word that was congruent with the robot's physical appearance in both the HEAD and BODY conditions (e.g., "I hope one day to go to the very top of the Eiffel Tower. To go up, I'll take the lift."). The second type of ending had a target word that was incongruent in the BODY condition, referring to actions that involve arms/hands or legs (e.g., "I hope one day to go to the very top of the Eiffel Tower. To go up, I'll take the stairs."). Such incongruence could not occur in the HEAD condition, as participants could not see that the robot lacked extremities.
As outlined in Table 1, the target words in both conditions (e.g., lift/stairs) were controlled for various linguistic properties taken from the database LEXIQUE [37]. It is important to emphasize that cloze probability, i.e., the likelihood of a sentence concluding with a specific word based on context, exerts an influence on the N400 component [38]. A lower cloze probability signifies a less anticipated word, demanding more effort in semantic integration and thus potentially evoking a more pronounced N400 response. This underscores the need for precise control. To manage this variable, each of the 60 French sentences was evaluated by 25 human raters who were asked to select one of two alternative words to complete the sentence, for instance: "[…] To go up, I'll take the…" (lift/stairs). Raters participated on a voluntary basis and were recruited locally among students and researchers from the Université de Lille. Items with imbalanced cloze probabilities were substituted and reevaluated by a separate set of 25 raters. The definitive stimulus lists were determined after five rounds of testing, each involving at least 25 different raters. Because of the inherent difficulty of achieving perfect cloze-probability balance for all sentences, some sentences were accepted with a lower-bound cloze probability of 0.30; averaged across the entire sentence list, however, the cloze probability was 0.50. Owing to the constraints imposed by balancing cloze probability, it was not possible to achieve perfect balance for certain other linguistic variables, such as the number of letters, syllables, and phonemes. Consequently, the HEAD condition played a pivotal role in validating our sentence material: we considered the word stimuli well balanced if the ERPs in the HEAD condition, which had no sentences conflicting with the robot's physical appearance, showed no significant differences between the two sentence conditions. A final point that needed attention was the average length of our target words (7–8 letters), which considerably exceeded the length typically used in an N400 paradigm (4–5 letters; e.g., [38]). As words become longer, the phonological Uniqueness Point (UP; [39]) moves further away from the word beginning. The phonological UP refers to the moment in the auditory processing of spoken language at which a word can be uniquely identified based on its phonological properties, before it is fully pronounced [39]. For instance, the UP for the word "congruence" (kɔˈngɹuʌns) occurs after the sounds making up "cong-" (kɔˈng), because no other words in the English language start with these phonological elements. Hence, listeners do not need to hear an entire word to understand it but can often predict the word partway through. The position of the UP in a word affects the temporal delay of the N400 peak [38]. Since the UPs in our target words occur on average at or after the fifth letter (see Table 1), we expected the N400 peak to occur later than described in most studies using this paradigm.
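To make the cloze-balancing procedure concrete, the short sketch below (with a hypothetical item name and made-up rater choices, not the study's data) computes per-item cloze probabilities from the forced-choice ratings and flags endings that fall below the 0.30 lower bound.

    from collections import Counter

    def cloze_probabilities(ratings):
        """ratings: dict mapping item id -> list of words chosen by the 25 raters."""
        probs = {}
        for item, choices in ratings.items():
            counts = Counter(choices)
            n = len(choices)
            probs[item] = {word: count / n for word, count in counts.items()}
        return probs

    # Illustrative responses for one sentence frame ("[...] To go up, I'll take the ...")
    ratings = {"eiffel_tower": ["lift"] * 13 + ["stairs"] * 12}

    for item, word_probs in cloze_probabilities(ratings).items():
        for word, prob in word_probs.items():
            flag = " -> re-test with new raters" if prob < 0.30 else ""
            print(f"{item}/{word}: cloze = {prob:.2f}{flag}")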
The 2 × 60 sentences were then divided into two equivalent lists, each comprising 30 sentences with congruent endings and 30 with incongruent endings. Each list was presented to the same number of participants, so that each participant was exposed to only one version of each sentence (either with a congruent or an incongruent ending), while across participants every sentence was heard equally often in its congruent and its incongruent version.

Sentence Recording

The audio recordings were made with a female speaker, using a microphone (Shure SM58), an audio interface (UMC202HD), a laptop computer (MacBook Pro M1 Max 2021, Apple, Cupertino, CA, USA) and the software Audacity 3.2. The recordings were edited to remove background noise and long silences.
The use of a recorded human voice rather than a text-to-speech (TTS) voice module was preferred in order to minimize potential variations in the perception of articulation and intonation, which could otherwise influence participants’ responses, particularly the N400 component. In addition, this approach enabled the precise control of the timing and presentation of the verbal stimuli, which is crucial for EEG.

Video Implementation

For video editing, audio and video were synchronized using Adobe Premiere Pro (October 2022, version 23.0) and exported in 1080p at 50 frames per second. Each video was structured as follows: the video began with a one-second silence, then the robot uttered the sentence, and the video ended with a one-second silence. The video display for participants on the computer screen was created using PsychoPy [40].
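As an illustration of how such a presentation routine can be implemented, the sketch below plays one stimulus video between the black-screen intervals described later in the procedure. The file name and window settings are assumptions, and the exact movie-stimulus class varies across PsychoPy versions.

    from psychopy import visual, core
    from psychopy.constants import FINISHED

    win = visual.Window(fullscr=True, color="black", units="pix")
    movie = visual.MovieStim3(win, filename="lou_sentence_01.mp4")  # hypothetical stimulus file

    # 1000 ms black screen before the video (the 300 ms warning tone is omitted here)
    win.flip()
    core.wait(1.0)

    # play the video of the robot uttering the sentence until it ends
    while movie.status != FINISHED:
        movie.draw()
        win.flip()

    # 2500 ms black screen after the video
    win.flip()
    core.wait(2.5)
    win.close()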

Estimation of Robot's Cognitive and Physical Abilities

To assess participants’ perceptions of the robot, our study included the use of a translated version of the Ho and MacDorman [41] questionnaire. This choice was motivated by the clear distinction it provides between the dimensions of Humanness, Eeriness and Attractiveness: Each of these three dimensions comprises numerous items, capturing various aspects of human perception and emotional response, enabling in-depth measurement of attitudes towards the robot. We also used five additional statements that were tested by Nazir et al. [4]. The statements focused on the following qualities:
  • Imagination: “Lou can imagine and invent from its experiences”
  • Intelligence: “Lou can adapt to its environment and interact with others”
  • Independence: “Lou is autonomous and does not depend on others”
  • Creativity: “Lou can find original solutions beyond its experiences and create new ones”
  • Talkativeness: “Lou talks a lot and likes to talk a lot”
These five traits were taken from the work of Haslam et al. [42] on essentialist beliefs concerning human personality. Essentialist beliefs refer to the practice of considering a trait to be innate and biologically based and not acquired (see, e.g., [43]). In the work of Haslam et al. [42], personality traits are essentialized if they are regarded as aspects of human nature.
Participants responded to these statements by positioning the computer cursor on a scale ranging from 0 (meaning “Strongly Disagree”) to 100 (“Strongly Agree”). Finally, using the same 0–100 scale, participants were also asked to estimate their beliefs about the robot potentially having concealed arms and legs. The statements were as follows:
  • Hidden arms: “Lou has concealed arms”
  • Hidden legs: “Lou has concealed legs”

Statistical Analyses

For the analysis of ERPs, a Mixed Linear Model (MLM) was chosen because it accommodates interindividual variability and provides robust estimates despite the non-normality of the distributions and the heterogeneity of variances. Due to the small size of certain sub-samples, and for a nuanced assessment of within-subject effects and interactions, repeated measures ANOVA was used as a complementary analysis following the MLM. For the questionnaire, when the data did not follow a normal distribution, the Mann–Whitney test was used. In cases where the data were normally distributed and had equal variances, confirmed by Levene's test, a Student's t-test for independent samples was used.
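The analysis logic can be summarized in a few lines of Python. The sketch below is a minimal outline assuming long-format data files and column names ("amplitude", "congruence", "group", "participant", "score") that are not taken from the study materials.

    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # Mean ERP amplitudes (500-700 ms window) in long format: one row per participant x condition
    erp = pd.read_csv("erp_mean_amplitudes_500_700ms.csv")     # hypothetical file

    # Mixed Linear Model: CONGRUENCE x GROUP as fixed effects, participants as random effects,
    # with a by-participant random slope for congruence
    mlm = smf.mixedlm("amplitude ~ congruence * group", erp,
                      groups=erp["participant"], re_formula="~congruence")
    print(mlm.fit().summary())

    # Questionnaire ratings: Mann-Whitney U when normality is violated,
    # Student's t when Levene's test supports equal variances
    q = pd.read_csv("questionnaire_scores.csv")                # hypothetical file
    body = q.loc[q["group"] == "body", "score"]
    head = q.loc[q["group"] == "head", "score"]
    print(stats.mannwhitneyu(body, head))
    print(stats.levene(body, head))
    print(stats.ttest_ind(body, head))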

2.1.3. Procedure

The study was conducted in the EEG laboratory of the EQUIPEX IrDive platform in Tourcoing (France). First, participants were given an information sheet and asked to provide written informed consent, as per the ethical guidelines. Upon providing consent, participants were escorted into the experimental room, the setup of which is depicted in Figure 2b. Participants were presented with a picture of the robot's head and asked whether they were familiar with the robot. Those who had already encountered the robot were automatically assigned to the BODY condition to prevent their prior knowledge from influencing the data; this selective assignment explains the higher number of participants in the BODY condition than in the HEAD condition.
The EEG cap was positioned on participants' heads, after which they were left alone in the room, with instructions to minimize movement and remain as still as possible while the robot spoke, to reduce the risk of EEG signal interference from jaw muscles or eye blinks. Participants initiated the task by pressing a key on the keyboard and proceeded to watch their assigned list of 60 videos, a process that took approximately 12 min. After the task was completed, the electrodes were removed, and participants' perceptions of the robot were then assessed using the questionnaire and the ratings of the different statements.
The experiment started with a 20-s instructional segment delivered by the robot (repeating what the experimenter had said previously). Depending on the experimental group, the robot was presented in its entirety or only its head was visible. The instructions were as follows (English translation):
“Hello, my name is Lou. Thank you for participating in this experiment. I will be speaking to you while you wear a cap that measures your brain activity. Starting from each beep sound, please remain as still as possible and avoid blinking. You may blink again when the screen turns black, which happens each time I stop talking.”
During this instruction, a warning tone was emitted when the robot said the word 'beep', serving as a practical example for the participants. This warning tone was a 300 Hz sound lasting 300 ms. Following these instructions, a text appeared on the screen informing the participants that they could start the experiment by pressing the spacebar. Upon pressing the spacebar, the experiment started, and the screens were displayed in the following sequence:
  • A 1000 ms black screen, initiated with a 300 ms warning tone
  • A video of the robot uttering the sentence
  • A 2500 ms black screen
The videos with the speaking robot were presented in a different random order for each participant, while ensuring that there were no more than three consecutive videos of the same type (incongruent or congruent). At each step of the routine, a trigger was sent to the EEG acquisition system so that the EEG data could be synchronized with the videos for subsequent data analysis. Each sequence had its own trigger (1000 ms black screen with warning tone, the video of the robot, 2500 ms black screen), and a different trigger was sent depending on whether the target word was congruent or incongruent. During data analysis, only the target word triggers were used: these triggers allowed the EEG data to be broken down into segments—epochs—for each video, so that the data could be analyzed according to condition (congruent or incongruent). Figure 3 illustrates a trial sequence for the experiment.
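The constraint on the presentation order can be implemented with a simple rejection-sampling shuffle, as in the sketch below (trial identifiers are placeholders; trigger sending is omitted).

    import random

    def constrained_order(trials, max_run=3):
        """Shuffle (video_id, condition) tuples until no more than max_run
        consecutive trials share the same condition."""
        while True:
            order = trials[:]
            random.shuffle(order)
            run, prev, ok = 0, None, True
            for _, cond in order:
                run = run + 1 if cond == prev else 1
                prev = cond
                if run > max_run:
                    ok = False
                    break
            if ok:
                return order

    # Illustrative trial list: 30 congruent and 30 incongruent videos
    trials = [(f"vid_{i:02d}", "congruent") for i in range(30)] + \
             [(f"vid_{i:02d}", "incongruent") for i in range(30, 60)]
    print(constrained_order(trials)[:5])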

2.2. Results

2.2.1. ERPs

Preliminary Analyses

Given that the HEAD condition served as a control for the quality of our sentence material, a preliminary analysis centered on this condition. For illustration, the left panel of Figure 4 plots the ERP waveforms at the central electrodes Cz and Pz for the two sentence types ('congruent' and 'incongruent') in the HEAD condition. No discernible differences were observed between the two sentence types, indicating that they did not differentially affect participants' ERPs and thus confirming that the sentences were well balanced. However, as expected, the peak of the negative-going ERP component in our experiment occurred more than 100 ms later than what is typically observed in standard N400 experiments. To verify whether this delay is indeed related to the late phonological Uniqueness Point (UP) in the target words, we selected from the 120 target words those with a UP at the third or fourth letter of the word and contrasted them with target words whose UP occurs at the sixth or seventh letter.
Consistent with our hypothesis, the latency of the peak in the negative-going ERP component was affected by the UP: the later the UP occurred in a word, the later the average N400 peak. For words with a UP at the sixth or seventh letter, the average N400 peak latency was 656 ms (amplitude = −3.878 µV), while for words with a UP at the third or fourth letter, it was 573 ms (amplitude = −2.768 µV). A Mann–Whitney test revealed that this difference was significant (U = 36.0, p = 0.0136). Therefore, we can confidently attribute the observed delay to the specific characteristics of our stimuli, rather than to an alternative underlying cognitive mechanism of the ERP component. A similar delay was observed in the study by van Berkum et al. [30], who also used rather long spoken target words.
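For transparency, the peak-latency comparison can be expressed as follows; the sketch assumes item-level ERP traces (here replaced by random placeholder arrays) and is not a reproduction of the study's actual extraction routine.

    import numpy as np
    from scipy.stats import mannwhitneyu

    def peak_latency(erp, times, tmin=0.3, tmax=0.9):
        """Latency (s) of the most negative point of an ERP trace within [tmin, tmax]."""
        mask = (times >= tmin) & (times <= tmax)
        return times[mask][np.argmin(erp[mask])]

    times = np.arange(-0.15, 1.2, 0.005)                 # 200 Hz sampling, as in the recordings
    rng = np.random.default_rng(0)
    early_up_erps = rng.normal(size=(30, times.size))    # placeholder traces, UP at 3rd/4th letter
    late_up_erps = rng.normal(size=(30, times.size))     # placeholder traces, UP at 6th/7th letter

    early = [peak_latency(erp, times) for erp in early_up_erps]
    late = [peak_latency(erp, times) for erp in late_up_erps]
    print(mannwhitneyu(early, late))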

Main Analyses

The right panel in Figure 4 plots the ERP waveforms at electrodes Cz and Pz for the two sentence types in the BODY condition. As evident from the figure, a clear N400 effect is observed, with larger amplitudes for sentences that end with target words that are incongruent with the physical characteristics of the robot. For the statistical test, and based on the preliminary analysis, we opted not to use mean amplitude values (computed for each subject and condition) in the standard N400 latency range of 300–500 ms post-target-word onset. Instead, following the approach used by van Berkum et al. [30], we chose a time window of interest extending from 500 to 700 ms, focusing on the signals of 13 representative electrodes, illustrated in Figure 2a.
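Given the epochs produced by the preprocessing sketch above, the per-condition mean amplitude in this window can be computed along the following lines; the 13 channel names listed are placeholders, since the exact montage of Figure 2a is not reproduced here.

    # Assumed set of 13 centro-parietal electrodes (placeholders for those in Figure 2a)
    picks = ["Fz", "FC1", "FC2", "Cz", "C1", "C2", "CP1", "CP2", "CPz",
             "Pz", "P1", "P2", "POz"]

    def window_mean(epochs, condition, tmin=0.5, tmax=0.7):
        """Mean amplitude (in microvolts) over the selected channels and time window."""
        evoked = epochs[condition].average(picks=picks)
        data = evoked.copy().crop(tmin=tmin, tmax=tmax).data   # shape: (n_channels, n_times)
        return data.mean() * 1e6                               # volts -> microvolts

    # 'epochs' is the mne.Epochs object built in the preprocessing sketch
    # congruent_mean = window_mean(epochs, "congruent")
    # incongruent_mean = window_mean(epochs, "incongruent")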
The averaged amplitudes across the 500–700 ms time window at the 13 electrodes are depicted in Figure 5: the left panel plots data for the HEAD condition and the right panel for the BODY condition. As expected, the BODY condition revealed a clear N400 effect across all electrodes, where sentences concluding with a word incongruent with the physical capacity of the robot elicited a more pronounced negative deflection compared to those that were congruent. In the HEAD condition, no such effect was seen.
The data were analyzed using a Mixed Linear Model (MLM). The independent variables (fixed effects) were CONGRUENCE (congruent vs. incongruent) and GROUP (body vs. head), including their interaction. Participants were treated as random effects, to account for baseline variability and individual differences, and random variation in the response to CONGRUENCE was modeled for each participant. A significant interaction between CONGRUENCE and GROUP was revealed (Coef. = 0.617, SE = 0.225, z = 2.741, p < 0.01), indicating that the influence of CONGRUENCE on N400 amplitude was contingent upon the group to which participants were assigned. Specifically, this interaction suggests that the difference in N400 amplitude between incongruent and congruent stimuli is less pronounced in participants who viewed the robot's head than in those who viewed the entire body. The group variance was 0.599 (SE = 0.143), reflecting the variability of responses between participants. This finding implies that participants in the HEAD condition experienced a moderated effect of incongruence, with a relatively smaller increase in N400 amplitude for incongruent stimuli, and highlights how exposure to the whole body or just the head of the robot modulates the N400 component (see Figure 6 for the interaction diagram).

Explicit Ratings of the Robot's Cognitive and Physical Abilities

Figure 7 shows the mean scores for the five statements that explored participants’ perceptions of the robot (Imagination, Intelligence, Creativity, Independence and Talkativeness), along with the corresponding 95% confidence intervals. A Mann–Whitney U test showed that none of the five statements distinguished between participants in the BODY and the HEAD conditions. A two-tailed two-sample t-test also failed to reveal any significant difference between the mean composite scores of the five statements of the two groups, t(54) = 0.836, p = 0.406. These results suggest that participants’ perceptions of the robot, based on the essentialized human personality descriptors, did not differ in the two groups.
Concerning the Ho and MacDorman questionnaire [41], a repeated measures ANOVA with INDICATOR (humanness, attractiveness and eeriness) and GROUP (body vs. head) as the within- and between-subject factors showed a significant effect of INDICATOR (F = 136.1385, p < 0.001, ηp² = 0.716). However, the effect of GROUP was not significant (F = 3.5852, p = 0.0637, ηp² = 0.0623), and the interaction between GROUP and INDICATOR was not significant either (F = 2.0471, p = 0.1340, ηp² = 0.0365). Table 2 gives the composite scores for the Humanness, Attractiveness and Eeriness indices.
Figure 8 presents boxplots comparing perceptions of the dimensions of Humanness, Eeriness and Attractiveness in the BODY (light blue) and HEAD (dark blue) conditions.
Finally, Table 3 provides a summary of how participants perceived the possibility of the robot having hidden limbs in the HEAD and BODY conditions. The data reveal that, despite evidence to the contrary, many participants in the BODY condition rated this possibility as greater than zero. A reliability test via Cronbach's alpha indicated a high internal consistency (α = 0.827) among participant responses regarding the robot's hidden limbs, substantiating the reliability of these measures.
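For reference, Cronbach's alpha for the two hidden-limb items can be computed directly from its definition, as in the sketch below (the rating matrix shown is a made-up placeholder, not the study's data).

    import numpy as np

    def cronbach_alpha(items):
        """items: array of shape (n_participants, n_items) holding the 0-100 ratings."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Placeholder ratings: columns = "hidden arms", "hidden legs"
    ratings = np.array([[0, 0], [60, 55], [33, 40], [86, 80], [5, 10]])
    print(f"alpha = {cronbach_alpha(ratings):.3f}")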

2.2.2. The Impact of Participants' Beliefs in the BODY Condition

In an exploratory analysis, we therefore investigated whether participants' beliefs in the BODY condition could influence brain responses. The belief in the robot's hidden limbs was anticipated to attenuate the perceived incongruence between its appearance and speech, thus reducing the N400 effect. Furthermore, in such cases, we anticipated that due to the reduced dissonance, the overall perception of the robot would be more positive.

ERPs

From the total of 32 participants in the BODY condition, only 12 participants unequivocally asserted that the robot had no hidden arms or legs, giving a rating of 0. In contrast, scores among the remaining 20 varied from 5 to 86, indicating that they envisaged the robot might have hidden limbs.
To assess the potential impact of participants' beliefs on the ERPs, we contrasted the data from the 12 participants who did not believe in the robot's hidden limbs ("non-believers"), who scored 0, and the 12 participants who most strongly believed that the robot could have hidden limbs ("believers"), with ratings from 33 to 86 (M = 59.458, Mdn = 60.5). (Note that preliminary data for this comparison, with 2 × 10 participants, were reported in the proceedings of ARSO 2023 [44].) Figure 9 plots the mean amplitudes within the 500–700 ms interval across the 13 electrodes for both congruent and incongruent sentences. As the figure shows, the group of participants who categorically excluded the possibility of the robot having hidden limbs demonstrated significantly greater N400 amplitudes in response to sentences that were incongruent with the robot's appearance compared to sentences that were congruent. Conversely, participants who considered the possibility of hidden limbs showed a smaller difference in ERP amplitudes between the two types of sentences.
A Mixed Linear Model with BELIEFS (believer vs. non-believer) and CONGRUENCE (congruent vs. incongruent) as fixed effects was used to analyze the data. Participants were treated as random effects and random variation in the congruence response was modeled for each participant. The model failed to reveal a significant interaction between the factors CONGRUENCE and BELIEFS (Coef. = 0.608, SE = 0.374, z = 1.625, p = 0.104), probably because of the limited number of participants in each group. The group variance was 0.535 (SE = 0.193). However, when separate repeated measures ANOVAs were carried out for each group, a highly significant main effect of CONGRUENCE was found for the "non-believers" (F(1,11) = 12.192, p = 0.005), while no significant difference was found for the "believers" (F(1,11) = 3.478, p = 0.089) (see Figure 10 for the interaction diagram). Hence, we cautiously consider the possibility that participants' beliefs about hidden extremities modulate the strength of the N400 effect to incongruence between a robot's physical capabilities and its utterances.

Explicit Ratings of the Robot's Cognitive and Physical Abilities as a Function of Beliefs

Figure 11 contrasts the scores of the "believers" and the "non-believers" for the five statements that explored participants' perceptions of the robot (Imagination, Intelligence, Creativity, Independence and Talkativeness), along with the corresponding 95% confidence intervals. For all statements, higher scores were seen in the "believer" group, consistent with our assumption that perceived dissonance between the robot's capacity and its discourse could trigger a more negative perception of the robot. A two-sample t-test (one-tailed) revealed a significant difference in the mean composite rating scores, t(22) = 1.865, p = 0.037. Participants from the "non-believer" group had generally lower scores (M = 54.766; SD = 19.066) than those of the "believer" group (M = 70.216; SD = 21.441). This outcome indicates that a more positive general evaluation of the robot was given by those who remained open to the possibility of hidden limbs.
Concerning the Ho and MacDorman questionnaire [41], Mann–Whitney U tests did not reveal significant differences between the two groups in any tested dimension, suggesting that beliefs about the presence of hidden limbs do not significantly influence the overall perception of the robot on these indicators. Table 4 gives the composite scores for the Humanness, Attractiveness and Eeriness indices.
Figure 12 presents boxplots comparing perceptions of the dimensions of Humanness, Eeriness and Attractiveness in the ‘non-believer’ (light blue) and ‘believer’ (dark blue) groups.

2.3. Discussion

Our pilot study demonstrates that a mismatch between an agent’s physical capacities and its verbal statements can trigger the neurophysiological marker associated with processing semantic incongruity [21]. Specifically, when participants viewed the robot’s entire body, sentences depicting actions impossible for the robot to perform elicited a larger N400 deflection than sentences depicting possible actions. When participants only saw the robot’s head, this phenomenon was neutralized because the conflicting elements (i.e., the missing limbs) were not visible. This absence of effects in the HEAD condition suggests that our ‘congruent’ and ‘incongruent’ sentences were equivalent, despite some unbalanced linguistic variables. Therefore, we can attribute the N400 effect in the BODY condition to perceived incongruency with the speaking agent. Our findings thus reinforce that the N400 reflects the integration of multimodal information, including visual and contextual cues, in the semantic evaluation of discourse (see [20]).
Critically, an exploratory analysis of the data from the BODY condition indicates a modulation of the N400 effect by the participants' individual beliefs, specifically regarding the robot's potential for hidden arms and legs. Participants who considered the possibility of such hidden limbs showed a reduced N400 effect for sentences with incongruent endings, indicating that they found these sentences more plausible given their beliefs. Moreover, these beliefs appeared to influence the perception of the robot, as evidenced by ratings using essential human personality descriptors (Imagination, Intelligence, Creativity, Independence and Talkativeness; based on [42]). Participants identified as 'non-believers' were less likely to attribute human-like qualities to the robot. By contrast, the Ho and MacDorman questionnaire [41] did not distinguish between participant groups. The belief-driven modulation of the N400 suggests that individuals' mental frameworks affect the processing of multimodal information. Such flexibility underscores the potential of the N400 paradigm to further investigate how humans interpret emotional expressions from robots, given that emotions are often inferred rather than directly observed.

3. Main Study

Our main study aimed to determine how individuals react to a robot's verbal expressions about emotions, compared to its discussions of non-emotional topics, considering that emotions are beyond the experiential capacity of artificial agents. If a human listener does not attribute emotional experience to a robot, we should observe a characteristic N400 response to the robot's attempts at discussing emotions. As in the pilot study, we included a control condition in which the same sentences were spoken by a human. When expressed by a human, there should be no significant difference in the N400 response between emotional and non-emotional content.

3.1. Method

The main study contrasts videos featuring a robot (ROBOT condition) with those featuring a human speaker (HUMAN condition) delivering emotionally laden and emotionally neutral sentences (see Figure 13). Note that the robot was framed similarly to the human speaker, so that its lack of extremities was not visible; this neutralized the influence of this factor on participants' perception. Unless otherwise specified, the main study adhered to the same conditions as the pilot study in terms of materials, methods, analyses, and procedures.

3.1.1. Participants

The population comprised 50 healthy native French speakers (42 women, 6 men, 2 nonbinary), aged between 18 and 58 years (M = 23.16, Mdn = 21), recruited under criteria consistent with the Pilot Study. None of the participants had participated in the Pilot Study.

3.1.2. Materials

Agent Videos and Sentence Material

Although a different human voice from that of the Pilot Study was used to dub the videos, it was relatively close in timbre, rhythm, and intonation. Specifically, this voice was that of the human actress featured in the HUMAN condition, and the same audio recordings were used in both the HUMAN and the ROBOT conditions. This allowed us to isolate the effect of agent appearance (robotic vs. human) on perceived emotion attribution, providing a robust basis for evaluating the impact of the agent type without the influence of vocal variation. The narratives in the videos were designed to refer to a wide range of situations and emotional reactions, reflecting the experiences and reactions of everyday life (see Appendix A Table A2). They covered topics such as unexpected challenges, social interactions, and personal accomplishments (e.g., "Hier on m'a invité à participer aux tâches ménagères, j'étais ravie.", approximate English translation "Yesterday I was invited to help with the housework, I was delighted."). These sentences were contrasted with an emotionally neutral counterpart (e.g., "Hier on m'a invité à participer aux tâches ménagères, j'étais opérationnelle.", approximate English translation "Yesterday I was invited to help with the housework, I was ready to go."). The sentences with emotional content covered 6 distinct emotions (happiness, fear, anger, sadness, disgust and surprise), with 10 sentences dedicated to each.
Target word characteristics are given in Table 5. Note that the cloze probability differed significantly between the two sentence conditions. However, given the small absolute difference (0.49 versus 0.51), we did not expect substantial discrepancies between the two conditions due to this variable. As in the pilot study, the HUMAN condition served to validate our sentence material.

Estimation of Agents' Cognitive Abilities

As in the Pilot Study, we used the five items of Nazir et al. [4] as well as the Ho and MacDorman questionnaire [41] in our Main Study for both agents (ROBOT and HUMAN): the aim was to have a comprehensive experimental plan that examines human-likeness in both artificial and human agents. We also aimed to establish a baseline of human likeness against which artificial agents could be compared. Note that a direct application of the Ho and MacDorman questionnaire to evaluate humans may not be entirely appropriate, as the scale is tailored to responses elicited when facing non-human entities; the results for humans should, therefore, be interpreted with caution. Participants' beliefs about the robot potentially having emotional capacities (expressing and feeling emotions) were probed by the two statements:
  • Express emotions: Lou can express emotions
  • Feel emotions: Lou can feel emotions

3.2. Results

3.2.1. ERPs

Figure 14 plots the ERP waveforms for the congruent and incongruent sentences at electrodes Cz and Pz for the two sentence types in the HUMAN condition (left) and the ROBOT condition (right).
Figure 15 plots the averaged ERP amplitudes for the congruent and incongruent sentences over the 500–700 ms time window for the 13 electrodes of interest.
The results unexpectedly showed a slight difference between 'congruent' and 'incongruent' sentences in the HUMAN condition (control condition), which is likely attributable to some unbalanced linguistic variables in our sentence material. Note, though, that the 'incongruent' sentences produced less negative values than the 'congruent' sentences, an outcome that is the reverse of what would be expected from a typical congruency effect. Therefore, this pattern should not undermine the interpretation of our findings in the ROBOT condition, which shows a distinct N400 effect at all electrodes: sentences that ended with a word incongruent with the robot's capacity to experience emotions triggered a more pronounced negative deflection than those that were congruent, i.e., the emotionally neutral counterparts. The Mixed Linear Model (MLM) used the independent variables (fixed effects) CONGRUENCE (congruent vs. incongruent) and GROUP (robot vs. human), and their interaction. Participants were treated as random effects, and in addition, random variation in the response to congruence was modeled for each participant. The model revealed a significant interaction between CONGRUENCE and GROUP (Coef. = −0.771, SE = 0.309, z = −2.491, p = 0.013), illustrating that the effect of congruence on the N400 amplitude was modulated by the type of agent (human or robot) that pronounced the sentences (see Figure 16). The group variance was 0.999 (SE = 0.223), reflecting the variability in the responses between participants.

3.2.2. Explicit Ratings of Agents’ Abilities

Figure 17 shows the mean scores for the five statements that explored participants’ perceptions of the agents (robot and human), along with the corresponding 95% confidence intervals.
A Mann–Whitney U test did not detect a significant difference between the mean composite scores of the two groups (U = 246.5, p = 0.203). These results suggest that participants’ perceptions of these aspects do not vary significantly according to the group to which they belong.
Concerning the Ho and MacDorman questionnaire [41], a repeated measures ANOVA with INDICATOR (humanness, attractiveness and eeriness) and GROUP (robot vs. human) as the within- and between-subject factors showed a significant effect of GROUP (F = 20.041, p < 0.001, ηp² = 0.294), of INDICATOR (F = 21.083, p < 0.001, ηp² = 0.305), and a significant interaction between the two factors (F = 17.767, p < 0.001, ηp² = 0.270). Mann–Whitney U tests showed a significant difference for the humanness indicator (U = 70.0, p < 0.001), but no significant difference for the eeriness (U = 351.5, p = 0.455) or attractiveness (U = 315.5, p = 0.961) indicators. Table 6 gives the corresponding composite scores.
Hence, unsurprisingly, participants viewed the robot as more artificial than the human. Although participants clearly perceived the robot as more artificial and as discussing things beyond its capabilities (N400), this perception did not translate into a feeling of eeriness or negatively affect attractiveness (i.e., provoke discomfort or diminish aesthetic appeal). Figure 18 presents the boxplots comparing perceptions of the dimensions of Humanness, Eeriness and Attractiveness from the Ho and MacDorman questionnaire [41] in the ROBOT (light blue) and HUMAN (dark blue) groups.
Table 7 summarizes the ratings for participants' beliefs with respect to the agent's ability to express or feel emotions in the HUMAN and ROBOT conditions. On average, scores for the agent's ability to express emotions were slightly higher in the ROBOT than in the HUMAN condition, though this difference was not statistically significant (U = 338, p = 0.617). By contrast, scores for the ability to feel emotions were lower in the ROBOT condition than in the HUMAN condition (U = 172, p < 0.001). As indicated by the median, half of the participants assigned a score of 43 or more out of 100 to the possibility that the robot could feel emotions. In fact, of the 25 participants in the ROBOT condition, only three unequivocally asserted that the robot cannot feel an emotion and gave a score of 0.

3.2.3. The Impact of Participants’ Beliefs in the ROBOT Condition

ERPs

To assess the potential impact of the participants' beliefs in the ROBOT condition on the N400 component (n = 25, Mdn = 43, min = 0, max = 100), the ideal approach would have been to mirror the Pilot Study by contrasting participants with scores equal to zero against those with maximum belief ratings ('non-believer' vs. 'believer' groups). However, this approach was limited by the insufficient number of participants with a score of 0 (n = 3) and by the necessity to exclude three participants whose scores were too close to the median. As a consequence, we used a similar logic but with a wider range of scores to form the two subgroups. We divided the remaining 22 participants into two groups: one group included the 11 participants with the lowest ratings (min = 0, max = 38, M = 18.272, Mdn = 18), while the other included the 11 participants with the highest belief scores in the robot's potential capacity to feel emotions (min = 71, max = 100, M = 88.636, Mdn = 86). Figure 19 presents the averaged N400 amplitudes within the 500–700 ms interval across the 13 electrodes for both congruent and incongruent sentences. No obvious differences in the N400 effect were seen between the two sets of data.
The Mixed Linear Model (MLM) revealed no significant interaction between the two factors (Coef. = 0.058, SE = 0.565, z = 0.102, p = 0.918). The variance attributed to the GROUP was 1.758 (SE = 0.548). Figure 20 plots the interaction diagram, showing very similar effects of CONGRUENCE in the two groups. A repeated measures ANOVA confirmed that the interaction between CONGRUENCE and GROUP was not significant, F(1,20) = 0.010, p = 0.919, ηp² = 0.000523, indicating that the only significant effect was due to GROUP, F(1,20) = 4.344, p = 0.050, ηp² = 0.178.

Explicit Ratings of the Robot's Abilities

Figure 21 contrasts subgroups’ scores for the five statements that explored participants’ perceptions of the robot, along with the corresponding 95% confidence intervals. Except for ‘Talkativeness’ an overall tendency for a less positive perception of the robot is seen in the ‘non-believer’ group.
For the five statements the composite score was 51.454 (SD = 17.746) for ‘non-believers’ and 60.654 (SD = 11.88) for ‘believers’. A Mann–Whitney U test did not reveal any significant difference between the mean composite scores of the two subgroups (U = 46.5, p = 0.375). However, exploratory analysis showed that when ‘Talkativeness’ was set aside, the composite score of the remaining four qualities distinguished between the two groups with a more positive view of the robot by ‘believers’ (U = 28.5, p = 0.038).
Concerning the items of the Ho and MacDorman questionnaire [41], a repeated measures ANOVA with INDICATOR (humanness, attractiveness and eeriness) and GROUP (non-believer vs. believer) as the within- and between-subject factors showed a significant effect of INDICATOR (F(2, 40) = 23.098, p < 0.001, ηp² = 0.536), but no significant effect of GROUP (F(1, 20) = 5.382, p = 0.031, ηp² = 0.212), and no significant interaction effect, F(2, 40) = 2.415, p = 0.102, ηp² = 0.108. Table 8 gives the composite scores for the composite indices.
Figure 22 presents the boxplots comparing perceptions of the dimensions of Humanness, Eeriness and Attractiveness in the ’non-believer’ (light blue) and ’believer’ (dark blue) groups.

3.3. Discussion

Our main study expanded upon our pilot study's findings by examining participants' reactions to sentences about emotions—attributes not directly observable and typically attributed to biological agents—when articulated by a robot versus a human. The results demonstrate that the N400 paradigm is an effective tool for uncovering the cognitive processes triggered when an agent's verbal expressions conflict with inferred, non-observable capacities: in line with our hypothesis, the N400 effect seen in the robotic condition supports the notion that a robot discussing its emotions is perceived as incongruent. Although a robot can be programmed to mimic emotional expressions, discerning these expressions as genuinely experienced by the ASA seems to present a cognitive challenge, at least for a predominantly female sample (ROBOT condition: 20 women, 4 men, 1 nonbinary). In contrast, the absence of a congruity effect in the human (control) condition highlights the alignment with our internal models of human emotional experience. These findings align with previous studies showing, for instance, that a sentence like "Every evening I drink some wine before I go to sleep" elicits a greater N400 effect when uttered by a young child compared to an adult [45,46]. Altogether, these findings emphasize the role of expectations and world knowledge in processing speech (see [20]) and provide insights into how we assess communication from various agents. Critically, while the pilot study demonstrated that participants' beliefs about the robot's potential hidden physical capabilities (e.g., concealed limbs) could influence brain responses, a distinct pattern was observed concerning emotional capabilities. Despite the fact that a large majority of participants (22 out of 25) believed that the robot could potentially experience emotions, a clear N400 effect—indicative of perceived incongruity—was observed when the robot discussed its emotions. The targeted comparison of subgroups with the lowest and highest rating scores showed no differences in the magnitude of the N400 effect. Therefore, even though robots can be programmed to simulate emotional expressions, the ability to perceive these expressions as genuinely felt by the ASAs appears to pose a cognitive challenge. Finally, as in the pilot study, participants' beliefs seemed to impact their perception of the robot, as estimated with the essentialized human personality descriptors (except for 'talkativeness'), with lower scores given by participants who were skeptical about the robot's capacity to feel emotions. Except for the Humanness indicator, the Ho and MacDorman questionnaire [41] did not distinguish between participant groups.

4. General Discussion

The present research offers insights into the cognitive processing of incongruities in human–robot interaction, using the N400 component. Specifically, our findings substantiate the role of the N400 in detecting incongruence and extend its applicability to the domain of social robotics. The data reveal a significant increase in N400 brain wave responses when participants were presented with a robot uttering statements that conflicted with its observable physical or inferred emotional capabilities. In the pilot study, descriptions of physically impossible actions for the robot elicited stronger N400 responses, indicating that participants perceived these statements as incongruent with the robot's physical condition. However, this effect was attenuated by participants' beliefs about the robot's potential hidden extremities, showing that expectations can shape interpretations of an agent's utterances. The main study extended these observations to verbal expressions related to the robot's emotions, revealing N400 responses similar to those observed for physical incongruities. Critically, unlike with physical capabilities, participants' beliefs about the emotional capacity of the robot did not modulate this incongruence effect, underscoring a skepticism towards the robot's emotional abilities that extends beyond participants' explicit acknowledgment.
Our findings reveal a deep cognitive engagement with the congruence of what an agent can do versus what it claims to do, expanding on previous research about the context sensitivity of the N400 effect during language processing [20,29,30,46]. For example, Hagoort et al. [45] showed that given the well-known fact among Dutch people that Dutch trains are yellow, a sentence like “The Dutch trains are white and very crowded” elicits the N400 effect. In our study, the mismatch eliciting this response derives from mental representations that participants have of the speaker. The sensitivity of the N400 component to such world knowledge offers a valuable means for gaining insights into how humans perceive ASAs: Recall that in the pilot study, only about one-third of participants unequivocally asserted that the robot had no hidden limbs. The majority, therefore, entertained the possibility of the robot possessing hidden limbs, contrary to visible evidence. Inspired by the character Eve from the movie Wall-E [31], who can unfold hidden extremities when necessary, we have termed this phenomenon the “Eve effect bias” [44]. The “Eve effect bias” refers to the tendency of humans to ascribe physical capabilities to robots without supportive evidence. This bias, which was shown to mitigate the experienced cognitive dissonance during the processing of the sentences, reveals a clear leniency in humans’ acceptance of information from ASAs, showing a readiness to adjust expectations for yet unseen technological capabilities. By contrast, when the robot refers to its emotions—a realm deeply associated with biological entities—this adjustment does not occur, despite participants’ explicit acknowledgment of the robot’s potential for emotion. The resulting N400 responses imply that genuine emotional experiences are not seen as congruent with our mental models of artificial entities. This demarcation, which resonates with findings that robots are typically perceived as out-group members by humans (e.g., [47]), may point to a form of ‘synthetic otherism’ that highlights the distinctiveness of human nature. Out-group members are frequently attributed fewer uniquely human characteristics [48,49,50]. In line with the discussion in the introduction (e.g., [4,19]), ‘synthetic otherism’ exposes the challenges of anthropomorphizing technology. While the N400-related incongruence may not directly lead to discomfort, it likely signifies a subconscious awareness of the robot’s lack of authenticity, potentially impacting the acceptance of ASAs.
Since the seminal work by Breazeal [6] on emotion and sociable humanoid robots, research on emotion in social robotics has grown considerably. A recent systematic review by Stock-Homburg [51] identified over 1600 articles from the past two decades on the role of emotions in human–robot interaction, revealing four main streams of research: “(1) emotional expressions by robots, (2) the human recognition of artificial robotic emotions, (3) human responses to robotic emotions, and (4) contingency factors”. Regarding the third stream, Stock-Homburg highlighted that all reviewed studies relied on self-ratings, similar to the methods we employed here alongside brain response measures. The discrepancy we observed between explicit ratings and N400 responses underscores the need for cautious interpretation of findings based on self-reports alone, a point also emphasized by Stock-Homburg. In a recent study, Spatola and Wudarczyk [52] used an adapted Implicit Association Test (IAT) to probe automatic associations between robots and emotional states. The IAT measures implicit links between concepts (e.g., robot, human) and evaluations (e.g., positive, negative) or stereotypes (e.g., smart, stupid), typically through response times in simple categorization tasks. Using such reaction-time measures, the researchers demonstrated that the implicit attribution of primary versus secondary emotions to robots was related to a more anthropomorphic warmth perception and to reduced discomfort toward robots. Implicit measures thus extend beyond neurophysiological methods such as EEG to include behavioral experiments: through tasks designed to capture spontaneous reactions, response times, or choice preferences, researchers can gain insight into the implicit attitudes and perceptions that individuals hold towards robots.
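As an illustration of how such reaction-time-based implicit measures work, the minimal sketch below computes a simplified IAT-style effect score from hypothetical response times. The data, block labels, and simplified scoring are illustrative assumptions and do not reproduce the procedure used by Spatola and Wudarczyk [52].

```python
import numpy as np

def iat_d_score(rt_compatible, rt_incompatible):
    """Simplified IAT-style effect size: difference of mean response times
    between incompatible and compatible pairing blocks, divided by the pooled
    standard deviation of all trials (illustrative; not the full published
    IAT scoring algorithm)."""
    rt_compatible = np.asarray(rt_compatible, dtype=float)
    rt_incompatible = np.asarray(rt_incompatible, dtype=float)
    pooled_sd = np.std(np.concatenate([rt_compatible, rt_incompatible]), ddof=1)
    return (rt_incompatible.mean() - rt_compatible.mean()) / pooled_sd

# Hypothetical response times (ms) for one participant:
# the "compatible" block pairs robot words with mechanical attributes,
# the "incompatible" block pairs robot words with emotion words.
compatible = [612, 587, 640, 598, 571, 605]
incompatible = [701, 688, 730, 676, 699, 712]

print(f"D-score: {iat_d_score(compatible, incompatible):.2f}")
# A positive score indicates slower responses when robots are paired with
# emotion words, i.e., a weaker implicit robot-emotion association.
```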
In conclusion, transparent communication about a robot’s capabilities and the intended purpose of its emotional expressions may help manage user expectations and mitigate potential dissonance. Particular attention should be paid to the congruence between a robot’s appearance and its actual capabilities (see also [53]). More generally, the diverse expectations and beliefs of the human social partner need to be understood and taken into account, and future work should explore how these beliefs are formed and how they can be influenced. If ‘synthetic otherism’ proves to be an insurmountable barrier, a strategic pivot in the design of social robots may be warranted. As highlighted by Breazeal [6], emotional expressions serve as signals aimed at influencing others’ behavior (see [54]). Designs might, therefore, benefit from prioritizing contextually relevant cues to enhance interaction rather than attempting to mimic human-like expressions. Considering roles for robots that complement human capabilities, exploiting their unique strengths, could improve acceptance and facilitate human–robot interaction (see [4]). In short, embracing robots’ synthetic nature to introduce novel forms of interaction, possible only because the robot is non-human, could be a valuable path forward. Bridging the gap between human expectations and robot capabilities can inform the design of ASAs that contribute meaningfully to the well-being of their human partners.

5. Limitations

While our study contributes valuable insights into the human perception of emotion in ASAs, it is important to recognize its limitations. First, the generalizability of our results may be constrained by the specific experimental conditions, including the type of robot, the nature of the tasks, and the characteristics of the participant sample. Our work is based on videos of a single robot, ‘Buddy’, and does not test direct interactions, which could influence participants’ reactions. The cute appearance and small size of ‘Buddy’ could affect participants’ perceptions of its emotional capabilities (although this would, if anything, enhance emotional perception). Direct interaction with this robot (or with other types of robots) could generate different responses because of physical presence and real-time interactive dynamics. Moreover, the sample was largely made up of women (37 of the 56 participants in the Pilot Study and 42 of the 50 participants in the Main Study), which might bias the findings because of gender-related differences in empathy and emotional response (e.g., [55]). Participants’ prior experiences with and expectations of robots, which were not controlled in the study, could also have influenced their perceptions and responses. Nomura et al. [56], for instance, showed a correlation between personal experiences and negative attitudes towards robots, with prior interactions leading to more positive perceptions. Furthermore, testing solely in Lille, France, limits the ability to generalize our findings across different cultural contexts. Intercultural studies [57,58,59] have highlighted the impact of cultural context, personal attitudes, and habits on the perception of ASAs. Bartneck et al. [59] identified varying attitudes across Japanese, Chinese, and Dutch cultures and revealed Japanese participants’ concerns about the social impact of robots, challenging the notion of a uniformly high acceptance of robots in Asia. These findings underscore the need for culturally and personally aware robot design to achieve social acceptance. Lastly, our focus on the robot’s verbal emotional expressions, which attribute feelings to the speaker through first-person narration, rather than on non-verbal cues such as facial expressions and gestures, may limit a comprehensive understanding of how humans perceive the authenticity of robotic emotions.

Author Contributions

Conceptualization, R.G., K.O. and T.A.N.; methodology, R.G., K.O. and T.A.N.; software, R.G.; validation, R.G., K.O., T.A.N. and M.C.D.; formal analysis, R.G. and T.A.N.; investigation, R.G.; resources, R.G.; data curation, R.G., K.O. and M.C.D.; writing—original draft preparation, R.G. and T.A.N.; writing—review and editing, R.G., K.O., M.C.D. and T.A.N.; visualization, R.G.; supervision, R.G. and T.A.N.; project administration, R.G. and T.A.N.; funding acquisition, T.A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Métropole Européenne de Lille (MEL), by the I-SITE Université Lille Nord-Europe (ULNE) under grant number R-Talent-20-006-Nazir, and by a PhD scholarship from the Université de Lille.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Université de Lille (protocol code 2022-659-S112, 1 June 2023 and 29 September 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Participants received an information sheet detailing the aims of the research, the methods employed, and their rights, including their right to confidentiality and anonymity. They were then asked to give their informed consent in writing before taking part in the study, in accordance with ethical guidelines and the French law “Loi n°78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés”. The Personal Data and Archives Department of the Université de Lille also approved the collection and use of the data collected, ensuring that our study complied with national regulations on the protection of personal data.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to restrictions arising from the ethical and privacy considerations mandated by the Ethics Committee and The Personal Data and Archives Department of the Université de Lille.

Acknowledgments

The authors declare that GPT-4 was used for correction and linguistic assistance in accordance with MDPI’s publication guidelines for artificial intelligence tools. The authors wish to thank the Fédération de Recherche Sciences et Cultures du Visuel (FR CNRS 2052 SCV) for their material support, which has been crucial in the realization of this project. Special thanks to Melisa Yavuz for her invaluable assistance with electrode setup and data collection, contributions that significantly aided our study.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. List of sentences in the Pilot Study.
Sentence | Word | Congruency
Pour le plaisir, j’aime avoir des hobbies et des centres d’intérêts variés. Récemment, j’ai commencé à apprendre le …chantCongruent
pianoIncongruent
Au Japon, la cérémonie du thé est très importante. Un de mes rêves est d’y …assisterCongruent
goûterIncongruent
Il est très enrichissant d’échanger lorsqu’on n’est pas d’accord avec quelqu’un. Lors d’un débat, il est important pour moi d’appuyer mes arguments avec des …intonationsCongruent
gestesIncongruent
Lors des fêtes de fin d’année, ce qui me fait le plus plaisir c’est d’être avec les gens que j’aime et …parlerCongruent
mangerIncongruent
J’aime me rendre utile. Par exemple, si quelqu’un veut un produit dans un magasin, je peux généralement l’aider à le trouver en lui …indiquantCongruent
attrapantIncongruent
J’adore me perdre sur Youtube. Parfois, je tombe sur des clips de danse que je …regardeCongruent
reproduisIncongruent
Hier, j’ai assisté à un cours de yoga. J’ai beaucoup aimé, c’était très enrichissant de pouvoir apprendre de nouveaux …mantrasCongruent
asanasIncongruent
Hier j’ai vu un film qui m’a fait pleurer. Avant de continuer la journée, j’ai pris le temps de me …calmerCongruent
moucherIncongruent
Lors des vide-greniers, les gens revendent souvent des affaires qu’ils n’utilisent plus à des prix qui varient. Avant d’acheter, je dois souvent …négocierCongruent
fouillerIncongruent
Hier, j’ai assisté à un cours de basket. Une fois que c’était terminé, je me suis …reposéCongruent
étiréIncongruent
La dernière fois, je me suis retrouvée dans une situation très dangereuse. Ma cuisine a pris feu. Les pompiers sont arrivés, c’est moi qui leur ai donné les …directionsCongruent
extincteurIncongruent
On trouve parfois des choses étonnantes sur le sol. Quand je vois un portefeuille par terre, je le …signaleCongruent
ramasseIncongruent
Il y avait un paquet de mouchoirs par terre. Comme il était sur ma trajectoire, je l’ai …évitéCongruent
ramasséIncongruent
Il y a quelques jours, j’ai repensé à une amie qui habite dans un autre pays. J’ai voulu lui donner des nouvelles et je lui ai donc envoyé un message …vocalCongruent
écritIncongruent
Après une longue journée, j’aime bien décompresser et recharger mes batteries en …méditantCongruent
courantIncongruent
J’ai tendance à être triste quand je vois des gens que j’apprécie être malheureux. Si je le peux, je vais voir la personne et je lui fais un …complimentCongruent
câlinIncongruent
Lorsque quelqu’un est perdu et hésite entre deux chemins, je fais comme je peux pour le diriger vers la bonne route en la lui …expliquantCongruent
montrantIncongruent
Certains matins, sans trop savoir pourquoi, je ne suis pas de bonne humeur. Quand je suis grognon, je ne le garde pas pour moi et je l’exprime avec un …discoursCongruent
gesteIncongruent
J’aime bien avoir une routine minutée le matin. Je suis toujours rapide pour …réveillerCongruent
doucherIncongruent
J’ai déjà été dans un camp de vacances. J’ai participé à un défi où je devais lire un texte, puis le …répéterCongruent
mimerIncongruent
Quand je suis en couple, j’aime beaucoup faire plaisir en faisant des …complimentsCongruent
massagesIncongruent
La première fois que j’ai rencontré le Président, je lui ai dit bonjour en lui …souriantCongruent
serrant la mainIncongruent
Pour être en bonne santé, il est important d’être bien …entouréCongruent
hydratéIncongruent
Je suis un grand fan de comédies musicales, notamment West Side Story. Je connais toutes les …parolesCongruent
chorégraphiesIncongruent
J’adore faire rire les enfants de mes amis. Pour y arriver, je peux les …taquinerCongruent
chatouillerIncongruent
Je suis allé voir un ami qui fait du théâtre. J’ai beaucoup aimé sa pièce, je l’ai …félicitéCongruent
applaudiIncongruent
J’adore les animaux. Là où je vis, il y a un chat doux et magnifique. Souvent, je …admireCongruent
étreintIncongruent
Un des grands malheurs de ce monde est les enfants qui sont hospitalisés. Je vais parfois les voir avec des …histoiresCongruent
jouetsIncongruent
J’adore garder la fille de ma voisine, elle est très facile à vivre. La dernière fois, j’ai mis un point d’honneur à la …divertirCongruent
coifferIncongruent
Récemment, j’attendais un ami devant la porte de chez lui. Comme il mettait du temps à sortir, j’ai décidé de l’…appelerCongruent
toquerIncongruent
Hier, un monsieur a jeté son mégot dans la queue. Comme je n’aime pas la pollution, je l’ai …signaléCongruent
jetéIncongruent
Je connais mon tempérament quand je me mets en colère, alors je prends toujours du temps pour éviter les …insultesCongruent
bagarreIncongruent
Récemment, j’ai eu la chance d’aller en Corse. Là-bas, j’ai pu me reposer et …visiterCongruent
nagerIncongruent
J’aimerais beaucoup m’habiller comme les Lillois. Vos vêtements sont variés et jolis. J’ai très envie de porter des …foulardsCongruent
gantsIncongruent
J’espère aller un jour tout en haut de la Tour Eiffel. Pour monter, j’emprunterai l’…ascenseurCongruent
escalierIncongruent
Je regarde beaucoup de films. Mes films préférés sont les films d’action. Moi aussi, j’aimerais sauver le monde grâce à mes capacités …intellectuellesCongruent
physiquesIncongruent
J’aime prendre part à des activités variées et faire des choses différentes selon les jours. Par exemple, j’adore …chanterCongruent
jardinerIncongruent
Parfois le weekend je passe une après-midi au parc. Quand je suis là-bas, une de mes habitudes préférées est d’observer les oiseaux et de les …nommerCongruent
nourrirIncongruent
Je suis très fort en mathématiques. Je peux facilement effectuer un calcul compliqué et donner le résultat en l’…énonçantCongruent
écrivantIncongruent
En décembre, nous avons acheté un sapin en préparation des fêtes de fin d’année. Il était grand, avec ses nombreuses épines de pin. J’ai adoré pouvoir le …contemplerCongruent
sentirIncongruent
L’hiver il fait très froid ici à Lille. Heureusement, nous avons un système pratique qui me permet d’augmenter le chauffage grâce à une commande …vocaleCongruent
manuelleIncongruent
Quand je dois apprendre le contenu d’un texte, je vais m’appliquer à le …retenirCongruent
surlignerIncongruent
Pendant la pandémie, nous avons appris à appliquer des gestes barrière. J’essaie de les respecter et de bien me …distancerCongruent
désinfecterIncongruent
J’aime beaucoup aller à des événements, mais je déteste faire la queue qui m’oblige à …attendreCongruent
piétinerIncongruent
La semaine dernière, j’ai revu quelqu’un que je n’avais pas vu depuis un petit moment. Nous avons discuté et je lui ai fait une …plaisanterieCongruent
biseIncongruent
J’aime beaucoup le sport. Je suis un grand fan de volley particulièrement. La prochaine fois qu’un match aura lieu, j’essaierai de le …visionnerCongruent
jouerIncongruent
Quand je suis arrivé dans la chambre d’hôtel, le lit n’était pas fait. Cela n’était pas professionnel, j’ai dû le faire …remarquerCongruent
moi-mêmeIncongruent
La lampe du salon ne marche plus. Je crois que c’est l’ampoule qui ne fonctionne plus. Je vais en parler à ma propriétaire et la …prévenirCongruent
dévisserIncongruent
Parfois, je passe de longs moments à regarder par la fenêtre. Cela me donne envie de …rêvasserCongruent
gambaderIncongruent
L’année dernière, j’ai été dans un train pour faire le trajet de Lille à Paris. J’étais tellement excité d’arriver. J’ai passé tout le trajet à …bavarderCongruent
trépignerIncongruent
Parfois, je me pose la question de ce que je ferais comme métier si je pouvais choisir tout ce que je voulais. Je crois que je serais …traducteurCongruent
cordonnierIncongruent
J’aime bien faire des actions pour prendre soin de moi. C’est important pour se sentir bien et pouvoir prendre soin des autres. Ce que je préfère, c’est aller chez le …psychologueCongruent
coiffeurIncongruent
Le moyen que je préfère utiliser pour intégrer un concept, c’est de l’…enseignerCongruent
écrireIncongruent
L’accessoire que je préfère porter sont les …chapeauxCongruent
baguesIncongruent
Fin octobre dernier, nous avons fêté Halloween. A un moment, quelqu’un est sorti d’un recoin pour me faire peur. J’ai …criéCongruent
sursautéIncongruent
Quand c’est le weekend, j’aime bien voir des amis et …socialiserCongruent
danserIncongruent
J’adore faire des soirées jeux. Ceux où j’excelle le plus sont les jeux de …devinettesCongruent
adresseIncongruent
Là où j’habite il y a un chiot qui est tout petit et mignon. Il aime beaucoup que je le …sorteCongruent
caresseIncongruent
J’aime beaucoup m’ouvrir à de nouvelles cultures, c’est toujours un plaisir de découvrir différentes …languesCongruent
nourrituresIncongruent
Quand je peux, je me balade dans les boutiques. Quand je trouve des objets qui me plaisent, je les …examineCongruent
toucheIncongruent
Each sentence is followed by two possible endings: The target word leads to either a congruent or incongruent context.
Table A2. List of sentences in the Main Study.
Sentence | Word | Congruency | Emotion
Ce matin quelqu’un m’a parlé de ses problèmes, j’étais …impliquéeCongruentSadness
abattueIncongruent
Hier soir la pluie tombait à verse, j’étais …mouilléeCongruentSadness
soucieuseIncongruent
Je me suis disputé avec quelqu’un, durant cette conversation j’étais …rationnelleCongruentSadness
attristéeIncongruent
Demain, je dois acheter des plantes, mon amie ne pourra pas venir avec moi, je serai …indépendanteCongruentSadness
dépriméeIncongruent
Hier, j’ai fait des erreurs lors de l’exécution d’une tâche, j’étais …impréciseCongruentSadness
malheureuseIncongruent
Quand une personne est confrontée à un problème, je suis …serviableCongruentSadness
affligéeIncongruent
Quand je regarde des films dramatiques, je suis …inspiréeCongruentSadness
moroseIncongruent
Hier, on m’a laissé à la maison car j’étais trop …lenteCongruentSadness
tristeIncongruent
J’ai lu un livre qu’un ancien ami m’avait offert, ça parlait de la philosophie grecque antique, ça m’a rendu …cultivéeCongruentSadness
mélancoliqueIncongruent
Je n’arrive pas à accomplir une tâche qu’on m’a confiée la semaine dernière, je suis …improductiveCongruentSadness
découragéeIncongruent
Hier soir, j’étais au cinéma, après cette longue séance, j’étais …ralentieCongruentHappiness
épanouieIncongruent
Demain je dois présenter un événement, on m’a choisie car je suis …efficaceCongruentHappiness
jovialeIncongruent
Quand je suis entouré de beaucoup de personnes, je suis …dynamiqueCongruentHappiness
euphoriqueIncongruent
Hier on m’a invité à participer aux tâches ménagères, j’étais …opérationnelleCongruentHappiness
ravieIncongruent
Ecouter de la musique me rend …activeCongruentHappiness
joyeuseIncongruent
La voisine nous a invité à une soirée, tout le monde m’a remarqué car j’étais …déguiséeCongruentHappiness
radieuseIncongruent
Ce matin j’ai terminé un examen en première, j’étais …rapideCongruentHappiness
heureuseIncongruent
Demain on m’a proposé de faire une balade, je suis …disponibleCongruentHappiness
enjouéeIncongruent
Hier j’ai travaillé en équipe pour organiser un événement, ça s’est très bien passé, j’étais …coopérativeCongruentHappiness
satisfaiteIncongruent
Samedi, je suis partie à l’anniversaire de ma voisine, quand je lui ai offert son cadeau, elle m’a dit que c’est exactement ce qu’elle voulait, je suis …perspicaceCongruentHappiness
contenteIncongruent
Un ami m’a donné rendez-vous chez lui à 19h pile. Il n’était pas prêt, il a dû voir que j’étais …ponctuelleCongruentAnger
exaspéréeIncongruent
Lorsque je travaille dans un environnement chaotique, je deviens …désorganiséeCongruentAnger
furieuseIncongruent
Chaque fois que le chien du voisin vient jouer dans ma cour, je suis …observatriceCongruentAnger
mécontenteIncongruent
Hier après-midi, j’ai essayé de résoudre un problème de mathématiques mais je n’ai pas réussi, j’étais …incompétenteCongruentAnger
énervéeIncongruent
Mon voisin n’a pas terminé le projet sur lequel on travaille car il avait d’autres choses à faire, donc je me suis …adaptéeCongruentAnger
fâchéeIncongruent
J’ai acheté un produit inefficace, j’ai demandé un remboursement, je suis …économeCongruentAnger
excédéeIncongruent
Lundi j’ai visité Paris, j’ai demandé la route à un passant mais il ne m’a pas répondu, j’étais …perdueCongruentAnger
contrariéeIncongruent
J’ai demandé à des passants comment utiliser le distributeur, ils ont refusé de m’aider donc j’étais …autonomeCongruentAnger
irritéeIncongruent
Un homme m’a doublé à la caisse, j’étais …passiveCongruentAnger
agacéeIncongruent
Jeudi, j’ai passé un examen, l’enseignant nous avait dit que c’était sur les statistiques bayésiennes mais ce n’était pas le cas, j’ai trouvé ça …compliquéCongruentAnger
frustrantIncongruent
Quand je dois aller faire les courses, j’ai de l’…énergieCongruentDisgust
aversionIncongruent
Ce matin, j’avais rendez-vous, sur le trajet quelqu’un a vomi, ça m’a …retardéCongruentDisgust
répugnéIncongruent
Des adolescents regardaient des vidéos d’araignées sur leur téléphone, j’avais envie de …observerCongruentDisgust
vomirIncongruent
Mardi soir, je suis parti au restaurant avec beaucoup de personnes, j’étais …sociableCongruentDisgust
dégoûtéeIncongruent
Samedi j’ai pris les transports en commun mais j’étais très …désorientéeCongruentDisgust
nauséeuseIncongruent
Ce midi, j’ai marché dans une flaque d’eau sale, je suis …maladroiteCongruentDisgust
écoeuréeIncongruent
Pour son mariage, ma voisine a acheté une robe verte, j’ai trouvé ça …inadéquatCongruentDisgust
immondeIncongruent
J’ai croisé un étudiant qui était en train de vomir pendant que mes amis m’appelaient, je n’ai donc pas fait attention à eux tellement j’étais …distraiteCongruentDisgust
révulséeIncongruent
J’étais au restaurant, les personnes à côté de moi mangeaient des escargots, j’ai trouvé ça …particulierCongruentDisgust
dégueulasseIncongruent
Mon voisin a acheté des chaussures, je les trouve …petitesCongruentDisgust
infâmesIncongruent
Ce matin le téléphone d’un ami allait tomber dans l’eau, j’étais …réactiveCongruentFear
affoléeIncongruent
L’ascenseur était en panne, on m’a demandé d’intervenir mais j’étais …inefficaceCongruentFear
anxieuseIncongruent
On m’a dit que la voisine était malade, je vais la contacter pour lui dire que je suis …joignableCongruentFear
inquièteIncongruent
On vient de m’inviter à explorer une maison abandonnée, je suis …partanteCongruentFear
apeuréeIncongruent
Hier, j’ai assisté à une agression, en rentrant, j’étais …vigilanteCongruentFear
terroriséeIncongruent
Durant les vacances, j’étais sur le bord d’une falaise, les vagues s’écrasaient contre la paroi rocheuse, j’ai trouvé ça très …beauCongruentFear
effrayantIncongruent
Samedi nous avons pris l’avion, durant tout le vol, j’étais …silencieuseCongruentFear
paniquéeIncongruent
La semaine dernière, j’étais dans une pièce sombre et je voyais des ombres bouger, j’étais …prudenteCongruentFear
craintiveIncongruent
J’ai regardé un documentaire sur les catastrophes naturelles, je suis devenue plus …instruiteCongruentFear
angoisséeIncongruent
Demain je vais aider des inconnus à organiser une soirée, je suis …profitableCongruentFear
terrifiéeIncongruent
Lors de situations complexes et changeantes, je suis …flexibleCongruentSurprise
étonnéeIncongruent
Samedi j’ai perdu mes affaires, je suis …désordonnéeCongruentSurprise
scandaliséeIncongruent
Jeudi, je suis montée sur scène et je me suis rendu compte que j’avais oublié mon discours donc j’étais …concentréeCongruentSurprise
stupéfaiteIncongruent
J’ai assisté à une conférence sur l’histoire de la physique, un groupe faisait beaucoup de bruit dans le fond, j’étais …déconcentréeCongruentSurprise
sidéréeIncongruent
Vendredi on m’a annoncé le décès de ma voisine, je n’ai pas pu aller à l’enterrement car j’étais trop …occupéeCongruentSurprise
effaréeIncongruent
J’étais en voiture avec mon voisin, à un moment il est allé très vite, ça m’a …secouéCongruentSurprise
épatéIncongruent
Quand j’ai entendu le musicien jouer du piano, j’étais …attentiveCongruentSurprise
blufféeIncongruent
Ma voisine m’a poussé dans la piscine sans me prévenir, ça m’a …trempéCongruentSurprise
traumatiséIncongruent
Quand je dois faire face à des changements inattendues, je suis …robusteCongruentSurprise
abasourdieIncongruent
J’ai gagné un tournoi d’échec, je suis …intelligenteCongruentSurprise
surpriseIncongruent
Each sentence is followed by two possible endings: The target word leads to either a congruent or incongruent context.

References

  1. Fitrianie, S.; Bruijnes, M.; Richards, D.; Abdulrahman, A.; Brinkman, W.P. What are We Measuring Anyway?—A Literature Survey of Questionnaires Used in Studies Reported in the Intelligent Virtual Agent Conferences. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Paris, France, 2–5 July 2019; IVA ’19. pp. 159–161. [Google Scholar] [CrossRef]
  2. Tomasello, M. Why We Cooperate; A Boston Review Book; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  3. Johnson, Z.V.; Young, L.J. Neurobiological mechanisms of social attachment and pair bonding. Curr. Opin. Behav. Sci. 2015, 3, 38–44. [Google Scholar] [CrossRef] [PubMed]
  4. Nazir, T.A.; Lebrun, B.; Li, B. Improving the acceptability of social robots: Make them look different from humans. PLoS ONE 2023, 18, e0287507. [Google Scholar] [CrossRef] [PubMed]
  5. Walum, H.; Young, L.J. The neural mechanisms and circuitry of the pair bond. Nat. Rev. Neurosci. 2018, 19, 643–654. [Google Scholar] [CrossRef] [PubMed]
  6. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
  7. Duffy, B.R. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  8. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef]
  9. Baumeister, R.F.; Leary, M.R. The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychol. Bull. 1995, 117, 497–529. [Google Scholar] [CrossRef] [PubMed]
  10. Decety, J.; Jackson, P.L. The Functional Architecture of Human Empathy. Behav. Cogn. Neurosci. Rev. 2004, 3, 71–100. [Google Scholar] [CrossRef] [PubMed]
  11. Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef]
  12. Wykowska, A. Social Robots to Test Flexibility of Human Social Cognition. Int. J. Soc. Robot. 2020, 12, 1203–1211. [Google Scholar] [CrossRef]
  13. Li, J.J.; Ju, W.; Reeves, B. Touching a Mechanical Body: Tactile Contact with Body Parts of a Humanoid Robot Is Physiologically Arousing. J. Hum. Robot Interact. 2017, 6, 118. [Google Scholar] [CrossRef]
  14. Hegel, F.; Krach, S.; Kircher, T.; Wrede, B.; Sagerer, G. Understanding social robots: A user study on anthropomorphism. In Proceedings of the RO-MAN 2008—The 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 574–579. [Google Scholar] [CrossRef]
  15. Kahn, P.H.; Kanda, T.; Ishiguro, H.; Gill, B.T.; Ruckert, J.H.; Shen, S.; Gary, H.E.; Reichert, A.L.; Freier, N.G.; Severson, R.L. Do people hold a humanoid robot morally accountable for the harm it causes? In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; pp. 33–40. [Google Scholar] [CrossRef]
  16. Chaminade, T.; Zecca, M.; Blakemore, S.J.; Takanishi, A.; Frith, C.D.; Micera, S.; Dario, P.; Rizzolatti, G.; Gallese, V.; Umiltà, M.A. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures. PLoS ONE 2010, 5, e11577. [Google Scholar] [CrossRef] [PubMed]
  17. Urgen, B.A.; Plank, M.; Ishiguro, H.; Poizner, H.; Saygin, A.P. EEG theta and Mu oscillations during perception of human and robot actions. Front. Neurorobotics 2013, 7, 19. [Google Scholar] [CrossRef] [PubMed]
  18. Fraune, M.R. Our Robots, Our Team: Robot Anthropomorphism Moderates Group Effects in Human–Robot Teams. Front. Psychol. 2020, 11, 1275. [Google Scholar] [CrossRef] [PubMed]
  19. Bartneck, C.; Keijsers, M. The morality of abusing a robot. Paladyn J. Behav. Robot. 2020, 11, 271–283. [Google Scholar] [CrossRef]
  20. Hagoort, P.; van Berkum, J. Beyond the sentence given. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 2007, 362, 801–811. [Google Scholar] [CrossRef] [PubMed]
  21. Kutas, M.; Hillyard, S.A. Reading Senseless Sentences: Brain Potentials Reflect Semantic Incongruity. Science 1980, 207, 203–205. [Google Scholar] [CrossRef] [PubMed]
  22. Osterhout, L.; Holcomb, P.J. Event-related brain potentials elicited by syntactic anomaly. J. Mem. Lang. 1992, 31, 785–806. [Google Scholar] [CrossRef]
  23. Brown, C.; Hagoort, P. The Processing Nature of the N400: Evidence from Masked Priming. J. Cogn. Neurosci. 1993, 5, 34–44. [Google Scholar] [CrossRef]
  24. Luck, S.J. An Introduction to the Event-Related Potential Technique; Cognitive Neuroscience; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  25. Reid, V.M.; Striano, T. N400 involvement in the processing of action sequences. Neurosci. Lett. 2008, 433, 93–97. [Google Scholar] [CrossRef]
  26. van Elk, M.; van Schie, H.; Bekkering, H. Semantics in action: An electrophysiological study on the use of semantic knowledge for action. J. Physiol. Paris 2008, 102, 95–100. [Google Scholar] [CrossRef]
  27. Friedrich, M.; Friederici, A.D. N400-like Semantic Incongruity Effect in 19-Month-Olds: Processing Known Words in Picture Contexts. J. Cogn. Neurosci. 2004, 16, 1465–1477. [Google Scholar] [CrossRef] [PubMed]
  28. Hamm, J.P.; Johnson, B.W.; Kirk, I.J. Comparison of the N300 and N400 ERPs to picture stimuli in congruent and incongruent contexts. Clin. Neurophysiol. 2002, 113, 1339–1350. [Google Scholar] [CrossRef]
  29. van Berkum, J.J.A.; Hagoort, P.; Brown, C.M. Semantic Integration in Sentences and Discourse: Evidence from the N400. J. Cogn. Neurosci. 1999, 11, 657–671. [Google Scholar] [CrossRef]
  30. van Berkum, J.J.; Zwitserlood, P.; Hagoort, P.; Brown, C.M. When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cogn. Brain Res. 2003, 17, 701–718. [Google Scholar] [CrossRef] [PubMed]
  31. Stanton, A. WALL·E, 2008. IMDb ID: Tt0910970 Event-Location: United States, Japan. Available online: https://playitagain.info/site/wall·e/ (accessed on 10 March 2024).
  32. Bay, M. Transformers, 2007. IMDb ID: Tt0418279 Event-Location: United States. Available online: https://www.pinterest.com/pin/the-all-sparksector-seven-i-am-megatron-scene-transformers2007-movie-clip-bluray-hd–658792251752888543/ (accessed on 10 March 2024).
  33. Oldfield, R. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 1971, 9, 97–113. [Google Scholar] [CrossRef]
  34. Palmer, J.A.; Makeig, S.; Kreutz-Delgado, K.; Rao, B.D. Newton method for the ICA mixture model. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 30 March–4 April 2008; pp. 1805–1808. [Google Scholar] [CrossRef]
  35. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  36. Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Goj, R.; Jas, M.; Brooks, T.; Parkkonen, L.; et al. MEG and EEG data analysis with MNE-Python. Front. Neurosci. 2013, 7, 267. [Google Scholar] [CrossRef] [PubMed]
  37. New, B.; Pallier, C.; Ferrand, L.; Matos, R. Une base de données lexicales du français contemporain sur internet: LEXIQUE™. A lexical database for contemporary french: LEXIQUE™. L’Année Psychol. 2001, 101, 447–462. [Google Scholar] [CrossRef]
  38. Desroches, A.S.; Newman, R.L.; Joanisse, M.F. Investigating the Time Course of Spoken Word Recognition: Electrophysiological Evidence for the Influences of Phonological Similarity. J. Cogn. Neurosci. 2009, 21, 1893–1906. [Google Scholar] [CrossRef]
  39. Marslen-Wilson, W.D.; Welsh, A. Processing interactions and lexical access during word recognition in continuous speech. Cogn. Psychol. 1978, 10, 29–63. [Google Scholar] [CrossRef]
  40. Peirce, J.; Hirst, R.; MacAskill, M. Building Experiments in PsychoPy, 2nd ed.; SAGE: Los Angeles, CA, USA; London, UK; New Delhi, India; Singapore; Washington, DC, USA; Melbourne, Australia, 2022. [Google Scholar]
  41. Ho, C.C.; MacDorman, K.F. Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Comput. Hum. Behav. 2010, 26, 1508–1518. [Google Scholar] [CrossRef]
  42. Haslam, N.; Bastian, B.; Bissett, M. Essentialist Beliefs about Personality and Their Implications. Personal. Soc. Psychol. Bull. 2004, 30, 1661–1673. [Google Scholar] [CrossRef] [PubMed]
  43. Gelman, S.A. The Essential Child: Origins of Essentialism in Everyday Thought; Oxford University Press: Oxford, UK, 2003. [Google Scholar] [CrossRef]
  44. Gigandet, R.; Dutoit, X.; Li, B.; Diana, M.C.; Nazir, T.A. The “Eve effect bias”: Epistemic Vigilance and Human Belief in Concealed Capacities of Social Robots. In Proceedings of the 2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), Berlin, Germany, 5–7 June 2023; pp. 15–20. [Google Scholar] [CrossRef]
  45. Hagoort, P.; Hald, L.; Bastiaansen, M.; Petersson, K.M. Integration of Word Meaning and World Knowledge in Language Comprehension. Science 2004, 304, 438–441. [Google Scholar] [CrossRef] [PubMed]
  46. van Berkum, J.J.A.; van Den Brink, D.; Tesink, C.M.J.Y.; Kos, M.; Hagoort, P. The Neural Integration of Speaker and Message. J. Cogn. Neurosci. 2008, 20, 580–591. [Google Scholar] [CrossRef] [PubMed]
  47. Spatola, N.; Wudarczyk, O.A. Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism. Comput. Hum. Behav. 2021, 124, 106934. [Google Scholar] [CrossRef]
  48. Leyens, J.P.; Paladino, P.M.; Rodriguez-Torres, R.; Vaes, J.; Demoulin, S.; Rodriguez-Perez, A.; Gaunt, R. The Emotional Side of Prejudice: The Attribution of Secondary Emotions to Ingroups and Outgroups. Personal. Soc. Psychol. Rev. 2000, 4, 186–197. [Google Scholar] [CrossRef]
  49. Leyens, J.P.; Rodriguez-Perez, A.; Rodriguez-Torres, R.; Gaunt, R.; Paladino, M.P.; Vaes, J.; Demoulin, S. Psychological essentialism and the differential attribution of uniquely human emotions to ingroups and outgroups. Eur. J. Soc. Psychol. 2001, 31, 395–411. [Google Scholar] [CrossRef]
  50. Viki, G.T.; Winchester, L.; Titshall, L.; Chisango, T.; Pina, A.; Russell, R. Beyond Secondary Emotions: The Infrahumanization of Outgroups Using Human–Related and Animal–Related Words. Soc. Cogn. 2006, 24, 753–775. [Google Scholar] [CrossRef]
  51. Stock-Homburg, R. Survey of Emotions in Human–Robot Interactions: Perspectives from Robotic Psychology on 20 Years of Research. Int. J. Soc. Robot. 2022, 14, 389–411. [Google Scholar] [CrossRef]
  52. Spatola, N.; Wudarczyk, O.A. Implicit Attitudes Towards Robots Predict Explicit Attitudes, Semantic Distance Between Robots and Humans, Anthropomorphism, and Prosocial Behavior: From Attitudes to Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 1149–1159. [Google Scholar] [CrossRef]
  53. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  54. Levenson, R.W. Human emotion: A functional view. In The Nature of Emotion: Fundamental Questions; Series in Affective, Science; Ekman, P., Davidson, R.J., Eds.; Oxford University Press: New York, NY, USA, 1994; pp. 123–126. [Google Scholar]
  55. Christov-Moore, L.; Simpson, E.A.; Coudé, G.; Grigaityte, K.; Iacoboni, M.; Ferrari, P.F. Empathy: Gender effects in brain and behavior. Neurosci. Biobehav. Rev. 2014, 46, 604–627. [Google Scholar] [CrossRef] [PubMed]
  56. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 437–454. [Google Scholar] [CrossRef]
  57. Nomura, T.; Kanda, T.; Kidokoro, H.; Suehiro, Y.; Yamada, S. Why do children abuse robots? Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2016, 17, 347–369. [Google Scholar] [CrossRef]
  58. Nomura, T.; Syrdal, D.S.; Dautenhahn, K. Differences on social acceptance of humanoid robots between Japan and the UK. In Proceedings of the 4th Int Symposium on New Frontiers in Human-Robot Interaction, Canterbury, UK, 21–22 April 2015. [Google Scholar]
  59. Bartneck, C.; Nomura, T.; Kanda, T.; Tomohiro, S.; Kennsuke, K. A cross-cultural study on attitudes towards robots. In Proceedings of the HCI International 2005, Las Vegas, NV, USA, 22–27 July 2005; Salvendy, G., Ed.; [Google Scholar] [CrossRef]
Figure 1. Screenshot for each condition of the robot body presentation. Panel (a): BODY condition. Panel (b): HEAD condition.
Figure 2. Electrodes of interest and experimental room setup. Panel (a): Electrodes of interest. Panel (b): EEG experimental room setup.
Figure 3. Experimental trial sequence.
Figure 4. Grand average ERP waveforms from Cz and Pz electrode sites, for utterances that were congruent (solid gray lines) or incongruent (dashed black lines) with respect to the physical appearance of the robot. Panel (a): Cz electrode for the control (left) and experimental (right) conditions. Panel (b): Pz electrode.
Figure 5. Mean amplitudes across the 13 representative electrodes during the 500–700 ms window post-stimulus onset. Error bars correspond to 95% confidence intervals. (Left) panel: results for the HEAD group. (Right) panel: results for the BODY group.
Figure 6. Interaction diagram showing the effect of Condition (congruent vs. incongruent) and Group (BODY vs. HEAD) on the N400 amplitude.
Figure 7. Average scores and corresponding 95% confidence intervals for the five statements that explored perceptions of the robot.
Figure 8. Boxplots of the participants’ ratings of the robot Humanness, Eeriness, and Attractiveness from the Ho and MacDorman questionnaire [41] together with corresponding 95% confidence intervals.
Figure 9. Mean amplitudes across the 13 representative electrodes during the 500–700 ms window post-stimulus onset. Error bars correspond to 95% confidence intervals. (Left) panel: results for “believers”. (Right) panel: results for “non-believers”.
Figure 10. Interaction diagram showing the effect of Condition (congruent vs. incongruent) and subgroup (believers vs. non-believers) on the N400 amplitude. Each point represents the average amplitude for a given condition and subgroup.
Figure 11. Average scores and corresponding 95% confidence intervals for the five statements that explored perceptions of the robot in the “believers” and “non-believers” subgroups.
Figure 12. Boxplots of the participant subgroups’ perceptions of the robot’s Humanness, Eeriness, and Attractiveness, based on the items of the Ho and MacDorman questionnaire [41], with corresponding 95% confidence intervals.
Figure 13. Screenshot for each condition of the agent presentation. Panel (a): ROBOT condition. Panel (b): HUMAN condition.
Figure 14. Grand average ERP waveforms from Cz and Pz electrode sites, for utterances that were congruent (solid gray lines) or incongruent (dashed black lines) with respect to the agent condition. Panel (a): Cz electrode for the control (left) and experimental (right) conditions. Panel (b): Pz electrode.
Figure 15. Mean amplitudes across the 13 representative electrodes during the 500–700 ms window post-stimulus onset. Error bars correspond to 95% confidence intervals. (Left) panel: Results for HUMAN group. (Right) panel: Results for ROBOT group.
Figure 16. Interaction diagram showing the significant effect of ‘Condition’ and ‘Group’ on amplitude. Each point represents the average amplitude for a given condition and group.
Figure 17. Average scores and corresponding 95% confidence intervals for the five statements that explored perceptions of the agent’s abilities.
Figure 18. Boxplots of the participants’ perceptions of the agent Humanness, Eeriness, and Attractiveness from the items of Ho and MacDorman questionnaire [41], with corresponding 95% confidence intervals.
Figure 19. Mean amplitudes across the 13 representative electrodes during the 500–700 ms window post-stimulus onset. Error bars correspond to 95% confidence intervals. (Left) panel: results for participants who strongly believed that the robot might possess emotions. (Right) panel: results for participants who were skeptical about the robot’s capacity to feel emotions.
Figure 20. Interaction diagram showing the significant effect of ‘Condition’ and ‘Subgroup’ on amplitude. Each point represents the average amplitude for a given condition and subgroup.
Figure 21. Average scores and corresponding 95% confidence intervals for the five statements that explored perceptions of the robot in the subgroups.
Figure 22. Boxplots of the participant subgroups’ perceptions of the robot’s Humanness, Eeriness, and Attractiveness, based on the items of the Ho and MacDorman questionnaire [41], with corresponding 95% confidence intervals.
Table 1. Characteristics of Target Words in the Pilot Study: Linguistic and Phonological Measures.
Measure | Incongruent | Congruent | Mann–Whitney
Phonological Uniqueness Point (puphon) | 4.967 (SD = 1.437) | 5.683 (SD = 1.556) | U = 1273.5, p = 0.004
Lemma Frequency in Films (freqlemfilms2) | 70.389 (SD = 131.578) | 161.741 (SD = 398.826) | U = 1505.0, p = 0.122
Word Frequency in Films (freqfilms2) | 20.110 (SD = 42.478) | 41.921 (SD = 135.192) | U = 1585.5, p = 0.261
Lemma Frequency in Books (freqlemlivres) | 84.429 (SD = 135.024) | 121.397 (SD = 210.050) | U = 1531.0, p = 0.158
Word Frequency in Books (freqlivres) | 23.140 (SD = 38.771) | 28.730 (SD = 58.149) | U = 1602.0, p = 0.299
Number of Orthographic Neighbors (voisorth) | 3.650 (SD = 3.512) | 2.533 (SD = 2.541) | U = 2084.0, p = 0.132
Number of Phonological Neighbors (voisphon) | 8.000 (SD = 7.415) | 5.217 (SD = 4.865) | U = 2127.5, p = 0.084
Number of Syllables (nbsyll) | 2.250 (SD = 0.751) | 2.650 (SD = 0.732) | U = 1280.0, p = 0.003
Number of Letters (nblettres) | 7.050 (SD = 1.881) | 7.883 (SD = 1.914) | U = 1323.5, p = 0.011
Number of Phonemes (nbphons) | 5.167 (SD = 1.607) | 6.000 (SD = 1.636) | U = 1234.0, p = 0.002
Cloze probability | 50.000 (SD = 11.058) | 50.000 (SD = 11.058) | U = 1788.0, p = 0.951
Word types: Nouns 3 (5%); Verbs 18 (30%); Adjectives 39 (64%)
SD indicates standard deviation. U and p values are from the Mann–Whitney test for comparing incongruent and congruent groups. Frequency of use of words within the French language: the most commonly used words were chosen to facilitate recognition and understanding by the participants. The frequency of the lemma in films and books: words whose canonical form (lemma) appears frequently in films and books were preferred, as participants are likely to be more familiar with these words. Number of syllables: words with fewer syllables were preferred, as they are generally easier and quicker to understand. The number of orthographic and phonological neighbors: words with fewer neighbors (i.e., words that are similar in spelling or pronunciation) were preferred.
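For illustration, a comparison of the kind reported in Tables 1 and 5 could be carried out as in the following minimal sketch, which contrasts one lexical property of the incongruent and congruent target words with a Mann–Whitney U test. The values are hypothetical and SciPy is assumed; the software actually used for these comparisons is not specified in the text.

```python
# Minimal sketch (hypothetical values): Mann-Whitney U comparison of a lexical
# property between incongruent and congruent target words, analogous to the
# tests reported in Tables 1 and 5.
from scipy.stats import mannwhitneyu

# Number of phonemes per target word (illustrative values only)
incongruent = [4, 5, 6, 5, 7, 4, 6, 5, 5, 6]
congruent = [6, 7, 5, 6, 8, 6, 7, 5, 6, 7]

u_stat, p_value = mannwhitneyu(incongruent, congruent, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```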
Table 2. Composite Scores for Humanness, Attractiveness and Eeriness Indices in the Pilot Study.
Group | Humanness | Eeriness | Attractiveness
Head | M = 28.812, SD = 14.979 | M = 41.958, SD = 11.186 | M = 65.467, SD = 18.875
Body | M = 32.250, SD = 14.385 | M = 42.484, SD = 13.705 | M = 76.113, SD = 13.754
Table 3. Scores about perceived hidden limbs.
Group | Arms | Legs | Arms & Legs Mean
Head | M = 31.81, SD = 35.78, Mdn = 18.00, min = 0, max = 100 | M = 20.81, SD = 27.27, Mdn = 7.00, min = 0, max = 80 | M = 26.31, SD = 28.93, Mdn = 19.50, min = 0, max = 86
Body | M = 55.00, SD = 37.63, Mdn = 63.50, min = 0, max = 100 | M = 41.08, SD = 34.60, Mdn = 42.50, min = 0, max = 100 | M = 48.04, SD = 33.27, Mdn = 50.00, min = 0, max = 100
‘Group’ means the part (whole body vs. head only) participants were able to see.
Table 4. Composite Scores for Humanness, Attractiveness and Eeriness Indices in the Pilot Study subgroup beliefs.
Group | Humanness | Eeriness | Attractiveness
Non-believers | M = 29.653, SD = 14.250 | M = 42.927, SD = 9.132 | M = 71.983, SD = 11.895
Believers | M = 32.708, SD = 14.903 | M = 43.656, SD = 19.680 | M = 74.950, SD = 15.262
Table 5. Characteristics of Target Words in the Main Study: Linguistic and Phonological Measures.
Measure | Incongruent | Congruent | Mann–Whitney
Phonological Uniqueness Point (puphon) | 5.983 (SD = 1.384) | 6.767 (SD = 1.760) | U = 1296.0, p = 0.007
Lemma Frequency in Films (freqlemfilms2) | 15.853 (SD = 41.232) | 40.021 (SD = 164.579) | U = 1526.5, p = 0.151
Word Frequency in Films (freqfilms2) | 6.109 (SD = 17.514) | 10.004 (SD = 37.354) | U = 1459.0, p = 0.073
Lemma Frequency in Books (freqlemlivres) | 18.304 (SD = 30.295) | 52.965 (SD = 212.000) | U = 1948.0, p = 0.438
Word Frequency in Books (freqlivres) | 6.205 (SD = 13.655) | 12.887 (SD = 39.819) | U = 1789.5, p = 0.958
Number of Orthographic Neighbors (voisorth) | 1.483 (SD = 1.444) | 1.400 (SD = 1.976) | U = 2018.5, p = 0.235
Number of Phonological Neighbors (voisphon) | 3.300 (SD = 3.077) | 2.667 (SD = 4.620) | U = 2293.5, p = 0.008
Number of Syllables (nbsyll) | 2.750 (SD = 0.680) | 2.917 (SD = 0.907) | U = 1637.5, p = 0.356
Number of Letters (nblettres) | 8.117 (SD = 1.574) | 8.867 (SD = 2.054) | U = 1407.5, p = 0.036
Number of Phonemes (nbphons) | 6.117 (SD = 1.354) | 7.050 (SD = 1.872) | U = 1219.5, p = 0.001
Cloze probability | 49 (SD = 6.371) | 51 (SD = 6.575) | U = 1425.5, p = 0.047
Word types: Nouns 1 (2%); Verbs 13 (22%); Adjectives 46 (76%)
Table 6. Composite Scores for Humanness, Attractiveness and Eeriness Indices in the Main Study.
Group | Humanness | Eeriness | Attractiveness
Human | M = 66.807, SD = 22.330 | M = 38.475, SD = 16.401 | M = 62.328, SD = 13.321
Robot | M = 32.153, SD = 18.798 | M = 41.635, SD = 14.219 | M = 64.384, SD = 12.765
Table 7. Scores about perceived emotional capacities.
Group | Express Emotions | Feel Emotions
Human | M = 75.00, SD = 30.05, Mdn = 87.00, min = 0, max = 100 | M = 79.64, SD = 28.51, Mdn = 92.00, min = 0, max = 100
Robot | M = 80.88, SD = 22.87, Mdn = 92.00, min = 20, max = 100 | M = 52.68, SD = 36.07, Mdn = 43.00, min = 0, max = 100
‘Group’ means the agent (human vs. robot) participants were able to see.
Table 8. Composite Scores for Humanness, Attractiveness and Eeriness Indices in the Main Study.
Group | Humanness | Eeriness | Attractiveness
Non-Believers | M = 22.985, SD = 15.421 | M = 38.295, SD = 12.947 | M = 65.618, SD = 14.101
Believers | M = 38.894, SD = 20.292 | M = 48.591, SD = 13.721 | M = 61.073, SD = 11.529
‘Group’ means the belief (participants who strongly believed that the robot might possess emotions vs. those who were skeptical about the robot’s capacity to feel emotions).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
