“Ick bin een Berlina”: dialect proficiency impacts a robot’s trustworthiness and competence evaluation

Katharina Kühne 1 *, Erika Herbold 1, Oliver Bendel 2, Yuefang Zhou 1, Martin H. Fischer 1

1 Division of Cognitive Sciences, University of Potsdam, Potsdam, Germany
2 School of Business FHNW, Brugg-Windisch, Brugg, Switzerland

Original Research, Frontiers in Robotics and AI (Front. Robot. AI, ISSN 2296-9144, Frontiers Media S.A.), section Robotics and AI. Article 1241519, DOI 10.3389/frobt.2023.1241519

Edited by: Karolina Zawieska, Aarhus University, Denmark

Reviewed by: Francesca Fracasso, Consiglio Nazionale delle Ricerche (ISTC-CNR), Italy

Bing Li, UMR9193 Laboratoires Sciences Cognitives et Sciences Affectives (SCALab), France

*Correspondence: Katharina Kühne, kkuehne@uni-potsdam.de
Received: 16 June 2023; Accepted: 27 November 2023; Published: 29 January 2024. Front. Robot. AI 10:1241519 (2023).

Copyright © 2024 Kühne, Herbold, Bendel, Zhou and Fischer.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.

Methods: Our study examined the impact of the Berlin dialect on perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (M age = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either in the Berlin dialect or standard German and assessed its trustworthiness and competence.

Results: We found a positive relationship between participants’ self-reported Berlin dialect proficiency and their trust in the dialect-speaking robot. Only when demographic factors were controlled for was there a positive association between participants’ dialect proficiency and dialect performance and their assessment of the standard German-speaking robot’s competence. Participants’ age, gender, length of residency in Berlin, and the device used to respond also influenced assessments. Finally, the robot’s competence positively predicted its trustworthiness.

Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.

Keywords: competence, dialect, human-robot interaction, robot voice, social robot, trust. Section at acceptance: Human-Robot Interaction.


1 Introduction

1.1 Factors influencing robot’s acceptance

Social robots are becoming more common in various social aspects of human life, such as providing interpersonal care, tutoring, and companionship (Belpaeme et al., 2018; Bendel, 2021; Breazeal, 2017; Broadbent, 2017; Zhou and Fischer, 2019; for review, see e.g., Cifuentes et al., 2020; Woo et al., 2021; Henschel et al., 2021). Unlike most manufacturing or surgical robots, a social robot is designed to have a physical body and interact with humans in a way that aligns with human behavioral expectations (Bartneck and Forlizzi, 2004). Specifically, a humanoid robot is a type of social robot with a body shape resembling a human, including a head, two arms, and two legs (Broadbent, 2017). According to Bendel (2021), social robots are sensorimotor machines created to interact with humans or animals. They can be identified through five key aspects. These are non-verbal interaction with living beings, verbal communication with living beings, representation of (aspects of or features of) living beings (e.g., they have an animaloid or a humanoid appearance or natural language abilities), proximity to living beings, and their utility or benefit for living beings. The assumption is that an entity is a social robot if four of these five dimensions are met. It can be hypothesized that the ability to speak and the voice used are likely to be among the central features of social robots. The present study focused on the role of speech to better understand social interactions with robots.

      Which factors affect whether a person accepts a robot as a social interaction partner? Some of these factors include human-related aspects such as previous exposure to robots, the age and gender of the person interacting with robots (Broadbent et al., 2009; Kuo et al., 2009; Nomura, 2017; but also see Bishop et al., 2019; for a review, see Naneva et al., 2020). While it is generally observed that increased exposure to social robots corresponds to more favorable attitudes toward them, the evidence regarding age and gender as factors influencing acceptance is inconclusive. Previous studies suggested that older individuals and females tend to have less positive attitudes toward robots (Kuo et al., 2009; May et al., 2017). However, a systematic review (Naneva et al., 2020) contradicted this conclusion. According to this analysis, age and gender do not appear to have a significant impact on acceptance of social robots. Additionally, personality features might also play a role. According to Naneva et al. (2020), there is a positive correlation between acceptance of robots and the personality traits of agreeableness, extroversion, and openness, while conscientiousness and neuroticism do not appear to have any significant impact (Esterwood et al., 2021).

      Apart from some human-related factors discussed above that could impact robot acceptance, many other factors that potentially influence human-robot interaction outcome concern the robot itself, including the purpose it is used for and its appearance. Whereas multiple studies demonstrated that users prefer human-like robots (Esposito et al., 2019; 2020), the systematic review by Naneva et al. (2020) could not find clear evidence for that. Here, we focus on some robot-related factors, in particular its voice, to motivate a novel research question, as will be reviewed in the next few paragraphs.

      1.2 Anthropomorphism in robot design and its impact on interaction

      People tend to ascribe human traits to non-human entities. There are two aspects to consider. Firstly, users attribute certain human behaviors to the robot by projecting their own expectations onto it. Secondly, individuals intentionally program the robot with human behaviors. Companies provide robots with a variety of physical appearances and voices that differ in gender, age, accent, and emotional expression, to cater to a wide range of needs and preferences of their users (Epley et al., 2007). An anthropomorphic robot design enables a more natural interaction with robots because people can rely on behaviors familiar from human-human interactions (Clodic et al., 2017). Moreover, a humanoid appearance results in more positive evaluation of the robot (Biermann et al., 2020).

      1.3 Robot’s voice in trust and competence evaluation

To have a productive interaction, humans need to have confidence in and trust a social robot (Marble et al., 2004). Trust can influence the success of human-robot collaboration and determine future robot use (Freedy et al., 2007). In human-human interactions, trust has been the subject of extensive research (Dunning and Fetchenhauer, 2011). Crucially, multiple studies have indicated that trust does not necessarily result from a logical evaluation of the probabilities of different outcomes and benefits involved in a given situation. Rather, it seems to stem from non-rational factors, such as feelings and emotions. Factors that contribute to trust are linked to attributes of the person, the circumstances, and their interplay (Evans and Krueger, 2009; for review see Thielmann and Hilbig, 2015). In particular, being part of the same group can heighten trust levels (Evans and Krueger, 2009).

      Trust in human-robot interaction is defined as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability” (Lee and See, 2004) or as “the reliance by an agent that actions prejudicial to their wellbeing will not be undertaken by influential others” (Hancock et al., 2011). These definitions imply that humans who trust a robot believe that it will not harm them or can be relied on in fulfilling tasks (Law and Scheutz, 2021).

      Although numerous factors can impact trust in artificial agents (as demonstrated by Schaefer et al., 2016; Hancock et al., 2011 in their respective meta-analyses; for systematic review see Rheu et al., 2021; Law and Scheutz, 2021), the voice of a robot is considered one of the most critical factors in determining trust specifically related to robots.

      In a questionnaire study conducted by Dautenhahn et al. (2005), most of the respondents expressed a desire for a robotic companion that can communicate in a way that is very similar to a human. Individuals also tended to get closer to a robot that had a human-like voice, in contrast to a robot with an artificially synthesized voice (Walters et al., 2008). Human-like voices were perceived as less uncanny and rated higher in terms of qualities such as sympathy, credibility, and trustworthiness (K. Kühne et al., 2020). Robots that had human-like voices were considered to be more efficient and were remembered more easily (Rodero, 2017). Finally, artificial agents with a human-like voice were perceived as more competent and credible (Sims et al., 2009; Fischer, 2021; Kim et al., 2022).

Competence is another attribute that is often intuitively assessed in everyday interactions (Kovarsky et al., 2013; Abele et al., 2021). The Behavioral Regulation Model defines competence as the likelihood of task achievement (Ellemers et al., 2013). Alongside warmth, competence underlies social evaluation and relies on such features as power, status, and resources (Rosenberg et al., 1968). In human-robot interaction, competence was one of the most important predictors of human preferences between different robot behaviors (Oliveira et al., 2019; Scheunemann et al., 2020). Also in evaluating competence, human-likeness in the robots’ appearance played a major role (Goetz et al., 2003; Kunold et al., 2023).

      It is important to note that there is a significant association between competence and trust (Hancock et al., 2011; Kraus et al., 2018; Steain et al., 2019; Christoforakos et al., 2021). Individuals have greater trust in a robot when they perceive it to be more competent.

      1.4 The uncanny valley phenomenon and its relation to a robot voice

      One caveat in robot design is that incorporating too much human-likeness may result in the uncanny valley phenomenon. As shown by Mori (1970), the level of robot acceptance drops and a sense of eeriness or discomfort arises, once a certain level of human-like visual resemblance has been reached. Although there is currently no evidence of an uncanny valley for robotic voices (K. Kühne et al., 2020), it is premature to completely dismiss or exclude this possibility.

      Assigning gender to a robot through appearance and voice can enhance its human-like qualities and influence its acceptance. For example, a female-sounding robot speaking in a higher tone received higher ratings for attractiveness and social competence (Niculescu et al., 2011; 2013). However, this effect can be influenced by the gender of the participants: Participants of the same gender as the robot’s given gender identify themselves more with the robot and feel closer to it (Eyssel et al., 2012). The process at work here is a tendency to favor those within one’s own group (in-group-bias; Tajfel and Forgas, 2000), which may extend to other facets of communication, such as a particular way of speaking or adopting regional language variations (Delia, 1975).

Another way to enhance the human-likeness of a robot’s voice is by incorporating an emotional tone or a particular dialect. Thus, robots with an emotional voice were found to be more likable (James et al., 2018). Researchers added a Scottish accent to Harmony, a customizable personal companion agent, in order to enhance her likability and charm (Coursey et al., 2019). Nevertheless, imparting a human dialect to a mechanical-looking robot bears a risk of creating an uncanny valley effect (Mitchell et al., 2011). Therefore, we briefly review what is known about this mechanism of influence.

      1.5 The impact of dialect-related social classifications and group identity

      Interestingly, dialect-related social classifications and the sense of being part of a group based on accent or dialect are more robust than those resulting from gender or ethnicity (Kinzler et al., 2010). A dialect or accent refers to how individuals from diverse regions or social groups articulate words and phrases, leading to differences in their accent and speech patterns. While dialects and accents are interconnected, they are not identical. Dialects encompass a wider range of linguistic aspects, including vocabulary, grammar, and sentence structure, whereas accents primarily involve differences in pronunciation (Sikorski, 2005; for more detailed information on the topic of accent and dialect, see Planchenault and Poljak, 2021).

      Evidence of the influence of dialect on the trust or competence of a robot is mixed. In general, according to the similarity-attraction theory, individuals tend to prefer artificial agents similar to themselves, for example, in terms of personality (Nass and Lee, 2000). However, similarity on a more superficial level, such as gender, was not found to predict trust (You and Robert, 2018).

In addition to identifying the speaker as a member of a particular geographical or national group, a dialect can also elicit favorable or unfavorable connotations and shape opinions about the speaker irrespective of the listener’s own group (H. Bishop et al., 2005). Listeners are sensitive to sociolinguistic information conveyed by a dialect or an accent. The standard language is typically viewed as prestigious and reliable, whereas regional accents tend to be regarded more unfavorably (H. Bishop et al., 2005; Tamagawa et al., 2011), so-called “accentism” (Foster and Stuart-Smith, 2023). However, certain languages may also have esteemed regional variations or dialects (H. Bishop et al., 2005).

      Prejudices against dialects and their speakers cannot be ignored, as evaluations of dialects are often associated with evaluations of the corresponding population (Wiese, 2012). A meta-analysis by Fuertes et al. (2012) revealed that a spoken dialect is perceived as a sign of lower intelligence and social class. According to Wiese (2012), individuals who do not use the standard language are often viewed as linguistically incompetent. Furthermore, Fuertes et al. (2012) found that a spoken dialect can lower the perception of competence in general.

There are conflicting findings regarding the effects of different dialects on the perception of robots. On the one hand, imparting the standard language to a robot was shown to increase its trustworthiness and competence (Torre and Maguer, 2020). As an example, only around 4% of Torre and Maguer’s (2020) participants wanted the robot to have the same accent as they had, whereas 37% preferred a robot speaking Standard Southern British English. Similar findings were obtained by Andrist et al. (2015): More native Arabic speakers complied with robots that spoke standard Arabic. For the dialect-speaking robot, compliance depended on other factors. Namely, robots speaking with both high knowledge and high rhetorical ability were complied with more. Another study found that a synthetic agent with an Austrian standard accent was perceived as possessing higher levels of education, trustworthiness, competence, politeness, and seriousness (Krenn et al., 2017).

On the other hand, robots speaking a dialect, in this case, Franconian, were rated as more competent (Lugrin et al., 2020). Unlike in Torre and Maguer (2020), the evaluation of competence depended on the participants’ own performance in the dialect. Those who spoke in dialect more frequently rated the dialect-speaking robot as more competent. In contrast to that, V. Kühne et al. (2013) found that participants liked a dialect-speaking robot more, irrespective of their own dialect performance. In the same vein, a robot was accepted more in Norwegian hospitals when it spoke the Trøndersk dialect (Søraa and Fostervold, 2021). This preference could have been impacted by the comfortable and pleasant connotation conveyed by the Trøndersk dialect. To reconcile these discrepancies, there is currently an ongoing project to develop an optimal language or accent for an artificial agent to speak (Foster and Stuart-Smith, 2023).

      In summary, standard language-speaking robots were perceived as more trustworthy or likable presumably due to the in-group bias and accentism, while according to other studies, participants preferred robots that spoke with a dialect. However, the preference for dialect-speaking robots was often influenced by human-related factors, namely, the participants’ proficiency or performance in that dialect (Lugrin et al., 2020).

Most of the research on the utilization of dialect in robots has been conducted in Anglo-Saxon countries (Früh and Gasser, 2018). As for German-speaking countries, V. Kühne et al. (2013) found that a Rhine-Ruhr dialect-speaking virtual robot was perceived as more likable. Another study by Früh and Gasser (2018) also reports more positive attitudes toward a dialect-speaking care robot Lio in Switzerland. Importantly, in Switzerland, a dialect serves strongly as a means of social demarcation. However, a more recent study with a service robot Pepper in a hotel context showed that using the local dialect did not affect robot acceptance and attitudes (Steinhaeusser et al., 2022). The study was conducted online and participants speaking the Franconian dialect vs. standard German were randomly assigned to the dialect or standard language conditions. While there was a non-significant tendency for individuals who spoke a dialect to have a more negative attitude toward a robot that used that same dialect, this could potentially be attributed to the use of Pepper’s text-to-speech plugin to synthesize the dialect and accent. People with a local accent may have been more likely to notice any mistakes or errors in the robot’s synthesized speech, which could in turn have influenced their attitudes towards it.

      To address the inconsistencies reviewed above, we conducted an online study among Berlin and Brandenburg residents in order to investigate the relationship between the participants’ proficiency and performance in the Berlin dialect and their trust in a robot, and the robot’s competence evaluation.

      1.6 The present study

      From 1500 onwards, the Berlin dialect emerged as a unique local language variety, replacing Low German in the region. The Berlin dialect is associated with the working class and often portrayed as a proletarian language by media figures who depict it as a dialect spoken by simple, but likable people. Additionally, the Berlin dialect is intentionally employed as a stylistic choice to establish a sense of closeness with a specific audience, as observed in its written representation in daily newspapers (Wiese, 2012). Specific features of the Berlin dialect can be found in Stickel (1997).

      Dialect proficiency means the self-evaluated ability to speak the dialect, whereas dialect performance denotes the frequency with which the participants speak the dialect. We formulated six hypotheses for our study. The first hypothesis was that the standard German-speaking robot would be trusted more and evaluated as more competent than the dialect-speaking robot (H1). The next two hypotheses posited that participants with (H2) higher dialect proficiency and (H3) higher dialect performance would trust the robot more than those with lower dialect proficiency and performance. The fourth and fifth hypotheses were that participants with (H4) higher dialect proficiency and (H5) higher dialect performance would evaluate the robot’s competence higher than those with lower dialect proficiency and performance. Finally, we expected that the robot’s competence would predict the trust ratings (H6). Hypotheses H2—H6 were tested independently for the dialect-speaking robot (H2a, H3a, H4a, H5a, H6a) and the standard German-speaking robot (H2b, H3b, H4b, H5b, H6b) as alternatives. We tested these formulated hypotheses in an online experiment with German-speaking participants using the NAO robot.

2 Materials and methods

2.1 Participants and procedure

      The experiment was programmed and run using the online Gorilla Experiment Builder research platform (Anwyl-Irvine et al., 2020) and lasted approximately 30 min. The participants were recruited via the subject pool system SONA at the University of Potsdam. All the participants submitted their informed consent at the beginning of the experiment by clicking the corresponding checkbox and were reimbursed with course credits for their participation. They were instructed to first watch a video and then answer the survey questions honestly and spontaneously. The type of the video (Berlin dialect or Standard German) was counterbalanced between participants. After the survey, the participants were asked to fill in a demographic questionnaire, including questions about their age, gender, native language, dialect proficiency, dialect performance, and duration of residence in Berlin. Finally, participants were debriefed and given a link to enter their internal subject pool ID for receiving a credit.

      The study was conducted in accordance with the guidelines laid down in the Declaration of Helsinki and in compliance with the ethics policy of the University of Potsdam. No explicit approval was needed because the methods were standard. There were no known risks and participants gave their informed consent. The study and the procedure were already evaluated by professional psychologists to be consistent with the ethical standards of the German Research Foundation, including written informed consent and confidentiality of data as well as personal conduct.

      An a priori power analysis was conducted using G*Power (Faul et al., 2007) to determine the minimum sample size required to test the study hypothesis. Results indicated the required sample size to achieve 80% power for detecting a medium effect, at a significance criterion of α = .05, was N = 68 per robot group for linear regression with two predictors (N = 136 in total).
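For transparency, this calculation can be reproduced outside G*Power. The R sketch below (not the authors’ script; the package choice and the conventional value f² = 0.15 for a medium effect are assumptions) yields approximately the same required sample size per group:

```r
# Hedged sketch: a priori power analysis for multiple regression with two
# predictors, medium effect (f^2 = 0.15), alpha = .05, power = .80.
library(pwr)

res <- pwr.f2.test(u = 2,          # numerator df = number of predictors
                   f2 = 0.15,      # conventional "medium" effect size
                   sig.level = 0.05,
                   power = 0.80)

# Total N = denominator df + number of predictors + 1; rounds up to about 68
ceiling(res$v) + 2 + 1
```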

      2.2 Stimuli materials

      We used a video lasting 31 s, showcasing the humanoid robot NAO (Aldebaran—SAS) 1 . In the video, the robot was positioned on a table and was in motion while providing details about a painting situated in the top right portion of the wall. The painting was pixelated to avoid copyright infringement. A snapshot from the video is depicted in Figure 1.

Figure 1. Screenshot of the Video Footage used. Note: The artwork was pixelated in the videos to protect copyright. It is the painting Girl with a Mandolin by Pablo Picasso (1910).

      The robot in the video used a male human voice to speak. The speech was recorded twice by the same speaker—once in standard German and once in the Berlin dialect. The transcription can be found in Supplementary Materials.

We opted to use a human voice based on earlier studies, which indicated that people prefer less robotic-sounding voices as they feel more at ease while listening to them (Dong et al., 2020; K. Kühne et al., 2020). Natural human voices are generally perceived as more trustworthy and competent compared to synthetic voices (Craig and Schroeder, 2017; Kühne et al., 2020; Sims et al., 2009). Moreover, listening to a synthetic voice can increase one’s cognitive load (Francis and Nusbaum, 2009; Simantiraki et al., 2018) which, in turn, can lead to trust misplacement (Duffy and Smith, 2014).

      We selected a male voice because research suggests that NAO is more commonly associated with a male voice (Behrens et al., 2018). The stimuli can be found at: https://osf.io/pfqg6/.

2.3 Measures

2.3.1 Independent variables

2.3.1.1 Demographic factors

      The following demographic factors were measured: age, gender, native language, and duration of residence in Berlin (in years).

      2.3.1.2 Dialect proficiency

      The dialect proficiency was measured using a single item: “How well can you speak the Berlin dialect?”. The answers were given on a seven-point Likert scale from 1 (Not at all) to 7 (Very well).

      2.3.1.3 Dialect performance

      The dialect performance was measured using a single item: “In everyday life, I usually speak the Berlin dialect”. The answers were given on a seven-point Likert scale from 1 (Does not apply at all) to 7 (Applies totally).

      2.3.1.4 Device type

      Device type was automatically measured by the experiment system as “mobile”, “tablet”, or “computer”.

2.3.2 Dependent variables

2.3.2.1 Trust

We used the Scale of Trust in Automated Systems (Jian et al., 2000) to assess the level of trust participants had toward the robot featured in the video. The scale consists of 12 items, measured on a seven-point Likert scale from 1 (Do not agree at all) to 7 (Fully agree), and was specifically designed to measure trust towards automated systems, such as robots. To suit the study’s German setting, the items were translated into German, and the word “system” in each item was replaced with “robot” to better relate to the robot shown in the video. Sample items were: “I can trust the robot” („Ich kann dem Roboter vertrauen”); “The robot is dependable” (“Der Roboter ist verlässlich”). Supplementary Table S1 displays the original items and their corresponding German translations. Additionally, an extra attention-testing item was added to the scale, which instructed participants to choose response option 7 (Fully agree) as their response.

      2.3.2.2 Competence

      We used the Robotic Social Attribute Scale (RoSAS) (Carpinella et al., 2017) to measure the competence evaluation of the featured robot. The scale consists of 6 items, measured on a seven-point Likert scale from 1 (Do not agree at all) to 7 (Fully agree). Sample items were: “The robot is interactive” (“Der Roboter ist interaktiv”); “The robot is knowledgeable” (“Der Roboter ist sachkundig”). Supplementary Table S2 displays the original items and their corresponding German translations. Additionally, an extra attention-testing item was added to the scale, which instructed participants to choose the response option 1 (“Do not agree at all”) as their response.
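To illustrate how such scale scores and attention checks can be handled, the following R sketch uses hypothetical item-level column names (the published analysis was carried out in Excel and SPSS):

```r
# Hedged sketch (hypothetical file and column names): computing mean scale
# scores and applying the attention checks described for both scales.
dat <- read.csv("trust_competence_items.csv")

dat$trust <- rowMeans(dat[, paste0("trust_", 1:12)])      # 12 trust items
dat$competence <- rowMeans(dat[, paste0("comp_", 1:6)])   # 6 competence items

# Attention checks: option 7 required on the trust scale, option 1 on the
# competence scale; a failed check invalidates only the respective scale score
dat$trust[dat$trust_attention != 7] <- NA
dat$competence[dat$comp_attention != 1] <- NA
```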

      2.4 Sample and data pre-processing

One hundred and thirty-seven participants (94 females, 41 males, 2 non-binary; Mean age = 33 years, SD = 14 years) took part in the experiment. Eight participants were excluded from the analysis because their native language was not German. Nine participants were further excluded from the analysis because they failed the attention test items in both scales. This yielded a final sample size of N = 120 (Mean age = 32 years, SD = 12 years; 81 female, 38 male, 1 non-binary). Additionally, data from the trust items of two participants and data from the competence items of three participants were excluded because they failed the attention test items in the respective scale. The remaining data of these five participants were still used.

Data preparation and analyses were done using Microsoft® Excel® for Microsoft 365 and the SPSS v.29 software package. Figures were built in R (R Core Team, 2020). The normality of the data distribution was confirmed using a Kolmogorov-Smirnov test. Before conducting the multiple regression analysis, the distributional assumptions for the multiple regression were assessed 2 . The regression analysis treated the gender category of “non-binary” as missing data.
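As an illustration, the R sketch below shows how the normality test and the regression diagnostics reported in footnote 2 could be run; the data file and column names are hypothetical, and the original checks were performed in SPSS:

```r
# Hedged sketch of the pre-analysis checks (hypothetical file and column names).
library(car)   # provides vif() and durbinWatsonTest()

dat <- read.csv("trust_competence.csv")

# Kolmogorov-Smirnov test of standardized trust scores against a normal distribution
ks.test(as.numeric(scale(dat$trust)), "pnorm")

# Step-2 regression model with all control variables (enter method)
m2 <- lm(trust ~ proficiency + age + gender + duration + device, data = dat)

vif(m2)               # multicollinearity: values should stay well below 10
durbinWatsonTest(m2)  # autocorrelation: statistic roughly between 1.5 and 2.5
plot(m2, which = 2)   # residual normality (Q-Q plot; the SPSS check used P-P plots)
```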

3 Analysis and results

3.1 Trust

      First, we employed a two-tailed independent samples t-test to examine the level of trust between the dialect-speaking robot and the standard German-speaking robot in all participants. Even though there was a minor trend in favor of trusting the standard German-speaking robot more (M = 4.716, SD = 1.259) than the dialect-speaking one (M = 4.591, SD = 1.056), this difference was not statistically significant (t (116) = −0.583, p = .561). Thus, we failed to confirm H1a. Participants did not trust the standard German-speaking robot significantly more than the dialect-speaking robot.
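A minimal R sketch of this comparison (column names assumed, not taken from the published analysis) is shown below; the pooled-variance form matches the reported 116 degrees of freedom (118 usable trust ratings across the two groups):

```r
# Hedged sketch: two-tailed independent-samples t-test on mean trust ratings,
# assuming a data frame with columns 'trust' and 'condition'.
dat <- read.csv("trust_competence.csv")   # hypothetical file
t.test(trust ~ condition, data = dat, var.equal = TRUE)
```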

      To examine if participants with higher dialect proficiency would trust the dialect-speaking robot more than those with lower dialect proficiency, we conducted a multiple regression analysis, using the enter method. In the first step, we added only dialect proficiency as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. In line with the H2a hypothesis, only dialect proficiency explained a significant amount of the variance in the value of trust in the dialect-speaking robot (β = .272, t (60) = 2.189, p < .05, F (1, 60) = 4.792, R 2 = .074, R 2 Adjusted = .059). The dialect-speaking robot was more trusted by participants who were more proficient in the Berlin dialect.
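The two-step “enter” procedure can be sketched in R as two nested linear models (again with hypothetical column names; the original analysis used SPSS):

```r
# Hedged sketch: hierarchical regression for the dialect-speaking condition.
dat <- read.csv("trust_competence.csv")            # hypothetical file
dialect_dat <- subset(dat, condition == "dialect")

m1 <- lm(trust ~ proficiency, data = dialect_dat)  # step 1: proficiency only
m2 <- lm(trust ~ proficiency + age + gender + duration + device,
         data = dialect_dat)                       # step 2: plus controls

summary(m1)
summary(m2)
anova(m1, m2)   # does adding the control variables improve the model?
```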

We conducted another multiple regression analysis to see if participants with higher dialect performance would trust the dialect-speaking robot more than those with lower dialect performance. Again, in the first step, we added only dialect performance as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. Contrary to the H3a hypothesis, dialect performance was not a significant predictor of trust in the dialect-speaking robot (β = .208, t (60) = 1.646, p = .105, F (1, 60) = 2.711, R 2 = .043, R 2 Adjusted = .027). None of the control variables contributed to the variance of trust either.

      In summary, for the dialect-speaking robot, only dialect proficiency was a significant predictor of trust. We confirmed H2a and failed to confirm H3a.

      Further, we conducted a multiple regression analysis to test if participants with higher dialect proficiency would trust the standard German-speaking robot more than those with lower dialect proficiency. Again, using the enter method, in the first step, we added only dialect proficiency as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type.

      Contrary to the H2b hypothesis, dialect proficiency did not explain the value of trust in the standard-speaking robot (β = .086, t (53) = 0.628, p = .533, F (1, 53) = 0.394, R 2 = .007, R 2 Adjusted = −.011). However, age, gender, duration of residence in Berlin, and device type were significant predictors of trust. The standard German-speaking robot was more trusted by individuals who were older, female, had a shorter duration of residence in Berlin, and used a computer device for watching the experimental videos.

      Finally, we conducted another multiple regression to examine if participants with higher dialect performance would trust the standard German-speaking robot more than those with lower dialect performance. In the first step, we added only dialect performance as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. Contrary to the H3b hypothesis, dialect performance was not a significant predictor of trust in the standard-speaking robot (β = .043, t (53) = 0.312, p = .757, F (1, 53) = 0.097, R 2 = .002, R 2 Adjusted = −.017).

In summary, for the standard German-speaking robot, age, gender, duration of residence in Berlin, and device type were significant predictors of trust, but only when entered into the model together with dialect proficiency. We found no evidence for H2b and H3b.

      The results are summarized in Table 1 and Table 2.

      Results of the Regression Analysis on the Outcome Variable Trust with Dialect Proficiency as Predictor.

      Dialect-speaking robot Standard German-speaking robot
      Model β SE t p β SE t p
      1 Constant 0.227 18.401 <.001 0.315 14.457 <.001
      Proficiency .272 0.061 2.189 < .05 .086 0.079 0.628 .533
      R 2 .074 .007
      R 2 Adjusted .059 −.011
      p <.05 .533
      2 Constant 0.532 7.913 <.001 0.561 7.626 <.001
      Proficiency .345 0.094 1.824 .074 .308 0.101 1.768 .083
      Age .174 0.013 1.210 .231 .426 0.015 2.916 <.05
      Gender −.144 0.306 −1.125 .265 −.319 0.325 −2.528 <.05
      Duration −.179 0.078 −0.917 .363 −.471 0.091 −2.537 <.05
      Device .082 0.266 0.645 .522 .428 0.316 3.412 <.001
      R 2 .119 .325
      R 2 Adjusted .040 .356
      p .200 <.001

      Note: Dialect-speaking robot N = 63. Standard-speaking robot N = 57.

      Method: enter. Significant results are marked in bold.

      Results of the Regression Analysis on the Outcome Variable Trust with Dialect Performance as Predictor.

      Dialect-speaking robot Standard German-speaking robot
      Model β SE t p β SE t p
      1 Constant 0.206 21.069 <.001 0.272 17.127 <.001
      Performance .208 0.090 1.646 .105 .043 0.095 0.312 .757
      R 2 .043 .002
      R 2 Adjusted .027 −.017
      p .105 .757
2 Constant 0.544 7.742 <.001 0.563 7.613 <.001
      Performance .142 0.108 0.933 .355 .259 0.107 1.683 .099
      Age .174 0.014 1.170 .247 .430 0.016 2.924 <.05
      Gender −.129 0.313 −0.984 .329 −.313 0.324 −2.480 <.05
      Duration .004 0.064 0.025 .980 −.420 0.084 −2.462 <.05
      Device .074 0.271 0.568 .572 .475 0.311 3.844 <.001
      R 2 .081 .321
      R 2 Adjusted −.001 .252
      p .434 <.05

      Note: Dialect-speaking robot N = 63. Standard-speaking robot N = 57.

      Significant results are marked in bold.

      Figure 2 presents a visual summary of the outcomes obtained from regression analyses that assessed how dialect proficiency predicted trust in both the standard German-speaking and dialect-speaking robot.

Figure 2. Regression Analysis for Dialect Proficiency as a Predictor of Trust in the Standard German-speaking and the Dialect-speaking Robot. Note: The orange solid line represents the regression slope for the dialect-speaking robot. The dark blue long dashed line represents the regression slope for the standard German-speaking robot.

      3.2 Competence

Again, we used a two-tailed independent samples t-test to examine the level of competence between the dialect-speaking robot and the standard German-speaking robot in all participants. The findings were similar to those for the evaluation of trust. While there was a descriptive tendency to rate the standard German-speaking robot as more competent (M = 3.831, SD = 0.947) than the dialect-speaking robot (M = 3.777, SD = 0.999), the difference was not statistically significant (t (115) = −0.303, p = .763). Thus, we failed to confirm H1b. Participants did not evaluate the standard German-speaking robot as significantly more competent than the dialect-speaking robot.

      To examine if participants with higher dialect proficiency would evaluate the dialect-speaking robot as more competent than those with lower dialect proficiency, we again conducted a multiple regression using the enter method. In the first step, we added only dialect proficiency as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. Contrary to the H4a hypothesis, dialect proficiency was not a significant predictor of competence in the dialect-speaking robot (β = .047, t (60) = 0.363, p = .718, F (1, 60) = 0.131, R 2 = .002, R 2 Adjusted = −.014).

      To examine if participants with higher dialect performance would evaluate the dialect-speaking robot as more competent than those with lower dialect performance, we again conducted a multiple regression using the enter method. In the first step, we added only dialect performance as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. Again, counter to the H5a hypothesis, dialect performance was not a significant predictor of competence in the dialect-speaking robot (β = −.002, t (60) = −0.019, p = .985, F (1, 60) = 0.000, R 2 = .000, R 2 Adjusted = −.017).

None of the control variables contributed to the variance of competence.

In summary, for the dialect-speaking robot, neither dialect proficiency, dialect performance, nor any control variable was a significant predictor of competence. We found no evidence for H4a and H5a.

Further, to examine if participants with higher dialect proficiency would evaluate the standard German-speaking robot as more competent than those with lower dialect proficiency, we conducted a multiple regression using the enter method. In the first step, we added only dialect proficiency as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type. Contrary to the H4b hypothesis, dialect proficiency alone was not a significant predictor of competence in the standard-speaking robot (β = .086, t (52) = 0.623, p = .536, F (1, 52) = 0.389, R 2 = .007, R 2 Adjusted = −.012). However, when controlled for age, gender, duration of residence in Berlin, and device type, it did explain a reliable amount of variance in the value of competence, together with duration of residence in Berlin (β = .695, t (48) = 3.463, and β = −.824, t (48) = −3.735, respectively, p < .001, F (5, 48) = 4.634, R 2 = .326, R 2 Adjusted = .255). Age, gender, and device type did not contribute to the final model.

      To see if participants with higher dialect performance would evaluate the standard German-speaking robot as more competent than those with lower dialect performance, we conducted a multiple regression using the enter method. In the first step, we added only dialect performance as predictor. In the second step, we added control variables: age, gender, duration of residence in Berlin, and device type.

Contrary to the hypothesis H5b, dialect performance alone was not a significant predictor of competence in the standard German-speaking robot (β = .051, t (52) = 0.365, p = .717, F (1, 52) = 0.133, R 2 = .003, R 2 Adjusted = −.017). However, when controlled for age, gender, duration of residence in Berlin, and device type, it did explain a reliable amount of variance in the value of competence, together with duration of residence in Berlin and device type (β = .410, t (48) = 2.433; β = −.529, t (48) = −2.768; and β = .281, t (48) = 2.188, respectively, p < .05, F (5, 48) = 3.193, R 2 = .250, R 2 Adjusted = .171). Age and gender did not contribute to the final model.

      In summary, for the standard German-speaking robot both dialect proficiency and dialect performance were significant predictors of competence, but only when controlled for age, gender, duration of residence in Berlin, and device type. Hypotheses H4b and H5b could be partially confirmed. Duration of residence in Berlin and device type were also reliable predictors of competence for the standard German-speaking robot.

      The results are summarized in Table 3 and Table 4.

      Results of the Regression Analysis on the Outcome Variable Competence with Dialect Proficiency as Predictor.

      Dialect-speaking robot Standard German-speaking robot
      Model β SE t p β SE t p
      1 Constant 0.226 16.413 <.001 0.234 15.792 <.001
      Proficiency .047 0.061 0.363 .718 .086 0.059 0.623 .536
      R 2 .002 .007
      R 2 Adjusted −.014 −.012
      p .718 .536
      2 Constant 0.502 9.231 <.001 0.421 9.798 <.001
      Proficiency .293 0.087 1.592 .117 .695 0.086 3.463 <.001
      Age −.216 0.012 −1.489 .142 .125 0.012 0.807 .423
      Gender −.152 0.289 −1.191 .239 −.199 0.250 −1.529 .133
      Duration −.222 0.073 −1.131 .263 −.824 0.083 −3.735 <.001
      Device .079 0.253 0.623 .536 .192 0.233 1.562 .125
      R 2 .121 .326
      R 2 Adjusted .042 .255
      p .193 <.05

      Note: Dialect-speaking robot N = 63. Standard-speaking robot N = 57.

      Significant results are marked in bold.

      Results of the Regression Analysis on the Outcome Variable Competence with Dialect Performance as Predictor.

      Dialect-speaking robot Standard German-speaking robot
      Model β SE t p β SE t p
      1 Constant 0.197 19.167 <.001 0.206 18.243 <.001
      Performance −.002 0.082 −0.019 .985 .051 0.072 0.365 .717
      R 2 .000 .003
      R 2 Adjusted −.017 −.017
      p .985 .717
      2 Constant 0.509 9.190 <.001 0.444 9.389 <.001
      Performance .139 0.099 0.890 .377 .410 0.087 2.433 <.05
      Age −.246 0.012 −1.619 .111 .005 0.012 0.029 .977
      Gender −.132 0.294 −1.017 .314 −.122 0.256 −0.915 .365
      Duration −.068 0.060 −0.423 .674 −.529 0.072 −2.768 <.05
      Device .079 0.257 0.614 .542 .281 0.244 2.188 <.05
      R 2 .094 .250
      R 2 Adjusted .013 .171
      p .341 <.05

      Note: Dialect-speaking robot N = 63. Standard-speaking robot N = 57.

      Method: enter. Significant results are marked in bold.

      Figure 3 presents a visual summary of the outcomes obtained from regression analyses that assessed how dialect proficiency predicted competence in both the standard German-speaking and dialect-speaking robot.

Figure 3. Regression Analysis for Dialect Proficiency as a Predictor of Competence in the Standard German-speaking and the Dialect-speaking Robot. Note: The orange solid line represents the regression slope for the dialect-speaking robot. The dark blue long dashed line represents the regression slope for the standard German-speaking robot.

      3.3 Association between robot’s competence and trust

      Lastly, we sought to determine if the evaluation of a robot’s competence could predict the degree of trust that was placed in the robot. Indeed, for both the dialect-speaking robot (β = .631, t (59) = 6.249, F (1, 59) = 39.049, p < .001, R 2 = .398, R 2 Adjusted = .388) and the standard German-speaking robot (β = .646, t (52) = 6.096, F (1, 52) = 37.164, p < .001, R 2 = .417, R 2 Adjusted = .406), competence was a significant predictor of trust. Both H6a and H6b could be confirmed. Figure 4 presents a visual representation of the outcomes of the regression analyses.
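A minimal sketch of this final analysis, fitted separately for each condition (hypothetical column names), could look as follows:

```r
# Hedged sketch: simple regression of trust on perceived competence,
# run separately for the dialect and standard German conditions.
dat <- read.csv("trust_competence.csv")   # hypothetical file
by(dat, dat$condition,
   function(d) summary(lm(trust ~ competence, data = d)))
```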

Figure 4. Regression Analysis of Competence as a Predictor of Trust. Note: The orange solid line represents the regression slope for the dialect-speaking robot. The dark blue long dashed line represents the regression slope for the standard German-speaking robot.

      The data set and the analysis script can be found at: https://osf.io/pfqg6/.

4 Discussion

4.1 Proficiency and performance in the Berlin dialect and evaluation of competence and trust

Our study investigated verbal aspects of human-robot interaction quality. Specifically, we examined the association between participants’ proficiency and performance in the Berlin dialect and their evaluation of competence and trust in a NAO robot that spoke either with or without this dialect. The study was conducted online, and dialect proficiency was defined as the self-evaluated ability to speak the Berlin dialect, while dialect performance referred to the frequency of dialect use by the participants.

In general, although the differences in trust and competence ratings were not significant, our findings tend to be consistent with previous studies conducted by Torre and Maguer (2020) and Andrist et al. (2015), which also found that people preferred a robot that speaks the standard language. This is in line with the overall research suggesting that individuals who speak the standard language are perceived as more competent (Fuertes et al., 2012). However, our findings contradict the results of V. Kühne et al. (2013) and Früh and Gasser (2018), where a robot speaking in dialect was viewed more positively. It is relevant that their experiments were conducted in Switzerland, where the local dialect plays a crucial role in distinguishing insiders from outsiders. Further, similar to Lugrin et al. (2020), we demonstrated that participants’ ratings of the robot’s trustworthiness and competence were influenced by their own proficiency in the dialect, but our study provided more nuanced results.

      Importantly, as expected, the competence of the robot significantly predicted trust. Namely, the more competent the robot was rated by the participants, the more they trusted it. This is in line with previous research (Hancock et al., 2011; Kraus et al., 2018; Steain et al., 2019; Christoforakos et al., 2021). Competence is perceived as an ability to carry out behavioral intentions (Kulms and Kopp, 2018). Being a positive quality, it creates a more favorable impression of the trustee. As a major dimension of social cognition postulated by the Stereotype Content Model, competence has been observed to foster the establishment of trust in interactions between humans (Fiske et al., 2007). Also according to another model, competence and benevolence of the trustee are positively related to trust (Mayer et al., 1995). Thus, we report evidence indicating that social mechanisms observed in human-human interactions can be transferred to human-robot interactions.

      In the following paragraphs we will discuss the findings in detail. In the first place, although there was a slight trend of higher trust and competence evaluation for the standard German-speaking compared to the dialect-speaking robot for all participants, the difference was not statistically significant. The standard German-speaking robot and the dialect-speaking robot received largely comparable ratings in terms of both competence and trustworthiness.

      Nevertheless, there were systematic differences in ratings between the two robots. Consider first the ratings obtained for the dialect-speaking robot. For the dialect-speaking robot, only dialect proficiency was a significant predictor of trust, with individuals who considered themselves more proficient in speaking the Berlin dialect having higher levels of trust. The other predictors (dialect performance, age, gender, duration of residence, and device type) did not have a significant contribution to the final statistical model of the ratings on trust. Our analysis for the outcome variable competence showed no significant predictors. Dialect proficiency, dialect performance, age, gender, duration of residence, and device type did not significantly contribute to the final model of participants’ rating. Thus, for the dialect-speaking robot, only one reliable association was found, namely, that between dialect proficiency and the trust in robots. The more proficient the participants were in the Berlin dialect, the more they trusted the dialect-speaking NAO, exactly in the sense of the similarity-attraction theory (Nass and Lee, 2000). None of the factors were found to be predictive of the level of robot’s competence.

      For the standard German-speaking robot, the findings were more complex. We found that the final model included age, gender, duration of residence, and device type as significant predictors of trust, but only when included into the model together with dialect proficiency. Individuals who were older, female, had a shorter duration of residence in Berlin, and used a computer device for watching the experimental videos were found to trust the standard German-speaking robot more. Dialect performance did not make a significant contribution to the model.

      Finally, dialect proficiency, dialect performance, duration of residence, and device type were significant predictors of competence, indicating that those who were more proficient in speaking the Berlin dialect, spoke it more often, had a shorter duration of residence in Berlin, and used a computer device for watching the experimental videos found the standard German-speaking robot more competent.

For the standard German-speaking robot, general factors such as age and gender appeared to be predictive of the trust level, while the participants’ dialect proficiency and performance only played a role in the evaluation of competence. This finding corroborates earlier research reporting the importance of demographic factors for robot perception (Naneva et al., 2020). Similar to results obtained by K. Kühne et al. (2020), female participants evaluated the robot as more trustworthy. In comparison to that research, however, we found that, as participants’ age increased, their trust in the standard German-speaking robot also increased. In conclusion, again following the principles of the similarity-attraction theory (Nass and Lee, 2000), participants who had been living in Berlin for a shorter period, and who were presumably less influenced by the Berlin dialect, were more likely to trust the robot that spoke standard German and to find it more competent.

It is noteworthy that it was not dialect performance, a relatively objective and quantitative measure of dialect usage, but dialect proficiency, a subjective and qualitative evaluation of one’s dialect mastery, that predicted the robot’s perceived trustworthiness. The ability to speak a dialect can be integral to one’s self-image and contribute to the identification of oneself with a particular group or set of qualities. According to recent research, it is so-called self-essentialist reasoning, that is, beliefs about the essence of one’s self, that underlies the similarity-attraction effect (Chu et al., 2019). This reasoning focuses more on what one is and not on what one does; it is a personal characteristic that tends to be stable rather than situational or temporary in nature.

      On a side note, participants who watched the video on a PC rated the standard German-speaking robot as more trustworthy and more competent, compared to participants working on a tablet or a mobile phone. This result indicates that, when examining human-robot interaction through video or audio stimuli, it is important to consider and control for the experimental device used. Possible reasons for the observed difference include different testing situations, such as doing the experiment at home on a PC or “on the go” on a mobile phone, which could have resulted in different distractions and response criteria, or differences in information processing on different screens (cf. Sweeney and Crestani, 2006; Wickens and Carswell, 2021). These factors could have potentially led to increased cognitive load on smaller screens and, consequently, to trust misplacement (Duffy and Smith, 2014).

      4.2 Limitations of the study

It is worth noting that various intervening factors could have influenced our study. First, choosing a male voice might have affected the overall outcomes. Unlike in human-human interactions (Bonein and Serra, 2009; Slonim and Guillen, 2010), prior studies have shown that virtual assistants or robots with a male voice are generally viewed as more competent (Powers and Kiesler, 2006; Ernst and Herm-Stapelberg, 2020) and trustworthy (Behrens et al., 2018), although these ratings can be context-dependent (Andrist et al., 2015; Kraus et al., 2018). On the contrary, other recent research indicates that a female voice agent may be viewed as more likable, competent, or intelligent (Vega et al., 2019; Dong et al., 2020).

Second, due to social identification, people tend to rate voices of the same gender as more trustworthy (Crowelly et al., 2009) and perceive more psychological closeness to them (Eyssel et al., 2012). However, our research did not find evidence for this when using male voice stimuli exclusively. To resolve these contradictory results, more studies utilizing both male and female voices are necessary.

Third, dialects carry distinct connotations within German-speaking countries (cf. H. Bishop et al., 2005). For instance, the Berlin dialect is often associated with a lower socioeconomic class or working class (Stickel, 1997), whereas the Bavarian dialect is often viewed as more prestigious. It is even mandatory for politicians to speak the local dialect in Bavaria. In particular, the Bavarian dialect of Germany holds a significant and independent position within the conceptual framework of languages (Adler, 2019). A survey revealed that the Bavarian dialect is considered the second most appealing German dialect (29.6%), after the Northern German dialect (34.9%), while only about 7% found the Berlin dialect attractive (Gärtig et al., 2010; Adler and Plewnia, 2018). At the same time, a mere 5% of respondents found the Berlin dialect unappealing, whereas having no dialect at all was rated as unattractive by 32.6% of the participants (Gärtig et al., 2010). Thus, to obtain a more nuanced understanding, it would be beneficial to conduct a comparative study involving multiple dialects as well as add an assessment of subjective dialect connotations. Moreover, as dialects are a means of positive identification within a group and signify a sense of attachment to a particular region (Wiese, 2012), varying levels of identification may exist among different dialects. This can affect the degree of perceived similarity and subsequently influence assessments of trustworthiness and competence.

      Fourth, our study employed a video featuring NAO, a compact and intelligent-looking social robot. It remains uncertain if its appearance aligns with all the connotations linked to the Berlin dialect. Humans may link voices with robots, and a mismatch in this connection could result in diverse outcomes in their interaction (McGinn and Torre, 2019).

      Finally, we consider the limitations of our methodology for data collection and data analysis. With regard to data collection, it will be important to provide converging evidence for this internet-based study by conducting both laboratory-based and real-life research in future projects. With regard to data analysis, more advanced modeling techniques, like linear mixed modeling, can offer greater flexibility compared to stepwise regression and can usefully be employed to uncover additional effects in our data, including further variability driven by participant characteristics.

      Also, the topic of communication can influence the assessment of a robot that speaks a particular dialect. Using standard German would likely be more suitable for discussing a painting, while a dialect such as the Berlin dialect could be more appropriate for conversations about everyday events or work-related topics (topic-based shifting) (Walker, 2019).

      An overall point for future investigations is that certain scholars view trust as a construct that has multiple dimensions. For example, Law and Scheutz (2021) differentiate between performance-based trust and relation-based trust. Future research on trust should take into account these different aspects and explore their implications in various contexts. Finally, objective measures of trust, for example, following a robot’s advice or task delegation should be used to better operationalize the outcome (Law and Scheutz, 2021).

      Overall, our study provides valuable insights into how language proficiency and other demographic factors influence human-robot interaction and robot perception. Our results can inform the development of more effective robots that are tailored to meet the needs and expectations of diverse user groups. Further research is needed to explore the role of gender, age, and dialect in human-robot interaction and perception, and to identify additional factors that may influence trust and competence evaluation.

      Data availability statement

      The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://osf.io/pfqg6/.

      Ethics statement

      The study was conducted in accordance with the guidelines laid down in the Declaration of Helsinki and in compliance with the ethics policy of the University of Potsdam. No explicit approval was needed because the methods were standard. There were no known risks and participants gave their informed consent. The study and the procedure were already evaluated by professional psychologists to be consistent with the ethical standards of the German Research Foundation, including written informed consent and confidentiality of data as well as personal conduct.

      Author contributions

      KK and EH contributed to the conception and design of the study. EH conceived the stimuli, programmed the survey, and conducted the study. KK and EH performed the analysis. KK wrote the first draft of the manuscript. KK, MF, OB, and YZ wrote, discussed, and revised several drafts before approving the final version. All authors contributed to the article and approved the submitted version.

      Funding

      The author(s) declare financial support was received for the research, authorship, and/or publication of this article. Funded by the German Research Foundation (DFG) - Project number 491466077.

Acknowledgments

We would like to thank Tristan Kornher for creating and generously providing his video footage and Alexander Schank for providing his voice for the corresponding material.

      Conflict of interest

      The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      Publisher’s note

      All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

      Supplementary material

      The Supplementary Material for this article can be found online at: /articles/10.3389/frobt.2023.1241519/full#supplementary-material

      The video material used was made and provided by Tristan Kornher, student at the University of Potsdam.

Multicollinearity was assessed and ruled out on the basis of VIF values ranging from 1.023 to 3.461 (substantially below the threshold of 10). Autocorrelation was absent, as indicated by Durbin-Watson statistics between 1.780 and 2.400 (within the acceptable range of 1.5–2.5). Normality of residuals was checked via P-P plots of standardized residuals.
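For readers who want to reproduce such diagnostics, the sketch below shows one way to compute them in R with the car package; the model formula and the data frame ratings are hypothetical placeholders rather than the original analysis code.

    # Minimal sketch of regression diagnostics for a fitted linear model.
    library(car)  # provides vif() and durbinWatsonTest()

    fit <- lm(trustworthiness ~ dialect_proficiency + age + gender + residency + device,
              data = ratings)  # hypothetical predictors and data frame

    vif(fit)               # variance inflation factors; values well below 10 argue against multicollinearity
    durbinWatsonTest(fit)  # Durbin-Watson statistic; roughly 1.5-2.5 suggests no autocorrelation

    # P-P plot of standardized residuals to check normality
    std_res <- sort(rstandard(fit))
    plot(pnorm(std_res), ppoints(length(std_res)),
         xlab = "Theoretical cumulative probability",
         ylab = "Empirical cumulative probability",
         main = "P-P plot of standardized residuals")
    abline(0, 1)  # reference line for perfect normality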

References

Abele A. E., Ellemers N., Fiske S. T., Koch A., Yzerbyt V. (2021). Navigating the social world: toward an integrated framework for evaluating self, individuals, and groups. Psychol. Rev. 128 (2), 290–314. 10.1037/rev0000262
Adler A. (2019). Language discrimination in Germany: when evaluation influences objective counting. J. Lang. Discrimination 3 (2), 232–253. 10.1558/jld.39952
Adler A., Plewnia A. (2018). "3. Möglichkeiten und Grenzen der quantitativen Spracheinstellungsforschung," in Variation – Normen – Identitäten. Editors Lenz A. N., Plewnia A. (Berlin, Boston: De Gruyter), 63–98. 10.1515/9783110538625-004
Andrist S., Ziadee M., Boukaram H., Mutlu B., Sakr M. (2015). Effects of culture on the credibility of robot speech: a comparison between English and Arabic. Proc. Tenth Annu. ACM/IEEE Int. Conf. Human-Robot Interact., 157–164. 10.1145/2696454.2696464
Anwyl-Irvine A. L., Massonnié J., Flitton A., Kirkham N., Evershed J. K. (2020). Gorilla in our midst: an online behavioral experiment builder. Behav. Res. Methods 52 (1), 388–407. 10.3758/s13428-019-01237-x
Bartneck C., Forlizzi J. (2004). "A design-centred framework for social human-robot interaction," in RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759), 591–594. 10.1109/ROMAN.2004.1374827
Behrens S. I., Egsvang A. K. K., Hansen M., Møllegård-Schroll A. M. (2018). "Gendered robot voices and their influence on trust," in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 63–64. 10.1145/3173386.3177009
Belpaeme T., Kennedy J., Ramachandran A., Scassellati B., Tanaka F. (2018). Social robots for education: a review. Sci. Robotics 3 (21), eaat5954. 10.1126/scirobotics.aat5954
Bendel O. (Editor) (2021). Soziale Roboter: technikwissenschaftliche, wirtschaftswissenschaftliche, philosophische, psychologische und soziologische Grundlagen (Springer Fachmedien Wiesbaden). 10.1007/978-3-658-31114-8
Biermann H., Brauner P., Ziefle M. (2020). How context and design shape human-robot trust and attributions. Paladyn, J. Behav. Robotics 12 (1), 74–86. 10.1515/pjbr-2021-0008
Bishop H., Coupland N., Garrett P. (2005). Conceptual accent evaluation: thirty years of accent prejudice in the UK. Acta Linguist. Hafniensia 37 (1), 131–154. 10.1080/03740463.2005.10416087
Bishop L., van Maris A., Dogramadzi S., Zook N. (2019). Social robots: the influence of human and robot characteristics on acceptance. Paladyn, J. Behav. Robotics 10 (1), 346–358. 10.1515/pjbr-2019-0028
Bonein A., Serra D. (2009). Gender pairing bias in trustworthiness. J. Socio-Economics 38 (5), 779–789. 10.1016/j.socec.2009.03.003
Breazeal C. (2017). "Social robots: from research to commercialization," in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 1. 10.1145/2909824.3020258
Broadbent E. (2017). Interactions with robots: the truths we reveal about ourselves. Annu. Rev. Psychol. 68 (1), 627–652. 10.1146/annurev-psych-010416-043958
Broadbent E., Stafford R., MacDonald B. (2009). Acceptance of healthcare robots for the older population: review and future directions. Int. J. Soc. Robotics 1 (4), 319–330. 10.1007/s12369-009-0030-6
Carpinella C. M., Wyman A. B., Perez M. A., Stroessner S. J. (2017). "The robotic social attributes scale (RoSAS): development and validation," in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 254–262. 10.1145/2909824.3020208
Christoforakos L., Gallucci A., Surmava-Große T., Ullrich D., Diefenbach S. (2021). Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Front. Robotics AI 8, 640444. 10.3389/frobt.2021.640444
Chu L., Chen H.-W., Cheng P.-Y., Ho P., Weng I.-T., Yang P.-L. (2019). Identifying features that enhance older adults' acceptance of robots: a mixed methods study. Gerontology 65 (4), 441–450. 10.1159/000494881
Cifuentes C. A., Pinto M. J., Céspedes N., Múnera M. (2020). Social robots in therapy and care. Curr. Robot. Rep. 1 (3), 59–74. 10.1007/s43154-020-00009-2
Clodic A., Pacherie E., Alami R., Chatila R. (2017). "Key elements for human-robot joint action," in Sociality and normativity for robots. Editors Hakli R., Seibt J. (Springer International Publishing), 159–177. 10.1007/978-3-319-53133-5_8
Coursey K., Pirzchalski S., McMullen M., Lindroth G., Furuushi Y. (2019). "Living with Harmony: a personal companion system by Realbotix™," in AI love you. Editors Zhou Y., Fischer M. H. (Springer International Publishing), 77–95. 10.1007/978-3-030-19734-6_4
Craig S. D., Schroeder N. L. (2017). Reconsidering the voice effect when learning from a virtual human. Comput. Educ. 114, 193–205. 10.1016/j.compedu.2017.07.003
Crowell C. R., Villano M., Scheutz M., Schermerhorn P. (2009). "Gendered voice and robot entities: perceptions and reactions of male and female subjects," in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3735–3741. 10.1109/IROS.2009.5354204
Dautenhahn K., Woods S., Kaouri C., Walters M. L., Lee Koay K., Werry I. (2005). "What is a robot companion—friend, assistant or butler?," in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1192–1197. 10.1109/IROS.2005.1545189
Delia J. G. (1975). Regional dialect, message acceptance, and perceptions of the speaker. Central States Speech J. 26 (3), 188–194. 10.1080/10510977509367842
Dong J., Lawson E., Olsen J., Jeon M. (2020). Female voice agents in fully autonomous vehicles are not only more likeable and comfortable, but also more competent. Proc. Hum. Factors Ergonomics Soc. Annu. Meet. 64 (1), 1033–1037. 10.1177/1071181320641248
Duffy S., Smith J. (2014). Cognitive load in the multi-player prisoner's dilemma game: are there brains in games? J. Behav. Exp. Econ. 51, 47–56. 10.1016/j.socec.2014.01.006
Dunning D., Fetchenhauer D. (2011). "Understanding the psychology of trust," in Social motivation. Editor Dunning D. (New York, NY: Psychology Press), 147–169.
Ellemers N., Pagliaro S., Barreto M. (2013). Morality and behavioural regulation in groups: a social identity approach. Eur. Rev. Soc. Psychol. 24 (1), 160–193. 10.1080/10463283.2013.841490
Epley N., Waytz A., Cacioppo J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114 (4), 864–886. 10.1037/0033-295X.114.4.864
Ernst C.-P., Herm-Stapelberg N. (2020). Gender stereotyping's influence on the perceived competence of Siri and Co. Hawaii Int. Conf. Syst. Sci. 10.24251/HICSS.2020.544
Esposito A., Amorese T., Cuciniello M., Pica I., Riviello M. T., Troncone A. (2019). "Elders prefer female robots with a high degree of human likeness," in 2019 IEEE 23rd International Symposium on Consumer Technologies (ISCT), 243–246. 10.1109/ISCE.2019.8900983
Esposito A., Amorese T., Cuciniello M., Riviello M. T., Cordasco G. (2020). "How human likeness, gender and ethnicity affect elders' acceptance of assistive robots," in 2020 IEEE International Conference on Human-Machine Systems (ICHMS), 1–6. 10.1109/ICHMS49158.2020.9209546
Esterwood C., Essenmacher K., Yang H., Zeng F., Robert L. P. (2021). "A meta-analysis of human personality and robot acceptance in Human-Robot Interaction," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18. 10.1145/3411764.3445542
Evans A. M., Krueger J. I. (2009). The psychology (and economics) of trust: psychology of trust. Soc. Personality Psychol. Compass 3 (6), 1003–1017. 10.1111/j.1751-9004.2009.00232.x
Eyssel F., Kuchenbrandt D., Bobinger S., De Ruiter L., Hegel F. (2012). ""If you sound like me, you must be more human": on the interplay of robot and user features on human-robot acceptance and anthropomorphism," in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, 125–126. 10.1145/2157689.2157717
Faul F., Erdfelder E., Lang A.-G., Buchner A. (2007). G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39 (2), 175–191. 10.3758/BF03193146
Fischer K. (2021). "Geräusche, Stimmen und natürliche Sprache: Kommunikation mit sozialen Robotern," in Soziale Roboter. Editor Bendel O. (Springer Fachmedien Wiesbaden), 279–292. 10.1007/978-3-658-31114-8_14
Fiske S. T., Cuddy A. J. C., Glick P. (2007). Universal dimensions of social cognition: warmth and competence. Trends Cognitive Sci. 11 (2), 77–83. 10.1016/j.tics.2006.11.005
Foster M. E., Stuart-Smith J. (2023). "Social robotics meets sociolinguistics: investigating accent bias and social context in HRI," in Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 156–160. 10.1145/3568294.3580063
Francis A. L., Nusbaum H. C. (2009). Effects of intelligibility on working memory demand for speech perception. Atten. Percept. Psychophys. 71 (6), 1360–1374. 10.3758/APP.71.6.1360
Freedy A., DeVisser E., Weltman G., Coeyman N. (2007). "Measurement of trust in human-robot collaboration," in 2007 International Symposium on Collaborative Technologies and Systems, 106–114. 10.1109/CTS.2007.4621745
Früh M., Gasser A. (2018). "Erfahrungen aus dem Einsatz von Pflegerobotern für Menschen im Alter," in Pflegeroboter. Editor Bendel O. (Springer Fachmedien Wiesbaden), 37–62. 10.1007/978-3-658-22698-5_3
Fuertes J. N., Gottdiener W. H., Martin H., Gilbert T. C., Giles H. (2012). A meta-analysis of the effects of speakers' accents on interpersonal evaluations: effects of speakers' accents. Eur. J. Soc. Psychol. 42 (1), 120–133. 10.1002/ejsp.862
Gärtig A.-K., Plewnia A., Adler A. (2010). Wie Menschen in Deutschland über Sprache denken: Ergebnisse einer bundesweiten Repräsentativerhebung zu aktuellen Spracheinstellungen. 1. Aufl. Mannheim: Institut für Deutsche Sprache.
Goetz J., Kiesler S., Powers A. (2003). "Matching robot appearance and behavior to tasks to improve human-robot cooperation," in Proceedings of ROMAN 2003, The 12th IEEE International Workshop on Robot and Human Interactive Communication, 55–60. 10.1109/ROMAN.2003.1251796
Hancock P. A., Billings D. R., Schaefer K. E., Chen J. Y. C., De Visser E. J., Parasuraman R. (2011). A meta-analysis of factors affecting trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergonomics Soc. 53 (5), 517–527. 10.1177/0018720811417254
Henschel A., Laban G., Cross E. S. (2021). What makes a robot social? A review of social robots from science fiction to a home or hospital near you. Curr. Robot. Rep. 2 (1), 9–19. 10.1007/s43154-020-00035-0
James J., Watson C. I., MacDonald B. (2018). "Artificial empathy in social robots: an analysis of emotions in speech," in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 632–637. 10.1109/ROMAN.2018.8525652
Jian J.-Y., Bisantz A. M., Drury C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. Int. J. Cognitive Ergonomics 4 (1), 53–71. 10.1207/S15327566IJCE0401_04
Kim L. H., Domova V., Yao Y., Huang C.-M., Follmer S., Paredes P. E. (2022). Robotic presence: the effects of anthropomorphism and robot state on task performance and emotion. IEEE Robotics Automation Lett. 7 (3), 7399–7406. 10.1109/LRA.2022.3181726
Kinzler K. D., Shutts K., Correll J. (2010). Priorities in social categories. Eur. J. Soc. Psychol. 40 (4), 581–592. 10.1002/ejsp.739
Kovarsky D., Maxwell M., Duchan J. F. (Editors) (2013). Constructing (in)competence (Psychology Press). 10.4324/9780203763759
Kraus M., Kraus J., Baumann M., Minker W. (2018). Effects of gender stereotypes on trust and likability in spoken Human-Robot Interaction. http://www.lrec-conf.org/proceedings/lrec2018/pdf/824.pdf
Krenn B., Schreitter S., Neubarth F. (2017). Speak to me and I tell you who you are! A language-attitude study in a cultural-heritage application. AI Soc. 32 (1), 65–77. 10.1007/s00146-014-0569-0
Kühne K., Fischer M. H., Zhou Y. (2020). The human takes it all: humanlike synthesized voices are perceived as less eerie and more likable. Evidence from a subjective ratings study. Front. Neurorobotics 14, 593732. 10.3389/fnbot.2020.593732
Kühne V., Rosenthal-von der Pütten A. M., Krämer N. C. (2013). "Using linguistic alignment to enhance learning experience with pedagogical agents: the special case of dialect," in Intelligent virtual agents. Editors Aylett R., Krenn B., Pelachaud C., Shimodaira H. (Springer Berlin Heidelberg), 8108, 149–158. 10.1007/978-3-642-40415-3_13
Kulms P., Kopp S. (2018). A social cognition perspective on human–computer trust: the effect of perceived warmth and competence on trust in decision-making with computers. Front. Digital Humanit. 5, 14. 10.3389/fdigh.2018.00014
Kunold L., Bock N., Rosenthal-von der Pütten A. (2023). Not all robots are evaluated equally: the impact of morphological features on robots' assessment through capability attributions. ACM Trans. Human-Robot Interact. 12 (1), 1–31. 10.1145/3549532
Kuo I. H., Rabindran J. M., Broadbent E., Lee Y. I., Kerse N., Stafford R. M. Q. (2009). "Age and gender factors in user acceptance of healthcare robots," in RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication, 214–219. 10.1109/ROMAN.2009.5326292
Law T., Scheutz M. (2021). "Trust: recent concepts and evaluations in human-robot interaction," in Trust in human-robot interaction (Elsevier), 27–57. 10.1016/B978-0-12-819472-0.00002-2
Lee J. D., See K. A. (2004). Trust in automation: designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergonomics Soc. 46 (1), 50–80. 10.1518/hfes.46.1.50_30392
Lugrin B., Strole E., Obremski D., Schwab F., Lange B. (2020). "What if it speaks like it was from the village? Effects of a robot speaking in regional language variations on users' evaluations," in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1315–1320. 10.1109/RO-MAN47096.2020.9223432
Marble J. L., Bruemmer D. J., Few D. A., Dudenhoeffer D. D. (2004). "Evaluation of supervisory vs. peer-peer interaction with human-robot teams," in Proceedings of the 37th Annual Hawaii International Conference on System Sciences, 9. 10.1109/HICSS.2004.1265326
May D. C., Holler K. J., Bethel C. L., Strawderman L., Carruth D. W., Usher J. M. (2017). Survey of factors for the prediction of human comfort with a non-anthropomorphic robot in public spaces. Int. J. Soc. Robotics 9 (2), 165–180. 10.1007/s12369-016-0390-7
Mayer R. C., Davis J. H., Schoorman F. D. (1995). An integrative model of organizational trust. Acad. Manag. Rev. 20 (3), 709. 10.2307/258792
McGinn C., Torre I. (2019). "Can you tell the robot by the voice? An exploratory study on the role of voice in the perception of robots," in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 211–221. 10.1109/HRI.2019.8673305
Mitchell W. J., Szerszen K. A., Lu A. S., Schermerhorn P. W., Scheutz M., MacDorman K. F. (2011). A mismatch in the human realism of face and voice produces an uncanny valley. I-Perception 2 (1), 10–12. 10.1068/i0415
Mori M. (1970). The uncanny valley. Energy 7, 33–35.
Naneva S., Sarda Gou M., Webb T. L., Prescott T. J. (2020). A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robotics 12 (6), 1179–1201. 10.1007/s12369-020-00659-4
Nass C., Lee K. M. (2000). Does computer-generated speech manifest personality? An experimental test of similarity-attraction. Proc. SIGCHI Conf. Hum. Factors Comput. Syst., 329–336. 10.1145/332040.332452
Niculescu A., Van Dijk B., Nijholt A., Li H., See S. L. (2013). Making social robots more attractive: the effects of voice pitch, humor and empathy. Int. J. Soc. Robotics 5 (2), 171–191. 10.1007/s12369-012-0171-x
Niculescu A., Van Dijk B., Nijholt A., See S. L. (2011). "The influence of voice pitch on the evaluation of a social robot receptionist," in 2011 International Conference on User Science and Engineering (i-USEr), 18–23. 10.1109/iUSEr.2011.6150529
Nomura T. (2017). Robots and gender. Gend. Genome 1 (1), 18–26. 10.1089/gg.2016.29002.nom
Oliveira R., Arriaga P., Correia F., Paiva A. (2019). "The Stereotype content model applied to human-robot interactions in groups," in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 123–132. 10.1109/HRI.2019.8673171
Planchenault G., Poljak L. (Editors) (2021). Pragmatics of accents (John Benjamins Publishing Company), 327. 10.1075/pbns.327
Powers A., Kiesler S. (2006). "The advisor robot: tracing people's mental model from a robot's physical attributes," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 218–225. 10.1145/1121241.1121280
R Core Team (2020). R: a language and environment for statistical computing. R Foundation for Statistical Computing [Computer software]. Available at: https://www.R-project.org/
Rheu M., Shin J. Y., Peng W., Huh-Yoo J. (2021). Systematic review: trust-building factors and implications for conversational agent design. Int. J. Human–Computer Interact. 37 (1), 81–96. 10.1080/10447318.2020.1807710
Rodero E. (2017). Effectiveness, attention, and recall of human and artificial voices in an advertising story. Prosody influence and functions of voices. Comput. Hum. Behav. 77, 336–346. 10.1016/j.chb.2017.08.044
Rosenberg S., Nelson C., Vivekananthan P. S. (1968). A multidimensional approach to the structure of personality impressions. J. Personality Soc. Psychol. 9 (4), 283–294. 10.1037/h0026086
Schaefer K. E., Chen J. Y. C., Szalma J. L., Hancock P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum. Factors J. Hum. Factors Ergonomics Soc. 58 (3), 377–400. 10.1177/0018720816634228
Scheunemann M. M., Cuijpers R. H., Salge C. (2020). "Warmth and competence to predict human preference of robot behavior in physical Human-Robot Interaction," in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1340–1347. 10.1109/RO-MAN47096.2020.9223478
Sikorski L. D. (2005). Regional accents: a rationale for intervening and competencies required. Seminars Speech Lang. 26 (02), 118–125. 10.1055/s-2005-871207
Simantiraki O., Cooke M., King S. (2018). Impact of different speech types on listening effort. Interspeech, 2267–2271. 10.21437/Interspeech.2018-1358
Sims V. K., Chin M. G., Lum H. C., Upham-Ellis L., Ballion T., Lagattuta N. C. (2009). Robots' auditory cues are subject to anthropomorphism. Proc. Hum. Factors Ergonomics Soc. Annu. Meet. 53 (18), 1418–1421. 10.1177/154193120905301853
Slonim R., Guillen P. (2010). Gender selection discrimination: evidence from a Trust game. J. Econ. Behav. Organ. 76 (2), 385–405. 10.1016/j.jebo.2010.06.016
Søraa R. A., Fostervold M. E. (2021). Social domestication of service robots: the secret lives of Automated Guided Vehicles (AGVs) at a Norwegian hospital. Int. J. Human-Computer Stud. 152, 102627. 10.1016/j.ijhcs.2021.102627
Steain A., Stanton C. J., Stevens C. J. (2019). The black sheep effect: the case of the deviant ingroup robot. PLOS ONE 14 (10), e0222975. 10.1371/journal.pone.0222975
Steinhaeusser S. C., Lein M., Donnermann M., Lugrin B. (2022). "Designing social robots' speech in the hotel context—a series of online studies," in 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 163–170. 10.1109/RO-MAN53752.2022.9900668
Stickel G. (Editor) (1997). "Berliner Stadtsprache. Tradition und Umbruch," in Varietäten des Deutschen (De Gruyter), 308–331. 10.1515/9783110622560-014
Sweeney S., Crestani F. (2006). Effective search results summary size and device screen size: is there a relationship? Inf. Process. Manag. 42 (4), 1056–1074. 10.1016/j.ipm.2005.06.007
Tajfel H., Forgas J. P. (2000). "Social categorization: cognitions, values and groups," in Stereotypes and prejudice: essential readings, key readings in social psychology (New York, NY: Psychology Press), 49–63.
Tamagawa R., Watson C. I., Kuo I. H., MacDonald B. A., Broadbent E. (2011). The effects of synthesized voice accents on user perceptions of robots. Int. J. Soc. Robotics 3 (3), 253–262. 10.1007/s12369-011-0100-4
Thielmann I., Hilbig B. E. (2015). Trust: an integrative review from a person–situation perspective. Rev. General Psychol. 19 (3), 249–277. 10.1037/gpr0000046
Torre I., Maguer S. L. (2020). "Should robots have accents?," in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 208–214. 10.1109/RO-MAN47096.2020.9223599
Vega A., Ramírez-Benavides K., Guerrero L. A., López G. (2019). "Evaluating the Nao robot in the role of personal assistant: the effect of gender in robot performance evaluation," in 13th International Conference on Ubiquitous Computing and Ambient Intelligence UCAmI 2019, 20. 10.3390/proceedings2019031020
Walker A. (2019). The role of dialect experience in topic-based shifts in speech production. Lang. Var. Change 31 (2), 135–163. 10.1017/S0954394519000152
Walters M. L., Syrdal D. S., Koay K. L., Dautenhahn K., Te Boekhorst R. (2008). "Human approach distances to a mechanical-looking robot with different robot voice styles," in RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication, 707–712. 10.1109/ROMAN.2008.4600750
Wickens C. D., Carswell C. M. (2021). "Information processing," in Handbook of human factors and ergonomics. Editors Salvendy G., Karwowski W. 1st ed. (Wiley), 114–158. 10.1002/9781119636113.ch5
Wiese H. (2012). Kiezdeutsch: Ein neuer Dialekt entsteht. Originalausg. München: Beck.
Woo H., LeTendre G. K., Pham-Shouse T., Xiong Y. (2021). The use of social robots in classrooms: a review of field-based studies. Educ. Res. Rev. 33, 100388. 10.1016/j.edurev.2021.100388
You S., Robert L. (2018). Emotional attachment, performance, and viability in teams collaborating with Embodied Physical Action (EPA) robots. J. Assoc. Inf. Syst. 19 (5), 377–407. 10.17705/1jais.00496
Zhou Y., Fischer M. H. (2019). "Intimate relationships with humanoid robots: exploring human sexuality in the twenty-first century," in AI love you. Editors Zhou Y., Fischer M. H. (Springer International Publishing), 177–184. 10.1007/978-3-030-19734-6_10