There is rich evidence that trust is essential for both human-AI collaboration and teamwork. Nevertheless, our understanding of trust in human-AI teams is limited. Given that Artificial Intelligence (AI) is increasingly becoming an active member of work teams, the present study explored how team trust emerged in human-AI teams compared to human-human teams. We conducted two experimental studies (N_Study1 = 247 teams, N_Study2 = 106 teams; 828 participants overall) and investigated how team composition (AI or human team member) affected interpersonal trust (affective and cognitive) and, thereby, team trust. Contrary to our expectations, our findings showed that being in a human-AI rather than a human-human team had neither a direct nor an indirect impact on interpersonal trust via perceived trustworthiness and perceived similarity, nor on team trust via interpersonal trust. However, team identification and cognitive interpersonal trust were lower in two-member human-AI teams than in two-member human-human teams. Furthermore, we showed that in both human-AI and human-human teams, perceived trustworthiness and perceived similarity led to higher team trust via interpersonal trust. With the present research, we moved beyond the one-sided investigation of dyadic human-AI interaction and considered the different trust relationships that exist within a team. We integrated work across levels and disciplines to provide initial evidence on trust in human-AI teams.
Hybrid, from REC M4.02
Speakers
- Eleni Georganta (Universiteit van Amsterdam)
Location
Plantage Muidergracht 12, 1018 TV Amsterdam