Can We Trust AI? Exploring the Effect of Estimated Accuracy and the Actual Performance of AI Systems on Human-AI Collaboration

Authors: 

Zhaohua Deng
Dan Song
Richard Evans

Abstract: 

Artificial Intelligence (AI) is increasingly viewed as critical to organizational decision-making and the long-term competitiveness of firms, demanding upskilling in human-AI interaction and delegation. While trust, informed by estimated AI accuracy, is critical for such collaboration, inconsistencies between these estimates and the actual performance of AI systems often occur, potentially leading to negative outcomes. However, the effect of this inconsistency between estimated accuracy and actual performance on human-AI collaboration is not well understood in the current literature. Grounded in signaling theory and expectancy violation theory, this study presents a 2 × 2 between-subjects online experiment examining the effects of estimated accuracy and actual performance on several dependent variables. The results show that while estimated accuracy strongly influences humans’ cognitive trust, the inconsistency between estimated accuracy and the actual performance of AI systems leads to misplaced trust, with humans over-trusting low-performing AI systems or distrusting high-performing ones. Such misplaced trust reduces human-AI collaboration performance by weakening the complementarity between humans and AI. These findings contribute to the current understanding of the sources and consequences of human trust in AI systems and provide practical guidance for firms seeking to improve human-AI collaborative performance.

Keywords: 

Published Date: 

August 2025
