The rapid integration of Artificial Intelligence (AI) into auditing practices has transformed how auditors perform analytical and judgmental tasks, yet it has also raised fundamental questions regarding trust in AI-driven audit tools. This study explores how auditors develop, maintain, and negotiate trust when interacting with AI systems characterized by “black box” decision-making. Using a qualitative research design, semi-structured interviews were conducted with professional auditors from diverse practices who have direct experience with AI-assisted audit tools. Thematic analysis revealed three key dimensions of trust formation: perceived transparency of algorithmic processes, auditors’ professional judgment and accountability concerns, and organizational norms surrounding AI implementation. Findings suggest that trust is neither static nor solely technology-driven but emerges through continuous cognitive and social negotiation between human expertise and algorithmic outputs. This study contributes to the growing literature on AI adoption in auditing by providing a conceptual model of auditor trust formation and by offering practical insights for audit firms and technology developers aiming to enhance the interpretability, accountability, and acceptance of AI-based audit systems.
Copyright © 2026