In response to the growing application of artificial intelligence (AI) in industrial quality control (QC), this study explores how human users perceive the reliability of AI systems in manufacturing environments. While the technical capabilities of AI, including high-speed defect detection and pattern recognition, are well documented, the human dimension of trust and perceived system reliability remains underexplored. Adopting a qualitative, literature-based approach grounded in interpretivist methodology, this research systematically analyzes academic publications, empirical case studies, and theoretical contributions from fields such as human-computer interaction, industrial engineering, and cognitive psychology. Through thematic analysis of 75 peer-reviewed articles published between 2010 and 2024, the study identifies key factors that shape perceived reliability, including consistency, explainability, interface design, organizational culture, and user training. The findings suggest that perceived AI reliability is a dynamic, context-dependent construct shaped by both system attributes and the sociotechnical environment in which the AI operates. In particular, transparent feedback mechanisms and adaptive explanations substantially enhance trust, whereas opaque decision-making and poor alignment with user expectations can erode perceived reliability even when actual performance is high. The study concludes by offering theoretical implications for human-AI interaction models and managerial strategies for deploying AI effectively in quality assurance workflows. Ultimately, it underscores the need for human-centered AI design that aligns technological efficiency with psychological credibility and organizational readiness, paving the way for the sustainable integration of AI in industrial QC.