Explainable Artificial Intelligence (XAI) is increasingly recognized as a critical enabler of trust and ethical AI adoption. This study examines the impact of XAI on user trust and ethical AI adoption within Indonesian academia through a qualitative analysis of five informants, including AI researchers, university administrators, and policymakers. The findings reveal that XAI enhances transparency and ethical awareness while fostering trust among academic stakeholders. However, technical complexity, resource limitations, and resistance to change pose significant barriers to implementation. The study also identifies opportunities for fostering XAI adoption, such as collaborative initiatives, government support, and tailored training programs. These insights contribute to the growing discourse on leveraging XAI to promote ethical and trustworthy AI practices in academia.