The increasing adoption of AI in critical sectors such as healthcare, finance, transportation, and public services raises significant challenges of transparency, accountability, and trust in automated decision-making, particularly because many AI models still operate as black boxes that are difficult to interpret and audit. This study investigates the integration of blockchain technology to enable trustworthy and transparent AI decision-making, following a structured framework to systematically design, implement, and evaluate the proposed solution. The framework records AI inference results and relevant metadata on the blockchain through smart contracts, ensuring data immutability and traceability. A prototype system is developed and evaluated using a mixed-method approach that combines qualitative analysis of transparency and auditability with quantitative measurements of system performance, such as latency and overhead. The results demonstrate that blockchain integration substantially enhances auditability, data integrity, and user trust compared with conventional AI systems. However, several limitations are identified, including scalability constraints, transaction costs, and increased latency caused by on-chain recording. Despite these challenges, the proposed approach shows strong potential for improving the accountability of AI systems in high-risk environments. It contributes a practical framework and empirical insights for organizations seeking to adopt transparent and reliable AI, and it opens opportunities for further development through architectural optimization and the adoption of layer-2 blockchain technologies.
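The on-chain recording step described above can be sketched in miniature. The following Python sketch is illustrative only, not the authors' implementation: the `AuditLogContract` class is a hypothetical stand-in for an actual smart contract, and the SHA-256 hash-chaining scheme is an assumption about how immutability and traceability might be achieved.

```python
import hashlib
import json
import time


def digest_inference(model_id, inputs_hash, output, metadata):
    """Produce a deterministic digest of one inference result and its metadata."""
    record = {
        "model_id": model_id,
        "inputs_hash": inputs_hash,  # hash of raw inputs, which stay off-chain
        "output": output,
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest(), record


class AuditLogContract:
    """Stand-in for an on-chain smart contract: an append-only, hash-chained log."""

    def __init__(self):
        self._entries = []

    def record(self, digest, record):
        """Append an entry, linking it to the previous entry's chain hash."""
        prev = self._entries[-1]["chain_hash"] if self._entries else "0" * 64
        chain_hash = hashlib.sha256((prev + digest).encode("utf-8")).hexdigest()
        self._entries.append({
            "digest": digest,
            "record": record,
            "prev": prev,
            "chain_hash": chain_hash,
            "timestamp": time.time(),
        })
        return len(self._entries) - 1  # index plays the role of a transaction receipt

    def verify(self):
        """Recompute the hash chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for entry in self._entries:
            if entry["prev"] != prev:
                return False
            expected = hashlib.sha256((prev + entry["digest"]).encode("utf-8")).hexdigest()
            if entry["chain_hash"] != expected:
                return False
            prev = entry["chain_hash"]
        return True
```

In a real deployment the `record` call would be a blockchain transaction and `verify` would be implicit in the chain's consensus; the sketch only shows why altering any recorded inference after the fact becomes detectable.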
Copyright © 2026