Artificial Intelligence (AI) has rapidly transformed many industries, delivering significant gains in automation, decision-making, and efficiency. However, AI also introduces substantial risks, including algorithmic bias, lack of transparency, security vulnerabilities, and regulatory challenges. This study employs a Systematic Literature Review (SLR) to identify and categorize the key risks associated with AI implementation. The findings indicate that AI risks fall into three dimensions, technological, social, and regulatory, each posing distinct challenges. Algorithmic bias, privacy concerns, and the absence of global AI governance frameworks underscore the need for more robust risk-mitigation strategies. To address these challenges, this study recommends developing fairness-aware AI models, strengthening AI governance, and increasing public AI literacy. Future research should focus on improving AI accountability, security measures, and ethical guidelines to ensure responsible AI adoption.
Copyright © 2025