This study analyzes the legislative framework governing the use of artificial intelligence (AI) in the European Union, focusing on patterns of legal convergence and divergence and on the governance challenges arising from its implementation. The research examines how the EU constructs a harmonized yet flexible regulatory regime capable of addressing the multifaceted risks of AI while promoting innovation. Methodologically, the study employs a qualitative approach combining doctrinal legal analysis and policy review, drawing on primary legal instruments, chiefly the EU AI Act, alongside secondary sources such as policy reports and academic literature. The findings indicate that the EU adopts a risk-based regulatory model that classifies AI systems into four tiers: unacceptable, high, limited, and minimal risk. While most AI applications fall into the limited- or minimal-risk tiers, high-risk systems, particularly those deployed in sensitive sectors such as healthcare, justice, employment, and finance, pose significant legal and ethical challenges. The study identifies key risks, including algorithmic bias, data privacy violations, and a lack of transparency, alongside broader concerns about accountability and the protection of fundamental rights. Furthermore, although legal convergence is evident in the establishment of uniform EU standards, divergence persists in national implementation, enforcement practices, and institutional readiness across Member States. This study contributes to the existing literature by providing a comprehensive analysis of the interplay between harmonization and fragmentation in EU AI regulation, and it highlights the need for adaptive governance mechanisms that balance regulatory consistency with contextual flexibility. Ultimately, the research underscores that effective AI legislation must strengthen accountability, ensure ethical compliance, and foster public trust, thereby aligning technological development with the core values of the European Union.