The integration of Artificial Intelligence (AI) into legal decision-making processes has introduced significant advances in efficiency and predictive capability. However, its implications for justice—particularly fairness, impartiality, transparency, and due process—remain critically debated. This study employs a Systematic Literature Review (SLR) methodology to examine how AI-driven legal decision-making aligns with classical and contemporary philosophical concepts of justice. Drawing on 48 peer-reviewed articles, policy documents, and case studies published between 2015 and 2024, the research identifies four core thematic issues: the persistence of algorithmic bias, the lack of transparency in AI systems, inconsistencies in global regulatory frameworks, and the misalignment of AI logic with moral reasoning. While AI offers promising tools for streamlining judicial processes, its application often risks reinforcing existing inequities and undermining legal principles such as corrective justice and procedural fairness. The study concludes with targeted recommendations for the development of transparent, accountable, and ethically governed AI systems that support—rather than supplant—human judicial discretion. This research contributes to the growing discourse on legal AI by highlighting the necessity of embedding justice-oriented values at the core of technological innovation in the legal sector. The study has two principal limitations: it is not grounded in empirical findings, and its conclusions have not been validated by experts in either AI or legal theory. Future research should address these limitations.