This paper investigates, through an epistemological lens, the provocative question of whether artificial intelligence (AI) can know things. Drawing upon a systematic literature review (SLR) of works published between 2010 and 2020, the study maps how scholars have applied classical and contemporary epistemic criteria—such as belief‑likeness, truth, justification, reliability, interpretability, and epistemic agency—to AI systems. In doing so, it examines competing theoretical frameworks (internalism, externalism, virtue epistemology, Bayesian approaches) and identifies areas of convergence and contention. The review reveals that while many AI systems satisfy externalist criteria of reliability and truth‑tracking under controlled conditions, they often fall short of internalist demands for justificatory transparency or reflective access. Opacity and “black‑box” architectures remain central obstacles to attributing knowledge in the classical sense. Furthermore, the influence of AI on human belief formation and the reshaping of epistemic environments suggest that, even absent genuine knowledge, AI plays a significant role in mediating knowledge practices. Ethical and normative considerations (e.g., fairness, accountability, epistemic justice) also emerge as inseparable from epistemological assessments, prompting calls for a “glass‑box epistemology” that integrates design, interpretability, and value sensitivity. In conclusion, the paper argues that AI may function as a contributor to human knowledge workflows rather than as an autonomous knower. It sets out a nuanced perspective: acknowledging AI’s epistemic potential while remaining critical of overextended claims. Finally, it suggests future paths: refining epistemic thresholds, embedding interpretability in AI design, and expanding the discourse across cultural and disciplinary contexts.