This study normatively examines the claim that Artificial Intelligence (AI), particularly large language models (LLMs), can be said to “know” in an epistemological sense. Using a qualitative approach based on conceptual-argumentative analysis and a systematic literature review, it analyzes reputable scientific literature published in the last five years on the epistemology of AI, the justification of knowledge, explainable AI, and epistemic trust and authority. The analysis proceeds by mapping key themes and critically evaluating the philosophical premises underlying the claim of machine knowledge. The results show that although AI can produce accurate and useful outputs, these systems do not meet the normative requirements for knowledge: they lack an epistemic subject, an attitude toward truth, and epistemic responsibility. Reliabilist and explainable-AI approaches provide only functional justification, not normative justification in the classical or contemporary epistemological sense. The novelty of this research lies in its assertion that the question of AI knowledge is conceptual and normative, not merely technical, and in its reinforcement of a social and distributed epistemological framework for understanding the role of AI. The philosophical implication is the need to preserve “knowing” as a strict normative category, so as to prevent the erosion of human epistemic responsibility in increasingly technology-mediated knowledge practices.
Copyright © 2026