Critical literacy in digital contexts has become a central topic of academic debate, especially given the exponential growth of Generative Artificial Intelligence (GAI). This emerging technology presents both challenges and opportunities for the development of academic texts, such as ethical concerns and issues of cognitive comprehension. The purpose of this article was to analyze in depth the influence of different GAI models (DeepSeek, ChatGPT, and Gemini) on the improvement of academic texts, evaluating their role as tools that support critical literacy and exploring the cognitive processes involved in their use. The study is qualitative in nature and employs a case study based on a theoretical-practical analysis. Instructional prototypes of the GAI models were fed a limited database on critical literacy and a specific prompt that enabled them to provide feedback on different types of texts. The results indicate that the various models positioned themselves as tools capable of guiding, and even enhancing, the development of critical literacy. However, the models tended to be repetitive and to prioritize certain actions tied to specific cognitive processes. If unaddressed, this repetitiveness risks fostering mechanical engagement rather than genuine critical reflection, thereby limiting students' development of autonomous critical thinking. Clear differences in the quality and focus of the feedback were identified among the models, suggesting that each may be better suited to supporting different cognitive functions. It is concluded that GAI is a promising mediating agent in the development of critical literacy, but its impact on cognitive processes (such as reflection and critical thinking) depends directly on its instructional design and the intrinsic characteristics of each model.
Copyright © 2026