The rapid diffusion of AI into higher education is reshaping the cognitive ecology of learning and introducing risks of cognitive offloading and automation bias, particularly in accounting programs where higher-order judgment and ethics remain non-automatable. This descriptive qualitative study sought to describe how UNNES Accounting Education students enact critical thinking while working with AI, to examine the moderating roles of digital literacy and self-regulated learning (SRL), and to identify pedagogical moves that curb automation bias. Data were gathered from purposively selected second-semester students through a three-stage process: context scans of syllabi and the LMS, non-participant classroom observations, and 45–60-minute semi-structured interviews augmented by artifacts such as AI chat excerpts and annotated drafts. Data were coded using Miles–Huberman iterative procedures, with trustworthiness supported by triangulation, member checking, and an audit trail. Results indicate that students frequently used AI as a "first resort"; high dependence on AI aligned with strengths in remembering and applying but weaknesses in analyzing, evaluating, and creating. Conversely, higher digital literacy and SRL were associated with systematic verification, stronger justification, and reduced automation bias. Active-learning routines (trigger questions, guided discussion, and "AI-audit" checklists) reliably elevated higher-order performance, while ethical concerns about originality and fairness surfaced among stronger reasoners. Overall, AI operates as a double-edged tool, impeding critical thinking when used uncritically but scaffolding it when embedded in reflective, evidence-seeking routines. The findings inform curriculum redesign, lecturer development, assessment rubrics, and assurance-of-learning processes aligned with professional standards. Future research should test the causal effects of targeted micro-interventions in mixed-methods, multi-site designs, validate critical-thinking rubrics for AI-rich tasks, and track transfer to authentic practice.