Providing high-quality feedback on students’ solution steps in transformational geometry is challenging in large university classes. Explainable AI (XAI) offers a potential way to automate step-level assessment while keeping model decisions transparent and educationally meaningful. This study examines whether an XAI-based system can validly and reliably score students’ solution steps in transformational geometry, how faithful and fair its explanations are, and whether step-level XAI feedback improves learning in an authentic course setting. A two-phase quantitative design, complemented by a small qualitative component, was used. In Phase 1, XAI-based step scores were compared with expert ratings on items involving reflections, rotations, translations, and compositions of transformations, using a rubric with eight indicators (GT1–GT8), and explanation fidelity and subgroup fairness were evaluated. In Phase 2, a clustered quasi-experiment compared XAI-based feedback with conventional rubric-based feedback across two classes. Brief semi-structured interviews with six students from the XAI class explored how they interpreted and used the feedback. The results show that the XAI system approximated expert step scoring with acceptable agreement, produced explanations whose highlighted features were meaningfully related to its predictions, and exhibited no large performance disparities across gender or study programme. In the classroom experiment, the XAI group achieved moderately higher post-test scores than the control group, with gains concentrated on indicators related to parameter specification and composition of transformations. Interview data suggest that students used the XAI interface to locate and revise specific steps while still relying on the lecturer for deeper conceptual clarification. Overall, the findings indicate that, when aligned with a domain-specific rubric, XAI-based step assessment can serve as scalable, task- and process-level formative feedback in transformational geometry, best used in a human-in-the-loop configuration that complements rather than replaces teacher feedback.

Keywords: artificial intelligence, mathematics assessment, quasi-experimental design, transformational geometry.
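To make the Phase 1 agreement and fairness analyses concrete, the sketch below shows one standard way such checks are computed: quadratic-weighted Cohen's kappa between ordinal step scores, reported overall and then within subgroups. This is an illustrative reconstruction, not the study's code; the 0–2 score scale, the variable names (`expert`, `xai`, `group`), the subgroup labels, and the simulated data are all assumptions.

```python
# Illustrative sketch only: agreement and subgroup-fairness checks of the kind
# described in Phase 1, on simulated data. Not the study's actual pipeline.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n = 200

# Hypothetical ordinal step scores: 0 = incorrect, 1 = partially correct, 2 = correct.
expert = rng.integers(0, 3, size=n)
# Simulate an automated scorer that matches the expert but drifts by one level
# in roughly 15% of cases.
drift = rng.integers(-1, 2, size=n) * (rng.random(n) < 0.15)
xai = np.clip(expert + drift, 0, 2)

# Overall agreement; quadratic weights penalise larger ordinal disagreements more.
print(f"overall weighted kappa: {cohen_kappa_score(expert, xai, weights='quadratic'):.2f}")

# Simple fairness screen: recompute agreement within each (illustrative) subgroup;
# a large gap between subgroups would flag a potential performance disparity.
group = rng.choice(["programme_A", "programme_B"], size=n)
for g in np.unique(group):
    mask = group == g
    kappa_g = cohen_kappa_score(expert[mask], xai[mask], weights="quadratic")
    print(f"{g}: weighted kappa = {kappa_g:.2f}")
```

The same subgroup loop would apply to any attribute of interest (for example, gender in place of study programme); the disparity check here is a minimal screen, not a substitute for the fuller fairness evaluation the study reports.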