This mixed-methods study examines whether backpropagation-based deep learning (DL) visualizations can strengthen metacognition and learning outcomes in a university Linear Programming course. Sixty undergraduates in an 8-week blended course completed pre/post cognitive tests and the Metacognitive Awareness Inventory (MAI), while their LMS activity traces (e.g., time-on-task, revision frequency, error types) were used to train a multilayer perceptron. The intervention exposed students to DL visual artifacts (loss curves, gradient/weight updates, and error heatmaps) as reflective scaffolds linking machine error correction to human self-regulation. Quantitatively, mean test scores increased from 61.23 to 80.57 (paired t-test, p < .001), and total MAI scores rose from 135.40 to 159.85 (paired t-test, p < .001), with gains concentrated in regulation of cognition (monitoring/evaluation). Metacognitive improvement correlated with achievement (Pearson r = .62, p < .001). Computationally, model loss decreased from 0.25 to 0.03 over 200 epochs with 89.4% validation accuracy, and a Dynamic Time Warping alignment of 0.81 (p < .01) indicated strong temporal correspondence between DL loss minimization and students' learning curves. Qualitatively, thematic analysis of weekly reflections and interviews revealed a progression from error recognition to strategy adjustment and reflective transformation, recasting errors as actionable signals. Triangulation of the quantitative, computational, and qualitative strands supports the Cognitive Backpropagation Learning (CBL) framework: DL error feedback parallels human metacognitive feedback, and its visualization functions as a digital mirror that externalizes reflection. The findings recommend interpretable DL dashboards as practical, class-deployable scaffolds for cultivating reflective, adaptive mathematical thinkers.
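To illustrate the kind of temporal-alignment analysis the abstract reports, the following is a minimal sketch of classic dynamic-programming DTW applied to two hypothetical, normalized trajectories: a decreasing model-loss curve and a decreasing student error-rate curve. The curve values and function name here are illustrative assumptions, not the study's actual data or pipeline.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences
    (standard dynamic-programming formulation, absolute-difference cost)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # step in a only
                                  dp[i][j - 1],      # step in b only
                                  dp[i - 1][j - 1])  # step in both
    return dp[n][m]

# Hypothetical curves: model loss over training checkpoints and a
# student error-rate trajectory over the same study period.
model_loss    = [0.25, 0.18, 0.12, 0.07, 0.04, 0.03]
student_error = [0.30, 0.22, 0.15, 0.09, 0.05, 0.04]

print(dtw_distance(model_loss, student_error))
```

A small DTW distance indicates that the two trajectories decrease in closely matched phases even if their timing differs; the study's reported alignment statistic would be derived from such a comparison between the model's loss curve and students' learning curves.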
Copyright © 2025