Predictive analytics has become an important component of learning analytics in higher education, enabling institutions to identify academic risk and support student success through data-driven decision making. However, many existing academic outcome prediction models rely on complex black-box machine learning techniques that deliver high predictive performance but offer limited transparency and interpretability. This lack of explainability restricts the practical adoption of such models in educational settings where accountability, trust, and ethical decision making are essential. This study proposes an interpretable machine learning framework for multi-class academic outcome prediction using the Explainable Boosting Machine (EBM), a glass-box model that combines the predictive power of ensemble boosting with the transparency of generalized additive models. The proposed framework was evaluated on a publicly available Student Performance and Learning Behavior dataset of 6,519 student records containing academic, behavioral, and demographic attributes. Academic outcomes were formulated as a four-class classification task: Distinction, Pass, Fail, and Withdrawn. Model performance was assessed using multiple evaluation metrics, including accuracy, precision, recall, F1-score, and ROC-AUC analysis. Experimental results show that the proposed EBM model achieves an accuracy of 88% and an overall ROC-AUC score of 0.91, indicating strong predictive capability across all outcome categories. Furthermore, the model provides native interpretability through per-feature contribution functions and SHAP-based explanations, enabling both global and instance-level understanding of predictions. These findings demonstrate that reliable academic outcome prediction can be achieved without sacrificing interpretability, providing a transparent and trustworthy decision-support framework for educational analytics.
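The four-class evaluation protocol described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the EBM itself comes from the `interpret` package (`interpret.glassbox.ExplainableBoostingClassifier`), but a plain scikit-learn classifier stands in here so that only standard dependencies are needed, and the synthetic data is a stand-in for the 6,519-record student dataset.

```python
# Sketch of multi-class evaluation with accuracy, macro F1, and
# one-vs-rest ROC-AUC, as in the abstract. The classifier and data
# below are illustrative stand-ins; the study itself uses an
# Explainable Boosting Machine (interpret.glassbox.
# ExplainableBoostingClassifier) on a real student dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

CLASSES = ["Distinction", "Pass", "Fail", "Withdrawn"]  # four outcome labels

# Synthetic placeholder for the student-performance data.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)          # one probability column per class

acc = accuracy_score(y_te, y_pred)
macro_f1 = f1_score(y_te, y_pred, average="macro")
# Macro-averaged one-vs-rest ROC-AUC over the four outcome classes.
auc = roc_auc_score(y_te, y_prob, multi_class="ovr", average="macro")
print(f"accuracy={acc:.3f}  macro-F1={macro_f1:.3f}  ROC-AUC={auc:.3f}")
```

Swapping the stand-in classifier for `ExplainableBoostingClassifier` would leave the metric code unchanged, since both expose the same `fit`/`predict`/`predict_proba` interface.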