This study introduces the Ethical Behavioral Analytics Framework (EBAF), a fairness-driven, explainable artificial intelligence system designed to predict and prevent academic misconduct. The framework integrates behavioral analytics, deep learning (an LSTM network), and human oversight to ensure ethical transparency and accountability in academic integrity management. By combining behavioral indicators such as submission timing, editing duration, and engagement regularity with textual features, EBAF identifies deviations from normal learning behavior that may indicate misconduct. Trained on a Kaggle dataset of student behavioral and performance data, the model achieved an overall accuracy of 85%, distinguishing authentic from plagiarized submissions while keeping measured bias minimal. Explainable AI tools, including SHAP and LIME, provided interpretable reasoning behind predictions, allowing educators to understand and validate model decisions. A human-in-the-loop mechanism further ensured that automated outputs were reviewed in context, promoting fairness, accountability, and trust. The findings demonstrate that ethical, explainable AI can coexist with high predictive performance, advancing the responsible application of machine learning in education. By embedding fairness auditing, transparency, and human oversight, EBAF reframes academic misconduct detection from a punitive process into a preventive, educational one. This work contributes to both research and practice by aligning computational intelligence with ethical accountability. Future research will extend the framework to diverse academic environments, incorporating multimodal behavioral data and adaptive feedback to improve fairness, interpretability, and scalability in AI-based academic integrity solutions.
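To make the modeling pipeline concrete, the sketch below shows how the three behavioral indicators named above (submission timing, editing duration, engagement regularity) could be arranged as per-student session sequences and fed to an LSTM classifier. The data shapes, architecture, and hyperparameters are illustrative assumptions, not the configuration reported in this study, and the synthetic arrays stand in for the Kaggle dataset.

```python
# Minimal sketch of an LSTM over behavioral session sequences.
# Shapes, layer sizes, and training settings are assumptions for
# illustration, not the study's reported configuration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_STUDENTS, N_SESSIONS, N_FEATURES = 500, 20, 3  # assumed dimensions

# Synthetic stand-in data: each student is a sequence of sessions,
# each described by (submission timing, editing duration,
# engagement regularity) -- the indicators named in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(N_STUDENTS, N_SESSIONS, N_FEATURES)).astype("float32")
y = rng.integers(0, 2, size=N_STUDENTS)  # 1 = flagged submission

model = models.Sequential([
    layers.Input(shape=(N_SESSIONS, N_FEATURES)),
    layers.LSTM(32),                       # sequence encoder
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"), # probability of misconduct
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```

In the framework described above, the textual features would be concatenated alongside these behavioral sequences; the sketch omits that branch for brevity.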
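SHAP attributions of the kind the abstract describes could be attached to such a model as follows. The use of shap.GradientExplainer, the background-sample size, and the assumption of a TensorFlow/Keras backend compatible with the installed shap version are this sketch's choices, not the paper's documented setup.

```python
# Hedged sketch: per-feature SHAP attributions for the model above,
# assuming a shap version with working Keras gradient support.
import numpy as np
import shap

background = X[:50]                       # reference samples for SHAP
explainer = shap.GradientExplainer(model, background)

# Attributions indexed by sample, session, and feature (some shap
# versions append an output axis); inspecting the shape first avoids
# version-specific surprises.
sv = np.array(explainer.shap_values(X[:5]))
print(sv.shape)
```

Attributions like these are what would let an educator see, for a given flag, whether submission timing or editing duration drove the prediction before deciding to act on it.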
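The human-in-the-loop mechanism can be read as a routing rule: model scores never trigger sanctions directly but instead open a review case that an educator inspects alongside the attributions. A minimal, purely illustrative sketch follows; the threshold, the ReviewCase fields, and the route helper are all hypothetical names introduced here, not part of EBAF's published interface.

```python
# Illustrative human-in-the-loop gate: scores above a cutoff are
# queued for educator review with their top attributions attached;
# nothing is auto-flagged. All names here are hypothetical.
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.5  # assumed cutoff; the study reports none

@dataclass
class ReviewCase:
    student_id: str
    score: float
    top_features: list = field(default_factory=list)

def route(student_id, score, attributions):
    """Queue a case for human review instead of acting automatically."""
    if score >= REVIEW_THRESHOLD:
        top = sorted(attributions, key=lambda kv: -abs(kv[1]))[:3]
        return ReviewCase(student_id, score, top)
    return None  # treated as authentic; no action taken

case = route("s-001", 0.82,
             [("submission_timing", 0.31),
              ("editing_duration", -0.05),
              ("engagement_regularity", 0.22)])
print(case)
```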