This author has published in the journal Telematika.
Saputra, Dhanar Intan Surya
Master of Computer Science, Amikom Purwokerto University, Indonesia

Published: 1 document

Fairness Auditing and Bias Mitigation in Aspect-Based Sentiment Models for Indonesian Public Services
Jondien, Muhammad Shihab Fathurrahman; Hariguna, Taqwa; Saputra, Dhanar Intan Surya
Telematika Vol 19, No 1: February (2026)
Publisher : Universitas Amikom Purwokerto

DOI: 10.35671/telematika.v19i1.3269

Abstract

This study presents a comprehensive fairness audit and bias mitigation framework for Indonesian sentiment analysis using the SmSA IndoNLU dataset and the IndoBERT language model. The research investigates demographic and linguistic fairness by evaluating model performance across gender and regional groups and introduces an aspect-based extension to assess semantic fairness using an ABSA-style input formulation. Fairness metrics such as ΔF1, Demographic Parity Difference (DPD), and Equality of Opportunity were employed to quantify disparities in model behavior. The baseline IndoBERT model achieved strong overall accuracy (0.942) and macro-F1 (0.927) but exhibited significant regional bias, particularly toward Eastern and Sumatran dialects. A re-weighting strategy effectively reduced the regional F1 disparity by 59 percent with minimal accuracy loss, demonstrating the viability of loss-based fairness mitigation. The ABSA-style IndoBERT further improved fairness consistency across dialectal and aspect categories, achieving a macro-F1 of 0.930. Despite these improvements, aspect-level imbalances persisted, indicating that fairness challenges extend beyond demographic representation to semantic coverage. This work contributes an empirical and methodological foundation for ethical NLP evaluation in Bahasa Indonesia, emphasizing fairness auditing, bias mitigation, and responsible deployment of language models in low-resource and linguistically diverse settings.
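The fairness metrics named in the abstract can be illustrated with a short sketch. The helper names (`demographic_parity_difference`, `delta_f1`) and the toy data are illustrative assumptions, not the paper's actual implementation; the definitions used here are the standard ones: DPD as the largest gap in positive-prediction rates between demographic groups, and ΔF1 as the largest gap in per-group F1 scores.

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 for one group (harmonic mean of precision and recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def demographic_parity_difference(y_pred, groups, positive=1):
    """Max gap in positive-prediction rates across groups (e.g. regions)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(1 for p in preds if p == positive) / len(preds)
    return max(rates.values()) - min(rates.values())


def delta_f1(y_true, y_pred, groups, positive=1):
    """Max gap in per-group F1 scores (the regional disparity audited above)."""
    scores = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        scores[g] = f1_score([y_true[i] for i in idx],
                             [y_pred[i] for i in idx], positive)
    return max(scores.values()) - min(scores.values())
```

On toy predictions split into two regional groups, a DPD of 0 and a ΔF1 of 0 would indicate parity; the audit described above compares such gaps before and after re-weighting to quantify the reported 59 percent reduction in regional F1 disparity.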