Classification of Stunting Status Using the Naive Bayes Classifier Algorithm with Backward Elimination Feature Selection
Pasaribu, Hafni Maya Sari; Abdullah, Dahlan; Rosnita, Lidya
JINAV: Journal of Information and Visualization Vol. 6 No. 1 (2025)
Publisher: PT Mattawang Mediatama Solution

DOI: 10.35877/454RI.jinav4100

Abstract

Stunting is one of the major health issues affecting toddlers; it can impair their physical growth and developmental progress and ultimately their quality of life. It is characterized by a child's height being below the standard for their age. Addressing this issue requires a method for classifying stunting status in toddlers. This study classifies stunting status in toddlers using the Naive Bayes Classifier algorithm, with feature selection performed by the Backward Elimination method to improve classification accuracy. The dataset used in this research was collected in 2023 from the Lueng Daneun Public Health Center, located in Peusangan Simblah Krueng Subdistrict, Bireun District. The dataset includes features such as age, gender, family income, height, weight, sanitation, clean water access, and formula milk consumption. Backward elimination feature selection is applied to identify the features most significant and relevant to the target variable. The Naive Bayes Classifier was implemented in the Python programming language. The analysis indicated that the remaining feature, sanitation condition, contributed significantly to the classification process. The dataset consisted of 244 entries, split into 195 training records and 49 testing records (an 80:20 ratio). The initial classification results showed an accuracy of 77.55%, a precision of 60.00%, a recall of 64.29%, and an F1-score of 62.07%. After feature selection, accuracy increased to 81.63%, precision to 63.16%, recall to 85.71%, and the F1-score to 72.73%. These results indicate that backward elimination feature selection improves the performance of the Naive Bayes classifier.
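
As a rough illustration of the workflow described in the abstract (the authors' actual code and dataset are not published here), the sketch below shows how such a pipeline could be assembled in Python with scikit-learn. The file name, column names, and target label are assumptions, and backward elimination is approximated with scikit-learn's SequentialFeatureSelector (which drops features based on cross-validated accuracy) rather than the significance-based elimination the study may have used.

```python
# Illustrative sketch only: file name, column names, and label coding are
# assumptions, not taken from the study's dataset or code.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import accuracy_score, classification_report

# Load toddler records; "stunting_status" is an assumed target column name.
df = pd.read_csv("stunting.csv")
X = pd.get_dummies(df.drop(columns=["stunting_status"]))  # one-hot encode categorical features
y = df["stunting_status"]

# 80:20 train/test split, matching the ratio reported in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Backward-elimination stand-in: start from all features and repeatedly drop
# the one whose removal hurts cross-validated accuracy the least, reducing to
# a single feature (the abstract reports that only sanitation remained).
selector = SequentialFeatureSelector(
    GaussianNB(), direction="backward", n_features_to_select=1,
    scoring="accuracy", cv=5
)
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)
print("selected features:", list(X.columns[selector.get_support()]))

# Train the Naive Bayes classifier on the selected features and evaluate it
# with the same metrics reported in the abstract.
model = GaussianNB().fit(X_train_sel, y_train)
y_pred = model.predict(X_test_sel)
print("accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

GaussianNB is only one possible choice here; with mostly categorical inputs such as sanitation or clean water access, CategoricalNB could also be reasonable, and a p-value-driven backward elimination is another common variant of the feature-selection step.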