Rizky Tri Asmono
Teknik Informatika STMIK Swadharma

Published: 1 Document
Articles


PENERAPAN METODE K-NEAREST NEIGHBOR DAN INFORMATION GAIN PADA KLASIFIKASI KINERJA SISWA (Application of the K-Nearest Neighbor Method and Information Gain to Student Performance Classification) — Tyas Setiyorini; Rizky Tri Asmono
JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer), Vol. 5 No. 1 (2019), August 2019 issue
Publisher: LPPM Nusa Mandiri

DOI: 10.33480/jitk.v5i1.613

Abstract

Education is a very important issue in the development of a country. One way to raise the quality of education is to predict students' academic performance. The approach commonly used is ineffective because evaluation is based solely on the educator's assessment of information about students' learning progress. That information alone is not enough to form indicators for evaluating student performance and for helping students and educators improve learning and teaching. K-Nearest Neighbor is an effective method for classifying student performance, but it has problems with large vector dimensions. This study aims to predict students' academic performance using the K-Nearest Neighbor algorithm combined with the Information Gain feature selection method to reduce the vector dimensions. Several experiments were conducted to obtain an optimal configuration and produce accurate classifications. Across 10 experiments with k values from 1 to 10 on the student performance dataset, the K-Nearest Neighbor method achieved a best average accuracy of 74.068%, while the combination of K-Nearest Neighbor and Information Gain achieved a best average accuracy of 76.553%. From these results it can be concluded that Information Gain can reduce the vector dimensions, so that applying K-Nearest Neighbor with Information Gain improves the accuracy of student performance classification compared with using K-Nearest Neighbor alone.
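
As a rough illustration of the pipeline the abstract describes (Information Gain feature selection followed by K-Nearest Neighbor classification, with k varied from 1 to 10 and accuracies averaged), a minimal Python sketch follows. It is not the authors' code: the file name student_performance.csv, the "grade" label column, the number of selected features, the use of 10-fold cross-validation, and scikit-learn's mutual_info_classif (standing in as an information-gain-style scoring criterion) are all assumptions of the sketch.

    # A minimal sketch (not the authors' implementation) of the pipeline described
    # above: Information Gain-style feature selection followed by K-Nearest Neighbor
    # classification, with k varied from 1 to 10.
    import pandas as pd
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical student-performance dataset: numeric feature columns plus a
    # "grade" label column (file name and column name are assumptions).
    data = pd.read_csv("student_performance.csv")
    X = data.drop(columns=["grade"])
    y = data["grade"]

    for k in range(1, 11):
        # Baseline: K-Nearest Neighbor alone, accuracy averaged over 10-fold CV.
        knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
        acc_knn = cross_val_score(knn, X, y, cv=10, scoring="accuracy").mean()

        # K-Nearest Neighbor after keeping the 10 highest-scoring features;
        # mutual_info_classif plays the role of Information Gain, and the number
        # of features kept is an arbitrary choice for illustration.
        knn_ig = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=10),
            KNeighborsClassifier(n_neighbors=k),
        )
        acc_knn_ig = cross_val_score(knn_ig, X, y, cv=10, scoring="accuracy").mean()

        print(f"k={k}: KNN accuracy={acc_knn:.3f}, KNN+IG accuracy={acc_knn_ig:.3f}")

Comparing the two averaged scores at each k mirrors the paper's comparison of plain K-Nearest Neighbor against K-Nearest Neighbor with Information Gain feature selection.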