Poliyama, Shafiah
Unknown Affiliation

Published: 1 document
Journal: Journal of Mathematics, Computation and Statistics (JMATHCOS)

Comparison of Multiple Kernel Learning and Single Kernel Support Vector Machine for Public Opinion Classification
Poliyama, Shafiah; Achmad, Novianita; Abdussamad, Siti Nurmardia
Journal of Mathematics, Computations and Statistics Vol. 9 No. 1 (2026): Volume 09 Issue 01 (March 2026)
Publisher: Jurusan Matematika FMIPA UNM

DOI: 10.35580/qea76e33

Abstract

Social media has become a digital public space where opinions on government policies are expressed. Platform X in particular has become a major venue for openly voicing support and criticism, making it well suited to sentiment analysis. Such data is useful for understanding public perceptions of government policies, including the Makan Bergizi Gratis (MBG, Free Nutritious Meals) programme, which has elicited varied public responses since its implementation. The Support Vector Machine (SVM) is a widely used method for sentiment classification, but its performance depends heavily on kernel selection; a single kernel often fails to capture both the linear and non-linear patterns present in social media texts. This study therefore compares the performance of single-kernel SVMs and Multiple Kernel Learning (MKL) in classifying public sentiment on X. The method comprised collecting Indonesian-language tweets via scraping, text pre-processing, feature extraction with Term Frequency-Inverse Document Frequency (TF-IDF), an 80:20 train/test split, and classification with SVMs using a linear kernel, a Radial Basis Function (RBF) kernel, and a combination of the two via the MKL approach. On the same dataset and testing scheme, the MKL-based SVM achieved the best performance with an accuracy of 93.17%, while the linear and RBF kernels reached 91.81% and 92.49%, respectively.
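The pipeline described in the abstract can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the toy tweets, the labels, and the fixed kernel weight `w = 0.5` are all assumptions made for the example (MKL proper learns the kernel weights from the data; here the combined kernel is simply passed to the SVM as a precomputed Gram matrix).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for the scraped, pre-processed Indonesian tweets (hypothetical data).
texts = [
    "program mbg sangat membantu anak sekolah",
    "makan bergizi gratis ide yang bagus",
    "dukung penuh program makan gratis",
    "gizi anak membaik berkat program ini",
    "kebijakan yang tepat untuk generasi muda",
    "anggaran mbg terlalu besar dan boros",
    "program ini tidak tepat sasaran",
    "makanan yang dibagikan tidak layak",
    "kebijakan populis tanpa perencanaan matang",
    "dana sebaiknya untuk perbaikan sekolah",
]
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # 1 = positive, 0 = negative

# TF-IDF feature extraction, then an 80:20 train/test split.
X = TfidfVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.2, random_state=42, stratify=labels
)

# Single-kernel baselines: linear and RBF SVMs.
for kern in ("linear", "rbf"):
    clf = SVC(kernel=kern).fit(X_tr, y_tr)
    print(kern, accuracy_score(y_te, clf.predict(X_te)))

# Fixed-weight kernel combination as a simplified stand-in for MKL:
# K = w * K_linear + (1 - w) * K_rbf, fed to the SVM as a precomputed kernel.
w = 0.5  # assumed weight; true MKL would optimise this jointly with the SVM
K_tr = w * linear_kernel(X_tr) + (1 - w) * rbf_kernel(X_tr)
K_te = w * linear_kernel(X_te, X_tr) + (1 - w) * rbf_kernel(X_te, X_tr)
mkl = SVC(kernel="precomputed").fit(K_tr, y_tr)
print("combined", accuracy_score(y_te, mkl.predict(K_te)))
```

The precomputed-kernel route is the simplest way to combine kernels in scikit-learn, since `SVC` has no built-in multi-kernel option; dedicated MKL libraries instead learn the weight of each base kernel as part of training.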