The success of food security programs faces various challenges: most of the available data takes the form of unstructured text such as reports, news articles, and policy documents. The BERT (Bidirectional Encoder Representations from Transformers) model allows the system to read these reports and news articles by considering the relationships between words in a sentence, in contrast to Support Vector Machines (SVMs), which rely on numerical features. The dataset was expanded to improve the generalization of the IndoBERT classifier. Six commodity datasets and three labels were used in the IndoBERT modeling, with each input represented by a 768-dimensional feature vector; the model achieved an accuracy of 0.8333 (83.33%), corresponding to five correct predictions and one misclassification. For the SVM, Min-Max scaling was applied to each feature dimension before hyperparameter tuning so that every dimension contributes to finding the optimal hyperplane. Using a feature matrix X of shape (39, 10) and a target variable y of length 39, the stratified data split preserved the class proportions consistently, and on this split the tuned SVM reached an accuracy of 0.92 (92.0%). Overall, the SVM performed better than IndoBERT: the final classification evaluation showed an accuracy of 83% for IndoBERT and 87% for the SVM.
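To make the SVM branch concrete, the following is a minimal sketch of one plausible realization using scikit-learn: per-dimension Min-Max scaling, a stratified train/test split that preserves class proportions, and a small hyperparameter search for the SVM. The random feature matrix of shape (39, 10), the three-class labels, the split ratio, and the parameter grid are placeholders for illustration, not the paper's actual data or settings.

# Illustrative sketch of the SVM pipeline described above (assumed setup, not the authors' exact configuration).
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((39, 10))            # placeholder for the (39, 10) feature matrix
y = rng.integers(0, 3, size=39)     # placeholder for the 3-class target labels

# Stratified split keeps the class proportions consistent between train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Min-Max scaling of each dimension, followed by a tuned SVM.
pipeline = Pipeline([
    ("scaler", MinMaxScaler()),
    ("svm", SVC()),
])
param_grid = {"svm__C": [0.1, 1, 10], "svm__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, search.predict(X_test)))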