Found 2 Documents
Journal : Vortex

Application of LightGBM with Combined HSV, GLCM and HOG Feature Extraction for Flower Classification
Sedik, Stevani Gabriella Ayuk; Tentua, Meilany Nonsi
Vortex Vol 7, No 1 (2026)
Publisher : Institut Teknologi Dirgantara Adisutjipto

DOI: 10.28989/vortex.v7i1.3787

Abstract

This study applies the LightGBM algorithm with a combination of GLCM, HOG, and HSV feature extraction for flower image classification. The dataset consists of five types of flowers, namely Sunflower, Rose, Tulip, Dandelion, and Daisy, with a total of 4,242 images. Each image undergoes preprocessing and extraction of texture, shape, and color features before being trained with LightGBM. The results show that the proposed model achieves an accuracy above 70% in distinguishing the five flower classes. This study provides evidence that combining GLCM, HOG, and HSV with LightGBM improves classification performance and can serve as a reference for further research in the field of digital image processing.
Classification of E-KTP Application Reviews Using Bidirectional Encoder Representations from Transformers
Rusdi, Alfida Hari; Tentua, Meilany Nonsi
Vortex Vol 7, No 1 (2026)
Publisher : Institut Teknologi Dirgantara Adisutjipto

DOI: 10.28989/vortex.v7i1.3788

Abstract

The Digital Population Identity application (E-KTP Digital) is part of e-government development aimed at improving the quality of public services. However, user reviews on the Google Play Store are still grouped only by star rating, so the level of user satisfaction is not yet described in depth. This study aims to classify the sentiment of user reviews of the E-KTP Digital application using the Bidirectional Encoder Representations from Transformers (BERT) method with the Multilingual BERT (mBERT) model. A total of 15,000 reviews were collected from July 3, 2023, to May 31, 2025, and filtered down to 1,750 reviews through data cleaning and manual labeling. The dataset is divided into training and testing data with ratios of 60:40, 70:30, and 80:20. Training is conducted using the AdamW optimizer for 4 epochs with a batch size of 16. Model evaluation is planned using accuracy, precision, recall, and F1-score metrics to measure the performance of user review sentiment classification.
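The evaluation plan above (three train/test ratios, scored with accuracy, precision, recall, and F1) can be sketched as below. A dummy most-frequent-class baseline stands in for the fine-tuned mBERT model, since fine-tuning itself needs the `transformers` library, the labeled review corpus, and GPU time; the random features, binary sentiment labels, and seed are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(1)
X = rng.random((1750, 8))            # stand-in for 1,750 encoded reviews
y = rng.integers(0, 2, size=1750)    # assumed binary sentiment labels

results = {}
for test_size in (0.40, 0.30, 0.20):  # the paper's 60:40, 70:30, 80:20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=42)
    # Placeholder model; the study fine-tunes mBERT with AdamW instead
    model = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = accuracy_score(y_te, pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_te, pred, average="macro", zero_division=0)
    results[test_size] = (acc, p, r, f1)
```

Stratified splitting keeps the sentiment class balance identical across the three ratios, so differences in the scores reflect the split size rather than label skew.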