S. Kuba, Muhammad Syafaat
Unknown Affiliation

Published: 4 Documents
Articles


Integrating multi-criteria decision making and public sentiment analysis for sustainable urban green space planning
S. Kuba, Muhammad Syafaat; Faisal, Muhammad; Nurnawaty, Nurnawaty; Abdul Rahman, Titik Khawa; Syamsuri, Andi Makbul; Hayat, Muhyiddin AM; Bakti, Rizki Yusliana
Bulletin of Electrical Engineering and Informatics Vol 15, No 2: April 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v15i2.11168

Abstract

Sustainable planning of green open spaces (GOS) requires decision-making models that combine expert evaluation with public input. This study proposes a novel hybrid framework that integrates multi-criteria group decision making (MCGDM) with public sentiment analysis to support community-based and data-driven urban planning. The workflow consists of evaluating 25 community-proposed GOS locations using stepwise weight assessment ratio analysis (SWARA) for criteria weighting and MABAC-BORDA for multi-criteria ranking, resulting in 11 feasible alternatives. To incorporate community perspectives, a term frequency-inverse document frequency-support vector machine (TF-IDF–SVM) classifier was applied to 1500 public comments, where SVM achieved the highest accuracy (0.80–0.96). The integrated approach improves ranking stability, reduces decision ambiguity, and strengthens alignment between expert judgment and community sentiment. This study contributes a transparent, participatory decision-support model that unifies MCGDM and sentiment analysis to enhance the effectiveness of sustainable GOS planning.
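As an illustration of the TF-IDF–SVM sentiment step described in the abstract, here is a minimal Python sketch assuming scikit-learn; the comments and labels below are placeholders standing in for the 1,500 labeled public comments, and the kernel and vectorizer settings are assumptions the abstract does not specify.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the 1,500 labeled public comments.
comments = [
    "great park, very green", "love this location", "perfect for families",
    "too noisy and dirty", "poor access, bad lighting", "unsafe at night",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=1 / 3, stratify=labels, random_state=0)

# Turn raw text into TF-IDF vectors, then fit a linear-kernel SVM.
vectorizer = TfidfVectorizer()
clf = SVC(kernel="linear")
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, preds))

A linear kernel is a common default for sparse TF-IDF features; the reported 0.80–0.96 accuracy range would come from evaluating such a classifier on the real comment corpus.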
Implementation of a Real-Time Website-Based Boycott Product Detection System Using the YOLOv10 Method
Nur Rahman, Ahmad; Habi Talib, Emil Agusalim; Rachman, Fahrim Irhamna; Bakti, Rizki Yusliana; Faisal, Muhammad; S. Kuba, Muhammad Syafaat
PROGRESS Vol 18 No 1 (2026): April
Publisher : P3M STMIK Profesional Makassar

DOI: 10.56708/progres.v18i1.525

Abstract

Manual identification of boycott products remains a challenge for the public due to limited access to information and the complexity of brand affiliations. This study aims to develop a real-time, website-based boycott product detection system using the You Only Look Once version 10 (YOLOv10) algorithm. The dataset consists of images of food and beverage product packaging collected from various online sources, annotated using the bounding box method, and classified into five categories. The model was trained and tested using separate test data, while performance evaluation was conducted using a confusion matrix with precision, recall, and F1-score metrics. In addition, functional testing of the system was performed using the Black Box Testing method. The results indicate that the YOLOv10 model is capable of detecting boycott products with good performance and can be effectively integrated into a real-time web-based system. The proposed system is expected to assist users in identifying boycott products more quickly and accurately.
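The paper does not publish its implementation, but an inference sketch along these lines would fit the described pipeline, assuming the ultralytics Python package (which ships YOLOv10 support) and a checkpoint fine-tuned on the five packaging classes; the weight and image file names are hypothetical placeholders.

from ultralytics import YOLO

# Hypothetical checkpoint fine-tuned on the five packaging categories.
model = YOLO("boycott_yolov10.pt")

# Run detection on one product image and report each box with its class
# label and confidence, mirroring what the web front end would display.
results = model("product_photo.jpg", conf=0.5)
for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(f"{cls_name}: {float(box.conf):.2f}", box.xyxy.tolist())

In the described system, a loop like this would sit behind a web endpoint that receives camera frames and returns the detections in real time.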
Classification of Traditional Medicinal Plants Based on Fruit and Leaf Images
Kusumawardani, Nurul; Danuputri, Chyquitha; Darniati; Faisal, Muhammad; A.M Hayat, Muhyiddin; S. Kuba, Muhammad Syafaat; Anggreani, Desi
PROGRESS Vol 18 No 1 (2026): April
Publisher : P3M STMIK Profesional Makassar

DOI: 10.56708/progres.v18i1.534

Abstract

Indonesia is a megabiodiversity country with extensive use of traditional medicinal plants; however, plant identification in natural environments remains largely manual and error-prone. Recent advances in deep learning, particularly Vision Transformer (ViT), provide a promising solution by effectively capturing global spatial features for image classification. This study applies a ViT-Base/16 model to automatically classify fruit and leaf images of Indonesian medicinal plants. The dataset comprises 1,000 field-collected images from Galung Village, West Sulawesi, covering 20 classes (10 medicinal and 10 non-medicinal plants). The model was fine-tuned using the AdamW optimizer with a learning rate of 2×10⁻⁵ and trained for 30 epochs with cosine annealing. The proposed approach achieved high performance, with 99.33% accuracy, 99.41% precision, 99.33% recall, and a 99.33% F1-score, while binary classification between medicinal and non-medicinal plants reached 100% accuracy. The system was deployed as a Flask-based web application, demonstrating reliable functionality and practical response times. Overall, the results confirm the effectiveness of Vision Transformer for medicinal plant classification under natural conditions and highlight its potential to support digital documentation, education, and the preservation of local ethnobotanical knowledge.
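The reported training setup (ViT-Base/16, AdamW at 2×10⁻⁵, 30 epochs, cosine annealing) maps directly onto standard PyTorch and Hugging Face code. The sketch below assumes the google/vit-base-patch16-224-in21k checkpoint as the backbone and substitutes a dummy batch for the 1,000 field-collected images; both are assumptions, not details taken from the paper.

import torch
from transformers import ViTForImageClassification

# 20 classes: 10 medicinal and 10 non-medicinal plants.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=20)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
epochs = 30
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

# Dummy batch standing in for the 1,000 field-collected 224x224 images.
train_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 20, (8,)))]

model.train()
for epoch in range(epochs):
    for pixel_values, labels in train_loader:
        optimizer.zero_grad()
        out = model(pixel_values=pixel_values, labels=labels)
        out.loss.backward()  # cross-entropy computed inside the model
        optimizer.step()
    scheduler.step()         # cosine-anneal the learning rate once per epoch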
Student Emotion Recognition from Low-Quality Videos Using Multimodal Deep Learning
Taiba, Andi Mawadda; Bakti, Rizki Yusliana; Faisal, Muhammad; S. Kuba, Muhammad Syafaat; Anas, Lukman; H. T, Emil Agusalim; Rahman, Fahrim I.
JURNAL INFOTEL Vol 18 No 1 (2026): February
Publisher : LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v18i1.1523

Abstract

Emotion recognition plays a critical role in intelligent e-learning systems by enabling adaptive feedback and timely pedagogical interventions based on students’ affective states. However, most existing approaches rely heavily on visual facial cues, which are highly vulnerable to real-world conditions such as low-resolution video, partial facial occlusion, poor lighting, and unstable network connections commonly encountered in online learning environments. These limitations significantly degrade the performance of unimodal deep learning models. To address this challenge, this study proposes a multimodal deep learning framework for student emotion recognition that is robust to low-quality and occluded video input. The proposed model integrates visual and audio modalities through a hybrid architecture, combining a lightweight CNN-based visual feature extractor with a BiLSTM-based speech emotion model. An attention-based fusion mechanism is employed to adaptively weight cross-modal features, allowing the system to compensate for degraded or missing visual information using complementary acoustic cues. Experimental evaluations are conducted using publicly available datasets representative of realistic online learning scenarios, including DAiSEE and RAVDESS, with additional augmentation to simulate varying levels of occlusion and video degradation. The results demonstrate that the multimodal approach consistently outperforms unimodal baselines, particularly under high occlusion conditions, while maintaining computational efficiency suitable for near real-time deployment. These findings confirm that multimodal fusion with attention mechanisms provides a more resilient and practical solution for emotion-aware e-learning systems operating under non-ideal input conditions.
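A minimal PyTorch sketch of the attention-based fusion idea follows; the layer sizes, input shapes, and convex-combination gating are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, n_classes=7, dim=128):
        super().__init__()
        # Lightweight CNN over face frames (batch, 3, 64, 64).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        # BiLSTM over acoustic feature sequences (batch, time, 40 MFCCs).
        self.lstm = nn.LSTM(40, dim // 2, batch_first=True, bidirectional=True)
        # Attention weights decide how much to trust each modality, so a
        # degraded visual stream can be down-weighted in favor of audio.
        self.attn = nn.Linear(2 * dim, 2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, frames, audio):
        v = self.cnn(frames)                 # (batch, dim)
        a, _ = self.lstm(audio)
        a = a[:, -1, :]                      # last time step, (batch, dim)
        w = torch.softmax(self.attn(torch.cat([v, a], dim=1)), dim=1)
        fused = w[:, :1] * v + w[:, 1:] * a  # convex combination of modalities
        return self.head(fused)

model = AttentionFusion()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 50, 40))
print(logits.shape)  # torch.Size([2, 7])

The point of the learned weights is that when the visual stream is occluded or degraded, the model can shift mass toward the audio branch, which is the resilience property the abstract emphasizes.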