Articles
Found 38 Documents

Implementation of Convolutional Neural Networks for Eyeglass Product Image Retrieval: A Comparative Study of ResNet-50 and MobileNetV2
Taufik, Handri; Anggai, Sajarwo; Taryo, Taswanda
Jurnal Ilmiah Multidisiplin Indonesia (JIM-ID) Vol. 5 No. 02 (2026): Jurnal Ilmiah Multidisiplin Indonesia (JIM-ID), February 2026
Publisher : Sean Institute

Abstract

The increasing similarity among eyewear product designs poses significant challenges for conventional text-based search systems, highlighting the need for effective Content-Based Image Retrieval (CBIR) approaches. This study proposes a CNN-based CBIR system for eyeglass frame and sunglasses retrieval, employing a comparative analysis of ResNet50 and MobileNetV2 as feature extractors. The dataset comprises 4,500 gallery images and 300 query images, with feature similarity measured using cosine similarity and accelerated through FAISS indexing. Experimental results indicate that ResNet50 achieves higher recall (0.0622), demonstrating its ability to capture more complex visual features. In contrast, MobileNetV2 provides superior ranking performance, achieving an mAP of 0.6091 and an MRR of 0.1427, outperforming ResNet50 (mAP of 0.5019 and MRR of 0.0713), while also reducing feature extraction time (0.1348 s versus 0.2023 s). These findings suggest that ResNet50 is more suitable for accuracy-oriented retrieval tasks, whereas MobileNetV2 is better suited for real-time and resource-constrained applications.
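The retrieval step described in this abstract (rank gallery images by cosine similarity of CNN feature vectors) can be sketched in a few lines. This is a minimal illustration only: it assumes feature vectors have already been extracted by ResNet50 or MobileNetV2, uses tiny made-up 3-dimensional vectors instead of real embeddings, and replaces the study's FAISS index with a brute-force search.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_gallery(gallery, query):
    """Return gallery indices ordered by descending cosine similarity."""
    return sorted(range(len(gallery)), key=lambda i: -cosine(gallery[i], query))

# Toy 3-dimensional "features" for four gallery images (illustrative only).
gallery = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (0.7, 0.7, 0.0)]
query = (0.9, 0.1, 0.0)
print(rank_gallery(gallery, query))  # -> [0, 3, 1, 2]
```

In practice the gallery side would be indexed once (e.g. with FAISS on L2-normalized vectors, where inner product equals cosine similarity) so each query avoids the O(gallery) scan above.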
Analysis and Evaluation of Qur’an Translation Topics Using Classical, Neural, and Transformer-Based Topic Modelling
Kurnia, Akhmad Rinaldy; Anggai, Sajarwo; Handayani, Murni
Jurnal Ilmiah Multidisiplin Indonesia (JIM-ID) Vol. 5 No. 02 (2026): Jurnal Ilmiah Multidisiplin Indonesia (JIM-ID), February 2026
Publisher : Sean Institute

Abstract

Topic modelling is an important approach for extracting latent thematic structures from text corpora, including religious texts that are characterized by dense semantics and short documents. This study aims to compare the performance of several topic modelling methods, namely Latent Dirichlet Allocation (LDA), Biterm Topic Model (BTM), Combined Topic Model (CombinedTM), and BERTopic, in extracting topics from the Indonesian translation of the Qur’an. The dataset consists of 6,236 verses, with each verse treated as a single document. Topic quality is evaluated using two main metrics: coherence score (C_v) and topic diversity. The experimental results show that CombinedTM achieves the highest coherence score, with a maximum value of approximately 0.52 at K = 10 topics, followed by BTM, which demonstrates relatively high and stable coherence scores (around 0.50) across certain topic number variations. LDA yields the highest topic diversity, exceeding 0.90, but with lower coherence scores than the other models, indicating its limitations in preserving semantic coherence in short texts. Meanwhile, BERTopic exhibits consistently high topic diversity (0.85–0.88) across different numbers of topics, although its bag-of-words–based coherence scores do not always increase significantly. These findings highlight that the choice of topic modelling method should be aligned with the characteristics of the corpus and the objectives of thematic analysis, particularly in the context of short-form religious texts.
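Of the two metrics this abstract reports, topic diversity is the simpler one: the fraction of unique words among the pooled top-k words of all topics (1.0 means no topic shares a word with another). A hedged sketch, using invented English placeholder topics rather than anything from the Qur’an corpus:

```python
def topic_diversity(topics):
    """Fraction of unique words across all topics' top-word lists."""
    words = [w for topic in topics for w in topic]
    return len(set(words)) / len(words)

# Hypothetical top-3 word lists for three topics (illustrative only).
topics = [
    ["charity", "wealth", "giving"],
    ["patience", "prayer", "faith"],
    ["patience", "trial", "reward"],  # "patience" repeats across topics
]
print(round(topic_diversity(topics), 3))  # 8 unique words out of 9 -> 0.889
```

The coherence score (C_v) is considerably more involved (sliding-window co-occurrence statistics over the corpus) and is normally computed with a library such as gensim rather than by hand.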
Analisis Stok Barang Menggunakan Algoritma K-Nearest Neighbor Dan Naïve Bayes Untuk Meningkatkan Efisiensi Persediaan Barang Retail Pada PT. XXX
Restu Putra, Catur; Anggai, Sajarwo; Susanto, Agung Budi
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang

Abstract

Efficient inventory management is a strategic necessity for retail industries operating under highly fluctuating demand. PT XXX continues to experience inaccuracies in determining stock requirements because its analysis is still carried out manually using a three-month average sales calculation. This approach cannot capture actual warehouse variations, resulting in frequent overstock and understock conditions. This study develops a machine learning–based stock classification model using the K-Nearest Neighbor (K-NN) and Naïve Bayes algorithms, drawing on key operational warehouse variables: average sales, ending stock, and Days of Inventory (DOI). The dataset consists of 4,324 records from November 2024 to October 2025 and was processed using Orange Data Mining. Performance was evaluated using accuracy, precision, recall, and the confusion matrix. The results show that K-NN achieved the best performance, with 96.80% accuracy in the prediction model and 93.00% in the Test & Score evaluation, outperforming Naïve Bayes at approximately 90%. The study also produced a two-level classification, mapping stock status (High/Low) and warehouse recommendations (Low/Enough/Excess), which revealed a significant imbalance between the High and Low categories. These findings demonstrate that machine learning–based classification can enhance stock assessment accuracy and support more adaptive and efficient restocking decisions in retail inventory management.
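The K-NN classification step this abstract describes can be illustrated with a hand-rolled nearest-neighbor vote over the three named features (average sales, ending stock, DOI). All numbers below are invented, and the study itself used Orange Data Mining rather than custom code; this sketch only shows the mechanism.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k training points nearest to x (Euclidean)."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical (avg_sales, ending_stock, days_of_inventory) -> stock status.
train = [(120, 30, 8), (110, 25, 7), (90, 40, 12),
         (20, 200, 90), (15, 180, 85), (25, 220, 95)]
labels = ["Low", "Low", "Low", "High", "High", "High"]

print(knn_predict(train, labels, (18, 190, 88)))  # -> "High"
```

In a real pipeline the features would be scaled before computing distances, since ending stock and DOI live on very different ranges.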
Analisis BERT dan LDA Untuk Ekstraksi Kebijakan Ekonomi Presiden Prabowo Subianto
Muhammad Najwah; Sajarwo Anggai; Sudarno
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang

Abstract

Economic policies introduced at the beginning of President Prabowo Subianto’s administration have generated diverse public discourses reflected in online media coverage. The large volume of textual data necessitates computational approaches to extract information systematically. This study aims to identify, label, and compare major economic policy topics using topic modeling techniques, namely Latent Dirichlet Allocation (LDA) and BERTopic. The dataset consists of 1,000 economic news articles collected through web scraping from an online news portal. Text preprocessing includes normalization, case folding, cleaning, tokenization, and lemmatization. LDA was implemented using a TF-IDF representation and evaluated with the Coherence Score (c_v). BERTopic employed IndoBERT embeddings, UMAP for dimensionality reduction, and HDBSCAN for hierarchical clustering, with evaluation based on topic coherence and semantic interpretability. The results show that LDA generated eight main topics with a Coherence Score (c_v) of 0.61, indicating moderate performance but limited semantic representation, leading to overlapping topics. In contrast, BERTopic produced nine main topics with a higher Coherence Score (c_v) of 0.72 and clearer, more contextual topic labels, including fiscal policy, energy, capital markets, and economic stimulus. Overall, BERTopic outperformed LDA in extracting and labeling economic policy topics due to its superior ability to capture semantic context and form stable topic clusters.
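The preprocessing chain this abstract lists (normalization, case folding, cleaning, tokenization, lemmatization) can be sketched as below. The stopword list is a hypothetical stand-in, and lemmatization is omitted because the study's exact lemmatizer is not specified; this is a rough illustration, not the authors' pipeline.

```python
import re

STOPWORDS = {"dan", "yang", "di", "untuk"}  # hypothetical subset only

def preprocess(text):
    """Case folding, cleaning, tokenization, and stopword removal."""
    text = text.lower()                    # case folding
    text = re.sub(r"[^a-z\s]", " ", text)  # cleaning: strip digits/punctuation
    tokens = text.split()                  # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("Kebijakan Ekonomi 2025, untuk stimulus dan fiskal!"))
# -> ['kebijakan', 'ekonomi', 'stimulus', 'fiskal']
```

The cleaned token lists would then feed the TF-IDF representation for LDA, while BERTopic consumes the raw or lightly cleaned sentences, since IndoBERT embeddings benefit from intact context.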
Analisis Topik Dan Sentimen Berbasis Algoritma Latent Dirichlet Allocation (LDA) Dan Bidirectional Encoder Representations From Transformers (BERT): Studi Kasus Ulasan Pelanggan Pada E-Commerce ruparupa.com
Permana, Surya; Sajarwo Anggai; Taswanda Taryo
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang

Abstract

The growth of e-commerce in Indonesia has led to an increasing volume of customer reviews containing vital information. These reviews are generally unstructured text, necessitating text analysis methods to extract meaningful insights. This study analyzes topics and sentiments in customer reviews of the e-commerce platform ruparupa.com using the Latent Dirichlet Allocation (LDA) and Bidirectional Encoder Representations from Transformers (BERT) algorithms. LDA is used to identify the main topics frequently discussed by customers, while BERT is employed to classify review sentiments into positive, negative, and neutral categories. Sentiment labels were generated automatically (auto-labeling) using a lexicon-based approach with VADER, and the preprocessing stage includes cleaning, case folding, and stemming with the Sastrawi library to ensure the quality of the input data. LDA is implemented to extract latent topic structures, which are then mapped into five main categories: Price, Application, Service, Product Quality, and Delivery. Furthermore, the DistilBERT model is fine-tuned using the AdamW optimizer for 3 epochs. The sentiment analysis results indicate that the model performs very strongly, as reflected by high accuracy and consistently optimal precision, recall, and F1-scores across all sentiment classes. This sentiment distribution reflects the level of user satisfaction with ruparupa.com's services. The combination of LDA and BERT proves effective in providing an overview of key issues and customer perceptions.
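The lexicon-based auto-labeling step mentioned in this abstract can be illustrated with a toy scorer: sum per-word sentiment weights and threshold the total into positive/negative/neutral. The tiny lexicon and thresholds below are invented for illustration; the study used a lexicon-based approach with VADER, which additionally handles negation, intensifiers, and punctuation.

```python
# Hypothetical Indonesian mini-lexicon (word -> sentiment weight).
LEXICON = {"bagus": 1.0, "cepat": 0.5, "murah": 0.5, "rusak": -1.0, "lambat": -0.5}

def auto_label(tokens, threshold=0.25):
    """Sum lexicon scores over tokens and map to a three-class label."""
    score = sum(LEXICON.get(t, 0.0) for t in tokens)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(auto_label(["barang", "bagus", "pengiriman", "cepat"]))  # -> "positive"
print(auto_label(["produk", "rusak"]))                         # -> "negative"
print(auto_label(["barang", "sampai"]))                        # -> "neutral"
```

Labels produced this way would then serve as training targets for the fine-tuned DistilBERT classifier.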
SOSIALISASI ETIKA KECERDASAN ARTIFISIAL DAN PEMANFAATAN DALAM BIDANG PEMBELAJARAN DI ORGANISASI MASYARAKAT GENERASI REMAJA (GEMA)
Anggai, Sajarwo; Musyafa, Ahmad; Toyib, Wildan
KOMMAS: Jurnal Pengabdian Kepada Masyarakat Vol. 7 No. 1 (2026): KOMMAS: JURNAL PENGABDIAN KEPADA MASYARAKAT
Publisher : KOMMAS: Jurnal Pengabdian Kepada Masyarakat

Abstract

In an era of technological advancement, including in education, artificial intelligence (AI) is becoming increasingly important. However, several issues still need to be addressed as a follow-up to the adoption of AI technology, particularly basic understanding, ethics, and practical application. The GEMA organization had never taken part in a course or outreach session on artificial intelligence ethics. In response, the community-service (PKM) team of the Master's Program in Informatics Engineering at Universitas Pamulang held an event titled "Sosialisasi Etika Kecerdasan Artifisial dan Pemanfaatannya dalam Bidang Pembelajaran" on 19 October 2025. The event covered moral philosophy, how to use ChatGPT for scientific writing, and the development of artificial intelligence. Evaluation of the participants showed a positive level of acceptance in terms of organization, materials, instructors, and creativity. Through an interactive approach, participants are expected to learn to use AI wisely, creatively, and responsibly.
Performance Evaluation of ARIMA, LSTM, and Hybrid ARIMA–LSTM Models for Daily Solar Energy Prediction in Bali
Aslimah; Anggai, Sajarwo; Tukiyat
Jurnal Teknologi Informatika dan Komputer Vol. 12 No. 1 (2026): Jurnal Teknologi Informatika dan Komputer
Publisher : Universitas Mohammad Husni Thamrin

DOI: 10.37012/jtik.v12i1.3283

Abstract

Solar energy is one of the most promising renewable energy sources in Indonesia, particularly in Bali, which has relatively high solar irradiance throughout the year. However, daily variability in solar radiation caused by weather conditions and atmospheric factors leads to fluctuations in solar energy production, making accurate forecasting essential for effective energy planning. This study aims to evaluate the performance of the Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and hybrid ARIMA–LSTM models in forecasting daily solar energy at the Jembrana Climatological Station, Bali. The dataset consists of 10-minute solar radiation observations obtained from an Automatic Weather Station (AWS) for the period January 2023 to September 2025, which were aggregated into daily solar energy values expressed in kWh/m². Data preprocessing included missing value handling, outlier correction, normalization, and an 80:20 split between training and testing datasets. Model performance was evaluated using Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The results show that the hybrid ARIMA–LSTM model achieved the best performance, with an RMSE of 0.960 kWh/m², MAE of 0.771 kWh/m², and MAPE of 22.245%, outperforming both the ARIMA and LSTM models. These findings indicate that the hybrid approach is more effective in capturing both linear and nonlinear characteristics of daily solar energy time series.
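The three error metrics this abstract reports (RMSE, MAE, MAPE) can be written out directly. The sample values below are arbitrary and only check the arithmetic, not the study's results.

```python
import math

def rmse(y, yhat):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean Absolute Percentage Error (requires nonzero actuals)."""
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Toy daily solar-energy actuals vs forecasts in kWh/m^2 (made up).
y, yhat = [4.0, 5.0, 2.0], [3.5, 5.5, 2.5]
print(rmse(y, yhat), mae(y, yhat), round(mape(y, yhat), 2))
```

In a hybrid ARIMA–LSTM setup, ARIMA typically models the linear component and the LSTM is trained on the residuals, with these metrics computed on the combined forecast over the held-out 20% split.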
Comparison of Faster R-CNN and YOLO v12 on Passport Text Extraction Based on Optical Character Recognition
Samosir, Masniari; Anggai, Sajarwo; Taryo, Taswanda
Jurnal Teknologi Informatika dan Komputer Vol. 12 No. 1 (2026): Jurnal Teknologi Informatika dan Komputer
Publisher : Universitas Mohammad Husni Thamrin

DOI: 10.37012/jtik.v12i1.3307

Abstract

Current developments in information technology are driving the digitalization of official identity documents such as passports to improve service efficiency, yet the process still faces efficiency and accuracy challenges due to manual data entry. This study aims to compare the performance of Faster R-CNN and YOLO v12 in an automatic text extraction system based on Optical Character Recognition (OCR). The research employed an experimental method with a comparative approach using 31 preprocessed passport images. YOLO v12 was integrated with EasyOCR, while Faster R-CNN was combined with a PyTorch-based OCR module. The evaluation metrics included mAP, Character Accuracy Rate (CAR), Word Error Rate (WER), F1-score, and inference time. The results indicate that YOLO v12 outperforms Faster R-CNN in object detection, achieving an mAP@50 of 95.0% and mAP@50–95 of 90.0%, compared to 93.0% and 89.0%, respectively. In terms of text extraction accuracy, Faster R-CNN achieved a CAR of 50.01% and an F1-score of 55.75%, slightly higher than YOLO v12 with a CAR of 47.72% and an F1-score of 53.84%. However, YOLO v12 produced a lower WER and a faster inference time of 2.4202 seconds (0.45 FPS). The findings suggest that YOLO v12 excels in efficiency and detection performance, while Faster R-CNN performs better in specific text extraction accuracy.
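The Word Error Rate (WER) metric used in this abstract is the word-level edit distance (substitutions + insertions + deletions) divided by the reference length, a possible sketch of which is shown below. The passport strings in the example are fabricated, not study data.

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

# One OCR substitution out of three reference words -> WER = 1/3.
print(wer("REPUBLIK INDONESIA PASPOR", "REPUBLIK INDONESA PASPOR"))
```

Character Accuracy Rate (CAR) follows the same idea at the character level, reported as an accuracy (1 minus the character error rate) instead of an error rate.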