Articles

Found 23 Documents

Comparison of Template Matching Algorithm and Feature Extraction Algorithm in Sundanese Script Transliteration Application using Optical Character Recognition Gerhana, Yana Aditia; Atmadja, Aldy Rialdy; Padilah, Muhamad Farid
JOIN (Jurnal Online Informatika) Vol 5, No 1 (2020)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v5i1.580

Abstract

The phenomenon observed in West Java Province is that, in this digital era, people do not preserve their culture, especially regional literature such as the Sundanese script. Prior research combined the Sundanese script with an application using a Feature Extraction algorithm, but offered no comparison with other algorithms and could not recognize Sundanese numerals. To extend that research, a Sundanese script application was built with OCR (Optical Character Recognition) using the Template Matching algorithm and a modified Feature Extraction algorithm, with pre-processing stages that include luminosity and thresholding algorithms. The two algorithms were compared on accuracy and processing time for recognizing digital writing and handwriting. For digital writing, the Template Matching algorithm achieved 87% word recognition accuracy with a 236 ms processing time and 97.6% character recognition accuracy with a 227 ms processing time, while Feature Extraction achieved 98% word recognition accuracy with a 73.6 ms processing time and 100% character recognition accuracy with a 66 ms processing time. For handwriting, Feature Extraction achieved 83% character recognition accuracy and 75% word recognition accuracy, while Template Matching achieved 70% character recognition accuracy and 66% word recognition accuracy.
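As a rough illustration of the template-matching idea this abstract describes (not the paper's actual implementation), a binarized glyph can be scored against stored templates by the fraction of agreeing pixels; the 3x3 glyph shapes below are invented toy examples:

```python
# Minimal template-matching sketch on binarized glyphs. The templates
# here are hypothetical toy shapes, not the paper's Sundanese templates.

def match_score(glyph, template):
    """Fraction of pixels that agree between two equal-sized binary grids."""
    total = len(glyph) * len(glyph[0])
    agree = sum(g == t
                for grow, trow in zip(glyph, template)
                for g, t in zip(grow, trow))
    return agree / total

def classify(glyph, templates):
    """Return the label of the template with the highest agreement score."""
    return max(templates, key=lambda label: match_score(glyph, templates[label]))

TEMPLATES = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}

sample = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
print(classify(sample, TEMPLATES))  # → I
```

In practice the pre-processing the abstract mentions (luminosity conversion and thresholding) is what produces such binary grids before matching.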
Comparison of search algorithms in Javanese-Indonesian dictionary application Yana Aditia Gerhana; Nur Lukman; Arief Fatchul Huda; Cecep Nurul Alam; Undang Syaripudin; Devi Novitasari
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 18, No 5: October 2020
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v18i5.14882

Abstract

This study aims to compare the performance of the Boyer-Moore, Knuth-Morris-Pratt, and Horspool algorithms in searching for the meaning of words in a Javanese-Indonesian dictionary application, in terms of accuracy and processing time. Performance testing was used to evaluate the algorithm implementations in the application. The test results show that the Boyer-Moore and Knuth-Morris-Pratt algorithms have an accuracy rate of 100%, while the Horspool algorithm reaches 85.3%. In processing time, the Knuth-Morris-Pratt algorithm had the fastest average at 25 ms, Horspool averaged 39.9 ms, and the Boyer-Moore algorithm averaged 44.2 ms. In the complexity tests, the Boyer-Moore algorithm had an overall operation count of 26n², while Knuth-Morris-Pratt and Horspool each had 20n².
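The string-search comparison above can be illustrated with a minimal Knuth-Morris-Pratt implementation; this is a generic textbook sketch, not the authors' code, and the Javanese sample strings are invented:

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: return the index of the first occurrence of
    pattern in text, or -1. The failure table lets a mismatch skip
    re-examining characters that are already known to match."""
    if not pattern:
        return 0
    # Build the failure table: longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, never moving backwards in it.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("sugeng enjing", "enjing"))  # 7
```

Boyer-Moore and Horspool differ mainly in scanning the pattern right-to-left and using bad-character shift tables, which is why their operation counts in the abstract differ from KMP's.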
Breakdown film script using parsing algorithm Agung Wahana; Diena Rauda Ramdania; Dhanis Al Ghifari; Ichsan Taufik; Faiz M. Kaffah; Yana Aditia Gerhana
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 18, No 4: August 2020
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v18i4.14849

Abstract

A breakdown script decomposes a scenario into parts that describe each detail of a scene for shooting. In this study, the scenario is broken down into more detailed parts using a parsing algorithm. The film scripts used are written in Bahasa Indonesia. The process starts with a film script/scenario file in FBX format uploaded to the website, which is then decomposed by the parsing algorithm into film elements such as cast members, extras, props, costumes, makeup, vehicles, stunts, special effects, music, and sound. The results of this breakdown are compiled into sheets according to film elements. The purpose of this research is to produce breakdown sheets from film scripts according to film elements. The parsing algorithm tests produced correct results for 12 of 19 scenes.
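A sketch of how a parsing step like the one described might split a script into scenes: the heading format, field names, and the all-caps convention for character names below are assumptions for illustration, since the paper's actual grammar is not given here.

```python
import re

# Hypothetical scene-heading format: "1. INT. RUMAH - MALAM"
# (scene number, INT/EXT, location, time of day).
HEADING = re.compile(r"^\d+\.\s+(INT|EXT)\.\s+(?P<location>.+?)\s*-\s*(?P<time>.+)$")

def breakdown(script):
    """Split a script into scenes and collect all-caps character names."""
    scenes = []
    for line in script.splitlines():
        stripped = line.strip()
        m = HEADING.match(stripped)
        if m:
            scenes.append({"location": m.group("location"),
                           "time": m.group("time"),
                           "cast": []})
        elif scenes and stripped and stripped.isupper():
            # Assumed convention: a lone all-caps line names a character.
            scenes[-1]["cast"].append(stripped)
    return scenes

demo = """1. INT. RUMAH - MALAM
BUDI
Selamat malam.
2. EXT. JALAN - PAGI
SITI
"""
print(breakdown(demo))
```

A real breakdown would need further rules to separate props, costumes, vehicles, and the other film elements the abstract lists.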
Game and Application Purchasing Patterns on Steam using K-Means Algorithm Aulia, Salman Fauzan Fahri; Gerhana, Yana Aditia; Nurlatifah, Eva
Jurnal Sisfokom (Sistem Informasi dan Komputer) Vol. 13 No. 3 (2024): NOVEMBER
Publisher : ISB Atma Luhur

DOI: 10.32736/sisfokom.v13i3.2214

Abstract

Online games are visual games that utilize the internet or LAN networks. With the growth of the gaming industry, platforms like Steam offer a wide variety of games, making it challenging for users to decide which game to play. This study employs the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology to address this issue by understanding user preferences. The k-means algorithm clusters game data based on similar characteristics, helping users and developers identify the most popular game types. The data, sourced from Kaggle and originally obtained through the Steam API and Steamspy, consists of 85,103 entries. A normalization process is applied to improve calculation accuracy. The elbow method determines the optimal number of clusters, and the k-means algorithm produces three clusters. The evaluation uses the silhouette coefficient, which measures how close each point is to its own cluster relative to other clusters, and precision purity, which compares cluster labels against actual labels by assigning a value of 1 (correct) or 0 (incorrect). The study finds an average silhouette coefficient of 0.345 and a precision purity value of 0.734, indicating that the k-means algorithm performs well on the precision purity metric. The findings reveal that free-to-play games are the most popular among users, while the "Animation & Modelling" category is the most expensive based on price comparisons.
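A minimal sketch of the k-means (Lloyd's) iteration the study applies, shown here on invented 1-D price data rather than the 85,103-entry Steam dataset:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 1-D data: assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # An empty cluster keeps its previous centroid.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

# Toy prices with two obvious groups: free-to-play (~0) and paid (~10).
prices = [0.0, 0.0, 0.5, 9.0, 10.0, 11.0]
print(kmeans(prices, 2))
```

The elbow method mentioned in the abstract would repeat this for several values of k and pick the point where the within-cluster error stops dropping sharply.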
Vector space model, term frequency-inverse document frequency with linear search, and object-relational mapping Django on hadith data search Taufik, Ichsan; Agra, Agra; Gerhana, Yana Aditia
Computer Science and Information Technologies Vol 5, No 3: November 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v5i3.p306-314

Abstract

For Muslims, the Hadith ranks as the secondary legal authority after the Quran. This research leverages hadith data to streamline the search process within the nine imams' compendium using the vector space model (VSM) approach. The primary objective is to enhance the efficiency and effectiveness of search within Hadith collections by implementing pre-filtering techniques. The study aims to demonstrate the potential of linear search and Django object-relational mapping (ORM) filters in reducing search times and improving retrieval performance, thereby facilitating quicker and more accurate access to relevant Hadiths. Prior studies have indicated that VSM is inefficient for large data sets because it assigns weights to every term across all documents, regardless of whether they include the search keywords; consequently, the more documents there are, the longer the weighting phase becomes. To address this, the current research pre-filters documents prior to weighting, using linear search and Django ORM as filters. Testing on 62,169 hadiths with 20 keywords showed that the average VSM search duration was 51 seconds. With the linear-search and Django ORM filters, the times were reduced to 7.93 and 8.41 seconds, respectively. The recall@10 rates were 79% and 78.5%, with MAP scores of 0.819 and 0.814, respectively.
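The pre-filter-then-weight idea can be sketched in a few lines: linearly scan for documents containing a query term, then apply TF-IDF weighting and cosine ranking only to that smaller pool. This is a simplified illustration with invented sample documents, not the authors' Django implementation:

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Pre-filter with a linear scan, then rank the surviving pool by
    TF-IDF cosine similarity against the query."""
    q_terms = query.lower().split()
    # Linear-search pre-filter: keep docs containing any query term.
    pool = [d for d in docs if any(t in d.lower().split() for t in q_terms)]
    n = len(pool)
    if n == 0:
        return []
    # Document frequencies and smoothed IDF over the filtered pool only.
    df = Counter(t for d in pool for t in set(d.lower().split()))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def vec(text):
        tf = Counter(text.lower().split())
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    qv = vec(query)
    return sorted(pool, key=lambda d: cosine(qv, vec(d)), reverse=True)

docs = ["niat amal ibadah", "ilmu dan amal", "puasa ramadan"]
print(tfidf_rank("amal ibadah", docs))
```

The payoff the abstract reports comes from never weighting documents that cannot match: only the filtered pool enters the TF-IDF phase.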
Enhancing Abstractive Multi-Document Summarization with Bert2Bert Model for Indonesian Language Muharam, Aldi Fahluzi; Gerhana, Yana Aditia; Maylawati, Dian Sa'adillah; Ramdhani, Muhammad Ali; Rahman, Titik Khawa Abdul
JISKA (Jurnal Informatika Sunan Kalijaga) Vol. 10 No. 1 (2025): January 2025
Publisher : UIN Sunan Kalijaga Yogyakarta

DOI: 10.14421/jiska.2025.10.1.110-121

Abstract

This study investigates the effectiveness of the proposed Bert2Bert and Bert2Bert+Xtreme models in improving abstractive multi-document summarization for the Indonesian language. The research uses the transformer architecture to develop the two proposed models. It utilizes the Liputan6 data set, which comprises news articles with reference summaries spanning 10 years, from October 2000 to October 2010, and is commonly used in automatic text summarization studies. Model evaluation using ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore indicates that the proposed models show a slight improvement over previous research models, with Bert2Bert performing better than Bert2Bert+Xtreme. Despite the challenges posed by limited reference summaries for Indonesian documents, content-based analysis using readability metrics, including FKGL, GFI, and the Dwiyanto Djoko Pranowo formula, revealed that the summaries produced by Bert2Bert and Bert2Bert+Xtreme are at a moderate readability level, meaning they are suitable for mature readers and align with the news portal's target audience.
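Of the evaluation metrics mentioned, ROUGE-1 is simple enough to sketch: it measures unigram overlap between a candidate summary and a reference, reported here as an F1 score. The Indonesian sample sentences are invented for illustration:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: unigram overlap between candidate and reference,
    combining overlap precision and recall."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())  # clipped counts of shared unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("banjir melanda jakarta hari ini",
                "banjir besar melanda jakarta"))  # ≈ 0.667
```

ROUGE-2 and ROUGE-L work analogously on bigrams and longest common subsequences, while BERTScore replaces exact token matching with embedding similarity.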
Development of Religious Moderation Learning Media Based on Augmented Reality Using Fast Corner Detection Algorithm Rahayu, Ayu Puji; Somantri, E. Aris; Gerhana, Yana Aditia
Khazanah Pendidikan Islam Vol. 7 No. 1 (2025): Khazanah Pendidikan Islam
Publisher : UIN Sunan Gunung Djati Bandung

DOI: 10.15575/kpi.v7i1.40835

Abstract

The religious moderation program aims to guide religious life in Indonesia's diverse society toward a religious society and nation that lives in peace, tolerance, and harmony. Internalizing religious moderation must be implemented in the madrasah environment to give students a strong understanding, so that they can grasp and apply the values of religious moderation in everyday life. Therefore, this study develops Augmented Reality (AR)-based religious moderation learning media for madrasa students. The research method is a modified system development life cycle (SDLC) model. Data were collected through tests: a pretest-posttest design measures the product's effectiveness on learning outcomes, while observations and interviews strengthen the field data. The participants were elementary madrasah (MI) students in West Bandung Regency and Garut Regency, with 160 participants from seven schools; locations were selected based on the researcher's access and reach. The results indicate that AR-based religious moderation learning media significantly improved student learning outcomes. The difference test shows an Asymp. Sig. (2-tailed) value of 0.000, so Ho is rejected and Ha is accepted: there is a significant difference in student learning outcomes before and after the treatment with AR learning media, with an average pre-test score of 64.94 and an average post-test score of 93.50. The study therefore concludes that Augmented Reality learning media can improve student learning outcomes.
Klasifikasi Penyakit Daun Kopi Arabika Berbasis Gambar Menggunakan Model Convolutional Neural Networks DenseNet121 [Image-Based Classification of Arabica Coffee Leaf Diseases Using the DenseNet121 Convolutional Neural Network Model] Solehudin, Muhammad Alwy; Gerhana, Yana Aditia; Taufik, Ichsan
Journal of Information System Research (JOSH) Vol 6 No 2 (2025): Januari 2025
Publisher : Forum Kerjasama Pendidikan Tinggi (FKPT)

DOI: 10.47065/josh.v6i2.6407

Abstract

Detection of Arabica coffee leaf diseases is crucial for improving the quality and yield of coffee crops. This study aims to apply the DenseNet121 Convolutional Neural Network model to identify three types of diseases on Arabica coffee leaves, namely Rust, Phoma, and Miner. The data used consists of images of Arabica coffee leaves, which are divided into training, validation, and test sets. The model was trained using the Adamax optimizer with hyperparameters such as a maximum of 30 epochs and a batch size of 32. During training, the model achieved a validation accuracy of 98.86% before being stopped by the early stopping callback at epoch 28 to prevent overfitting. Model evaluation using a confusion matrix resulted in 97% accuracy on the test data, with excellent precision, recall, and F1-score values for most categories, particularly for the Healthy, Miner, and Phoma classes. The Rust class showed lower recall due to data imbalance in the test set. The results of this study demonstrate that the DenseNet121 model is reliable for detecting diseases on Arabica coffee leaves with high accuracy and provides an important contribution to the technology of plant health monitoring, which can assist farmers in early detection and improve coffee crop productivity.
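The confusion-matrix evaluation mentioned above reduces to per-class precision, recall, and F1; a minimal sketch on invented labels (not the paper's test data):

```python
def per_class_metrics(y_true, y_pred, labels):
    """Precision, recall, and F1 per class, as derived from a
    confusion matrix of actual vs predicted labels."""
    out = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (prec, rec, f1)
    return out

# Toy labels for the four classes the abstract names.
y_true = ["rust", "rust", "phoma", "miner", "healthy"]
y_pred = ["rust", "phoma", "phoma", "miner", "healthy"]
print(per_class_metrics(y_true, y_pred, ["rust", "phoma", "miner", "healthy"]))
```

This also illustrates the abstract's observation about class imbalance: a class with few test samples (here, "rust") can have its recall dragged down by a single misclassification.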
Implementasi Model CNN ResNet50V2 untuk Klasifikasi Pneumonia pada Citra X-Ray [Implementation of the ResNet50V2 CNN Model for Pneumonia Classification on X-Ray Images] Anwar, Muhammad Afian; Gerhana, Yana Aditia; Syaripudin, Undang
SMATIKA JURNAL : STIKI Informatika Jurnal Vol 15 No 01 (2025): SMATIKA Jurnal : STIKI Informatika Jurnal
Publisher : LPPM STIKI MALANG

DOI: 10.32664/smatika.v15i01.1538

Abstract

The utilization of technology to build models that can classify pneumonia medical images automatically is needed for early diagnosis. This study implements a Convolutional Neural Network (CNN) with the ResNet50V2 architecture, which has been shown to achieve high accuracy in medical image classification. The model adopts a deep yet efficient residual architecture, which enables training deeper networks without suffering from the vanishing gradient problem. The study went through four main stages: collecting pneumonia and normal X-ray images, data pre-processing (including set division, transformation, and augmentation), modeling with CNN and hyperparameter tuning, and model evaluation. Evaluation used accuracy, F1-score, and confusion matrix metrics. The CNN model with ResNet50V2 as the backbone achieved 97% accuracy, showing excellent performance in differentiating pneumonia from normal images despite a small number of misclassifications. Although the model showed impressive results, challenges such as potential misclassification of unclear or ambiguous images remain. Compared to previous approaches, this model offers advantages in accuracy and processing efficiency thanks to the deeper and more sophisticated ResNet50V2. These advantages are expected to improve the precision of automated diagnosis in future medical applications.