Articles

Found 39 Documents

Brahmi Script Classification using VGG16 Architecture Convolutional Neural Network Vincen, Vincen; Samsuryadi, Samsuryadi
Computer Engineering and Applications Journal Vol 11 No 2 (2022)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v11i2.407

Abstract

Many Indonesians have difficulty reading and learning the Brahmi script, a problem that can be addressed by developing supporting software. Previous research has classified Brahmi script but did not produce output that corresponds to the individual letters. Therefore, letter classification is carried out here as part of the process of recognizing Brahmi script. This study uses the Convolutional Neural Network (CNN) method with the VGG16 architecture to classify Brahmi script writing. The model was trained on varying amounts of image data, with the input consisting of 224x224 binary images. The best results achieved an accuracy of 96%, a recall of 98%, and a precision of 98%.
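As an illustration of the approach described above, the sketch below fine-tunes a pretrained VGG16 in Keras for character classification; the class count, classification head, and dataset loading are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: fine-tuning VGG16 for Brahmi character classification.
# The number of classes, the head, and the dataset handling are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 10          # assumed number of Brahmi letter classes

# The paper uses 224x224 binary images; for ImageNet-pretrained weights the
# single channel would typically be replicated to three channels (assumption).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False    # keep the pretrained convolutional features

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data datasets of 224x224 images, e.g. built
# with tf.keras.utils.image_dataset_from_directory(...):
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```
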
American Sign Language Translation to Display the Text (Subtitles) using a Convolutional Neural Network Ramadhan, Muhammad Fajar; Samsuryadi, Samsuryadi; Primanita, Anggina
Engineering, MAthematics and Computer Science Journal (EMACS) Vol. 6 No. 3 (2024): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v6i3.11904

Abstract

Sign language is a harmonious combination of hand gestures, postures, and facial expressions. American Sign Language (ASL) is one of the most widely used and most researched sign languages because it is easier to implement and more commonly applied on a daily basis. A growing body of ASL research aims to make it easier for the speech-impaired to communicate with others. ASL research is now turning to computer vision so that anyone can understand the language through machine learning, and sign language translation technology, especially for ASL, continues to develop using Convolutional Neural Networks. This study uses the DenseNet201 and DenseNet201 PyTorch architectures to translate American Sign Language and display the translation as text on a monitor screen. Four train-test data splits were compared: 90:10, 80:20, 70:30, and 60:30. The best results were obtained with DenseNet201 PyTorch on the 70:30 split, with an accuracy of 0.99732, precision of 0.99737, recall (sensitivity) of 0.99732, specificity of 0.99990, F1-score of 0.99731, and error of 0.00268. The translation of American Sign Language into written form was evaluated using ROUGE-1 and ROUGE-L, resulting in a precision of 0.14286, a recall (sensitivity) of 0.14286, and a corresponding F1-score.
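As a sketch of the ROUGE evaluation step mentioned above, the snippet below scores a predicted subtitle against a reference sentence using Google's rouge-score package; the example sentences are invented placeholders, not data from the study.

```python
# Illustrative sketch of ROUGE-1 / ROUGE-L scoring with the rouge-score package.
# The reference and prediction strings are invented examples.
from rouge_score import rouge_scorer

reference = "hello how are you"        # assumed ground-truth subtitle
prediction = "hello where are you"     # assumed translated output

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)

for name, s in scores.items():
    print(f"{name}: precision={s.precision:.5f} recall={s.recall:.5f} f1={s.fmeasure:.5f}")
```
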
Classification of palm oil fruit ripeness based on AlexNet deep Convolutional Neural Network Kurniawan, Rudi; Samsuryadi, Samsuryadi; Mohamad, Fatma Susilawati; Wijaya, Harma Oktafia Lingga; Santoso, Budi
SINERGI Vol 29, No 1 (2025)
Publisher : Universitas Mercu Buana

DOI: 10.22441/sinergi.2025.1.019

Abstract

The palm oil industry faces significant challenges in accurately classifying fruit ripeness, which is crucial for optimizing yield, quality, and profitability. Manual methods are slow and prone to errors, leading to inefficiencies and increased costs. Deep Learning, particularly the AlexNet architecture, has succeeded in image classification tasks and offers a promising solution. This study explores the implementation of AlexNet to improve the efficiency and accuracy of palm oil fruit maturity classification, thereby reducing costs and production time. We employed a dataset of 1500 images of palm oil fruits, meticulously categorized into three classes: raw, ripe, and rotten. The experimental setup involved training AlexNet and comparing its performance with a conventional Convolutional Neural Network (CNN). The results demonstrated that AlexNet significantly outperforms the traditional CNN, achieving a validation loss of 0.0261 and an accuracy of 0.9962, compared to the CNN's validation loss of 0.0377 and accuracy of 0.9925. Furthermore, AlexNet achieved superior precision, recall, and F1-scores, each reaching 0.99, while the CNN scores were 0.98. These findings suggest that adopting AlexNet can enhance the palm oil industry's operational efficiency and product quality. The improved classification accuracy ensures that fruits are harvested at optimal ripeness, leading to better oil yield and quality. Reducing classification errors and manual labor can also lead to substantial cost savings and increased profitability. This study underscores the potential of advanced deep learning models like AlexNet in revolutionizing agricultural practices and improving industrial outcomes.
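For illustration, a minimal PyTorch sketch of adapting the pretrained torchvision AlexNet to the three ripeness classes is shown below; the optimizer, learning rate, and training step are assumptions, not the paper's exact setup.

```python
# Illustrative sketch only: adapting torchvision's AlexNet to three ripeness
# classes (raw, ripe, rotten). Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 3)   # replace the ImageNet head with 3 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One training step over a single (images, labels) mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```
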
Enhancing Remote Sensing Image Resolution Using Convolutional Neural Networks Supardi, Julian; Samsuryadi, Samsuryadi; Satria, Hadipurnawan; Serrano, Philip Alger M.; Arnelawati, Arnelawati
Jurnal Elektronika dan Telekomunikasi Vol 24, No 2 (2024)
Publisher : National Research and Innovation Agency

DOI: 10.55981/jet.653

Abstract

Remote sensing imagery is a very interesting topic for researchers, especially in the fields of image and pattern recognition. Remote sensing images differ from ordinary images taken with conventional cameras: they are captured by satellites far above the Earth's surface, so objects in them appear small and have low resolution when enlarged. This makes it difficult to detect and recognize objects in remote sensing images, even though doing so is crucial for many aspects of human life. This paper aims to address the problem of remote sensing image quality. The method used is a convolutional neural network. The results show that the proposed method improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) compared to previous methods.
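As a sketch of how the reported PSNR and SSIM metrics can be computed, the snippet below uses scikit-image; the file names are placeholders, not the paper's dataset, and RGB images are assumed.

```python
# Illustrative sketch of the PSNR and SSIM metrics used to evaluate
# super-resolved images. File names are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = io.imread("ground_truth.png")      # high-resolution reference (assumed)
sr = io.imread("super_resolved.png")    # network output (assumed, same shape)

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```
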
PERFORMANCE COMPARISON OF FACENET PYTORCH AND KERAS FACENET METHODS FOR MULTI FACE RECOGNITION Dedy Fitriady Fitriady; Samsuryadi Samsuryadi; Anggina Primanita
Jurnal Media Elektrik Vol. 22 No. 1 (2024): MEDIA ELEKTRIK
Publisher : Jurusan Pendidikan Teknik Elektro

DOI: 10.59562/metrik.v22i1.5899

Abstract

Face recognition has become an important technology in various applications, but challenges arise when multiple faces must be recognized simultaneously in a single image or video frame. This study develops a multi-face recognition system using the Multi-Task Cascaded Convolutional Neural Network (MTCNN) method for face detection, PyTorch Facenet and Keras Facenet for recognition, and a Support Vector Machine (SVM) for classification. Using a dataset of 1000 images from 10 classes, this study compares the performance of PyTorch Facenet and Keras Facenet in terms of speed, memory usage, and accuracy. The results show that PyTorch Facenet is faster, averaging 0.15 seconds per image compared to the 0.86 seconds per image required by Keras Facenet, and more memory-efficient, using 384.19 MB less. However, PyTorch Facenet uses 3% more CPU. In terms of accuracy, PyTorch Facenet shows a more stable and consistent confidence score. In conclusion, PyTorch Facenet proves to be more efficient and reliable for multi-face recognition, although further CPU optimization is needed for real application scenarios.
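For illustration, the sketch below wires together MTCNN detection, FaceNet embeddings, and an SVM classifier using the facenet-pytorch and scikit-learn packages; whether these are the exact implementations used in the study is not stated, and the dataset handling shown here is an assumption.

```python
# Illustrative sketch of an MTCNN -> FaceNet -> SVM pipeline. The packages
# chosen and the placeholder training data are assumptions, not the study's setup.
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from sklearn.svm import SVC
from PIL import Image

mtcnn = MTCNN(image_size=160)                               # face detector
resnet = InceptionResnetV1(pretrained="vggface2").eval()    # FaceNet embedder

def embed(path):
    """Detect the face in an image and return its 512-d FaceNet embedding."""
    face = mtcnn(Image.open(path).convert("RGB"))
    if face is None:
        return None
    with torch.no_grad():
        return resnet(face.unsqueeze(0)).squeeze(0).numpy()

# X: embeddings of training faces, y: identity labels (assumed already prepared)
# clf = SVC(kernel="linear", probability=True).fit(X, y)
# identity = clf.predict([embed("query.jpg")])
```
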
Accuracy of neural networks in brain wave diagnosis of schizophrenia Sukemi, Sukemi; Cahyadi, Gabriel Ekoputra Hartono; Samsuryadi, Samsuryadi; Akbar, Muhammad Agung
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i2.pp1311-1325

Abstract

This research explores the application of a modified deep learning model for electroencephalography (EEG) signal classification in the context of schizophrenia diagnosis. This study aims to utilize the temporal and spatial characteristics of EEG data to improve classification accuracy. Four popular convolutional neural network (CNN) architectures, namely LeNet-5, AlexNet, VGG16, and ResNet-18, are adapted to handle 1D EEG signals. In addition, a hybrid architecture of CNN-gated recurrent unit (GRU) and CNN-long short-term memory (LSTM) is proposed to capture spatial and temporal dynamics. The model was evaluated on a dataset consisting of EEG recordings from 14 patients with paranoid schizophrenia and 14 healthy controls. The results show high accuracy and F1 scores for all modified models, with CNN-LSTM and CNN-GRU achieving the highest performance with scores of 0.96 and 0.97, respectively. Receiver operating characteristic (ROC) curves demonstrate the model's ability to distinguish between healthy controls and schizophrenia patients. The proposed model offers a promising approach for automated schizophrenia diagnosis based on EEG signals, potentially assisting clinicians in early detection and intervention. Future work will focus on larger data sets and explore transfer learning techniques to improve the generalization ability of the model.
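As an illustration of the hybrid architecture described above, the sketch below builds a small 1D CNN-LSTM classifier in Keras; the window length, electrode count, and layer sizes are assumptions rather than the paper's configuration.

```python
# Illustrative sketch of a 1D CNN-LSTM classifier for EEG windows.
# Window length, channel count, and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_SAMPLES = 1000   # assumed time steps per EEG window
N_CHANNELS = 19    # assumed number of EEG electrodes

model = models.Sequential([
    layers.Input(shape=(N_SAMPLES, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                        # temporal dynamics after spatial filtering
    layers.Dense(1, activation="sigmoid"),  # schizophrenia vs. healthy control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```
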
New approach to measuring researcher expertise using cosine similarity algorithm and association rules Firdaus, Ali; Stiawan, Deris; Samsuryadi, Samsuryadi; Budiarto, Rahmat
Bulletin of Electrical Engineering and Informatics Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i5.9506

Abstract

This study proposes a new method for assessing researcher expertise using publication data. Research publications are an important indicator in university ranking systems and have a major impact on institutional reputation, serving as a lens on the expertise and prestige of a university's human resources. Expertise is often difficult to verify objectively; as a result, many people claim to be, or are regarded as, experts without supporting evidence and correct data. To establish a researcher's expertise, it must be demonstrated with valid data through measurable and presentable expertise parameters. The model built here combines cosine similarity and association rule approaches: the publication variables attached to a researcher are fed into the combined algorithms to assess the level of researcher expertise. Key publication attributes were validated as expertise-measurement parameters and identified as the main factors contributing to the measurement of researcher expertise and its impact on university reputation. The model successfully validated researcher expertise up to 72%, which is relevant to its support for university rankings up to 75%.
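As an illustration of the cosine-similarity component, the sketch below compares a researcher's publication titles against an expertise topic profile using TF-IDF vectors; the texts are invented examples, and the association-rule stage is not shown.

```python
# Illustrative sketch of cosine similarity between publication titles and an
# expertise topic profile. The texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

publications = [
    "deep learning for palm oil fruit ripeness classification",
    "convolutional neural networks for remote sensing images",
]
topic_profile = "deep learning image classification"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(publications + [topic_profile])

# Similarity of each publication to the topic profile (last row of the matrix)
scores = cosine_similarity(matrix[:-1], matrix[-1])
print(scores.ravel())
```
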
Advancing palm oil fruit ripeness classification using transfer learning in deep neural networks Kurniawan, Rudi; Samsuryadi, Samsuryadi; Susilawati Mohamad, Fatma; Oktafia Lingga Wijaya, Harma; Santoso, Budi
Bulletin of Electrical Engineering and Informatics Vol 14, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i2.8651

Abstract

The palm oil industry is a significant component of Indonesia’s economy, driven by increasing global demand across various industries. Manual identification of palm oil fruit ripeness is often subjective and labor-intensive, creating a need for a faster and more accurate solution. This study proposes the use of deep learning models based on transfer learning to enhance the classification of palm oil fruit ripeness. Our research evaluates several models, finding that ResNet152V2 achieves the highest performance with superior accuracy and the lowest validation loss. DenseNet201, MobileNet, and InceptionV3 also deliver strong results, each demonstrating an accuracy above 0.99 and a validation loss below 0.04. Cross-validation confirms that ResNet152V2, DenseNet201, and MobileNet maintain high and consistent performance across different folds, showcasing their stability and reliability. This approach provides a promising alternative to manual methods, offering a more efficient and precise means for determining palm oil fruit ripeness, which could significantly benefit the industry by streamlining quality control processes.
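For illustration, the sketch below compares several pretrained Keras backbones under a simple transfer-learning setup with frozen feature extractors; the classification head, input size, and training details are assumptions, not the paper's configuration.

```python
# Illustrative sketch of comparing pretrained Keras backbones via transfer
# learning. Head size, input size, and training data are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet152V2, DenseNet201, MobileNet, InceptionV3

def build(backbone_cls, num_classes=3, input_shape=(224, 224, 3)):
    base = backbone_cls(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                      # transfer learning: freeze the backbone
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])

for backbone in (ResNet152V2, DenseNet201, MobileNet, InceptionV3):
    model = build(backbone)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)
```
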
Improving the Accuracy of Concrete Mix Type Recognition with ANN and GLCM Features Based on Image Resolution Gasim, Gasim; Heriansyah, Rudi; Puspasari, Shinta; Irfani, Muhammad Haviz; Purnamasari, Evi; Permatasari, Indah; Samsuryadi, Samsuryadi
JURNAL INFOTEL Vol 17 No 1 (2025): February 2025
Publisher : LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v17i1.1201

Abstract

Concrete is an essential construction material, widely used for its strength and durability, but identification of its mix type often relies on conventional methods that are inefficient and inaccurate. This research evaluates the effect of image resolution on the accuracy of concrete mix type recognition using an Artificial Neural Network (ANN) with Gray-Level Co-Occurrence Matrix (GLCM) features. Concrete images were analysed at various resolutions: 200 x 200, 300 x 300, 400 x 400, 500 x 500, 600 x 600, and 700 x 700 pixels, with 1,250 training images and 250 test images used for each image size. The experimental results show that higher image resolutions tend to improve recognition accuracy. Image sizes of 200 x 200 and 300 x 300 pixels give low accuracies of 42% and 45% respectively, while 400 x 400 and 500 x 500 pixels increase accuracy to 60.5% and 62.5%. The higher resolutions of 600 x 600 and 700 x 700 pixels produce the highest accuracies of 68% and 70%, respectively. These results indicate that larger image resolutions capture more of the detail and characteristics required for accurate concrete mix type recognition. This research has implications for improving efficiency and consistency in concrete inspection in the construction industry through AI-based image recognition methods.
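As an illustration of the GLCM-plus-ANN pipeline described above, the sketch below extracts GLCM texture features with scikit-image and trains a small scikit-learn neural network; the offsets, angles, and network size are assumptions, not the paper's settings.

```python
# Illustrative sketch of GLCM texture features feeding a small neural network.
# Distances, angles, and the classifier size are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_image):
    """gray_image: 2-D uint8 array. Returns a small GLCM feature vector."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: stacked feature vectors of the training images, y: mix-type labels
# (assumed already prepared)
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X, y)
# prediction = clf.predict([glcm_features(test_image)])
```
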
Co-Authors

Agus Mistiawan; Ahmad Fali Oklilas; Ahmad Heryanto; Akbar, M. Agung; Ali Firdaus; Anna Dwi Marjusalinah; Apit Fathurohman; Apriansyah Putra; Aprilisa, Shinta; Archibald Hutahaean, Jerrel Adriel; Ardina Ariani; Ariani, Ardina; Arnelawati, Arnelawati; Astuti, Dwi Lydia Zuharah; Ayu Luviyanti Tanjung; Azhar Azhar; Bambang Tutuko; Barlian Khasoggi; Buchari, Muhammad Ali; Cahyadi, Gabriel Ekoputra Hartono; Darmawahyuni, Annisa; Darmawijoyo, Darmawijoyo; Dedy Fitriady Fitriady; Deris Stiawan; Desty Rodiah; Dewy Yuliana; Dian Palupi Rini; Duano Sapta Nusantara; Dwi Budi Santoso; Dwi Lydia Zuharah Astuti; Dwi Meylitasari Tarigan; Ermatita -; Erni Erni; Esti Susiloningsih; Fatma Susilawati Mohamad; Firdaus Firdaus; Gasim, Gasim; Hadipurnama Satria; Hadipurnawan Satria; Hasby Rifky; Indah Permatasari; Islami, Anggun; Jambak, Muhammad Ihsan; Jayanti Jayanti; Julian Supardi; Khairun Nisa; Kurniabudi, Kurniabudi; Leni Marlina; Lingga Wijaya, Harma Oktafia; Lintang Auliya Kurdiati; M. Nejatullah Sidqi; Marlina Sylvia; Meryansumayeka Meryansumayeka; Mohamad, Fatma Susilawati; Muhammad Fachrurrozi; Muhammad Haviz Irfani; Muhammad Naufal Rachmatullah; Mukhlis Febriady; Murniati .; Nur Rachmat; Oktafia Lingga Wijaya, Harma; Primanita, Anggina; Purnama, Benni; Purnamasari, Evi; Rahmat Budiarto; Ramadhan, Muhammad Fajar; Ratu Ilma Indra Putri; Rifkie Primartha; Risda Intan Sistyawati; Riszky Pabela Pratiwi; Rizq Khairi Yazid; Rossi Passarella; Rudi Heriansyah, Rudi; Rudi Kurniawan; Saparudin Saparudin; Sapitri, Ade Iriani; Serrano, Philip Alger M.; Sharipuddin, Sharipuddin; Shinta Puspasari; Sisca Puspita Sepriliani; Siti Nurmaini; Sukemi Sukemi; Susilawati Mohamad, Fatma; Sutarno Sutarno; Tri Kurnia Sari; Vincen, Vincen; Willy, Willy; Yesinta Florensia; Yogi Tiara Pratama; Yulia Hapsari; Yundari, Yundari; Zahra Alwi; Zulkardi Zulkardi