Articles

Found 13 Documents
Journal : JOIV : International Journal on Informatics Visualization

Convolutional Neural Network featuring VGG-16 Model for Glioma Classification
Agus Eko Minarno; Sasongko Yoni Bagas; Munarko Yuda; Nugroho Adi Hanung; Zaidah Ibrahim
JOIV : International Journal on Informatics Visualization Vol 6, No 3 (2022)
Publisher : Politeknik Negeri Padang

DOI: 10.30630/joiv.6.3.1230

Abstract

Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically for brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical treatment and patient safety. So far, the Convolutional Neural Network (CNN), alone or in combination with GA, has given good results. For this reason, this study uses a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers and modifies the dropout layer (using softmax activation) to reduce overfitting and avoid using a large number of hyper-parameters. We also experimented with augmentation techniques to compensate for the limited data. The experiments used The Cancer Imaging Archive (TCIA) - The Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains 520 MRI images from 130 patients of different ailments, grades, races, and ages. The tumor type was Glioma, and the images were divided into grades II, III, and IV, comprising 226, 101, and 193 images, respectively. The data were split 68% for training and 32% for testing. We found that VGG-16 was more effective for brain tumor image classification, with an accuracy of up to 100%.
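A minimal sketch of this kind of VGG-16 transfer-learning setup, assuming Keras/TensorFlow; the input shape, dense layer size, and dropout rate are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: VGG-16 backbone with a dropout + softmax head for
# the three glioma grades (II, III, IV). Hyper-parameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                    # dropout to reduce overfitting
    layers.Dense(3, activation="softmax"),  # grades II, III, IV
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```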
Classification of Malaria Using Convolutional Neural Network Method on Microscopic Image of Blood Smear
Minarno, Agus Eko; Izzah, Tsabita Nurul; Munarko, Yuda; Basuki, Setio
JOIV : International Journal on Informatics Visualization Vol 8, No 3 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3.2154

Abstract

Malaria, a critical global health issue, can lead to severe complications and mortality if not treated promptly. The conventional diagnostic method, a microscopic examination of blood smears, is time-consuming and requires extensive expertise. To address these challenges, computer-assisted diagnostic methods have been explored. Among these, Convolutional Neural Networks (CNN), a deep learning technique, have shown considerable promise for image classification tasks, including the analysis of microscopic blood smear images. In this study, we employ the NIH Malaria dataset, which consists of 27,558 images, to train a CNN model. The dataset is divided into parasitized (malaria-infected) and uninfected classes. The CNN architecture designed for this study includes three convolutional layers and two fully connected layers. We compare the performance of this model with that of a pre-trained VGG-16 model to determine the most effective approach for malaria diagnosis. The proposed CNN model demonstrates high accuracy, achieving a value of 96.81%. Furthermore, it records a recall of 0.97, a precision of 0.97, and an F1-score of 0.97. These metrics indicate robust performance, outperforming previous studies and highlighting the model's potential for accurate malaria diagnosis. This study underscores the potential of CNN in medical image classification and supports its implementation in clinical settings to enhance diagnostic accuracy and efficiency. The findings suggest that with further refinement and validation, such models could significantly improve the speed and reliability of malaria diagnostics, ultimately aiding better disease management and patient outcomes.
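A minimal sketch of the described three-convolutional / two-fully-connected architecture for the binary parasitized-vs-uninfected task, assuming Keras; filter counts and input size are assumptions, not the paper's exact settings.

```python
# Hypothetical sketch of a 3-conv / 2-dense CNN for binary malaria
# classification; layer widths are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),   # first fully connected layer
    layers.Dense(1, activation="sigmoid"),  # second: binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```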
Classification of Dermoscopic Images Using CNN-SVM
Minarno, Agus Eko; Fadhlan, Muhammad; Munarko, Yuda; Chandranegara, Didih Rizki
JOIV : International Journal on Informatics Visualization Vol 8, No 2 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.2.2153

Abstract

Traditional machine learning methods like GLCM and ABCD rules have long been employed for image classification tasks. However, they come with inherent limitations, primarily the need for manual feature extraction. This process is time-consuming and relies on expert domain knowledge, making it challenging for non-experts to use effectively. Deep learning methods, specifically Convolutional Neural Networks (CNN), have revolutionized image classification by automating feature extraction. CNNs can learn hierarchical features directly from raw pixel values, eliminating the need for manual feature engineering. Despite their powerful capabilities, CNNs have limitations, mainly when working with small image datasets: they may overfit the data or struggle to generalize effectively. In light of these considerations, this study adopts a hybrid approach that leverages the strengths of both deep learning and traditional machine learning. A CNN acts as an automatic feature extractor, allowing the model to capture meaningful image patterns. These extracted features are then fed into a Support Vector Machine (SVM) classifier, known for its efficiency and effectiveness on small datasets. The results of this study are encouraging, with an accuracy of 0.94 and an AUC score of 0.94. Notably, these metrics outperform Abbas' previous research by a significant margin, underscoring the effectiveness of the hybrid CNN-SVM approach. This research reinforces that SVM classifiers are well-suited for tasks involving limited image data, yielding improved classification accuracy and highlighting the potential for broader applications in image analysis.
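A minimal sketch of the hybrid idea, assuming Keras and scikit-learn: a CNN backbone produces feature vectors, and an SVM is fit on them. The VGG16 backbone choice, shapes, and the `X_train`/`y_train` names are assumptions for illustration, not the paper's exact pipeline.

```python
# Hypothetical sketch of the CNN-SVM hybrid: CNN features -> SVM classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.svm import SVC

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))  # global-average-pooled features

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array (N, 224, 224, 3) -> feature matrix (N, 512)."""
    return backbone.predict(images, verbose=0)

# X_train, y_train, X_test are assumed preprocessed arrays/labels:
# svm = SVC(kernel="rbf")
# svm.fit(extract_features(X_train), y_train)
# preds = svm.predict(extract_features(X_test))
```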
Classification of Diabetic Retinopathy Based on Fundus Image Using InceptionV3
Minarno, Agus Eko; Bagaskara, Andhika Dwija; Bimantoro, Fitri; Suharso, Wildan
JOIV : International Journal on Informatics Visualization Vol 9, No 1 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.1.2155

Abstract

Diabetic Retinopathy (DR) is a progressive eye condition that can lead to blindness, particularly affecting individuals with diabetes. It is commonly diagnosed through the examination of digital retinal images, with fundus photography recognized as a reliable method for identifying abnormalities in the retina of diabetic patients. However, manual diagnosis based on these images is time-consuming and labor-intensive, necessitating automated systems to enhance both accuracy and efficiency. Recent advances in machine learning, particularly image classification systems, provide a promising avenue for streamlining the diagnostic process. This study aims to classify DR using Convolutional Neural Networks (CNN), specifically employing the InceptionV3 architecture to optimize performance. It also explores the impact of different preprocessing and data augmentation techniques on classification accuracy, focusing on the APTOS 2019 Blindness Detection dataset. Preprocessing and data augmentation are crucial steps in deep learning that enhance model generalization and mitigate overfitting, and both are applied here to train the InceptionV3 model. Results indicate that the model achieves 86.5% accuracy on training data and 82.73% accuracy on test data, significantly improving performance compared to models trained without data augmentation. Additionally, the findings demonstrate that the absence of data augmentation leads to overfitting, as evidenced by performance graphs that show a marked decline in test accuracy relative to training accuracy. This research highlights the importance of tailored preprocessing and augmentation techniques in improving the robustness and predictive capability of CNN models for DR detection.
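A minimal sketch of InceptionV3 training with on-the-fly augmentation, assuming Keras; the augmentation parameters and the 5-class head (the APTOS 2019 severity grades 0-4) are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: InceptionV3 with Keras preprocessing-layer augmentation.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))

inputs = layers.Input(shape=(299, 299, 3))
x = augment(inputs)             # these layers are active only during training
x = base(x)
outputs = layers.Dense(5, activation="softmax")(x)  # DR severity grades 0-4
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```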
Classification of Skin Cancer Images Using Convolutional Neural Network with ResNet50 Pre-trained Model
Minarno, Agus Eko; Lusianti, Aaliyah; Azhar, Yufis; Wibowo, Hardianto
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.2156

Abstract

The skin, the largest organ of the human body, plays a pivotal role in safeguarding us against harsh environmental elements. It acts as a barrier, shielding delicate internal systems from the heat of the sun and the harmful effects of prolonged light exposure. Nevertheless, it is not impervious to damage, especially when subjected to excessive sunlight and the ultraviolet (UV) radiation that accompanies it. Prolonged UV exposure can damage skin cells, potentially setting the stage for the development of skin cancer, a condition that demands prompt and accurate diagnosis for effective treatment. To address the need for swift and precise skin cancer diagnosis, deep learning systems have been designed and trained to classify skin lesions autonomously with remarkable accuracy. Among deep learning techniques, the Convolutional Neural Network (CNN) architecture is a strong choice for image classification tasks. In this study, a CNN-based model was constructed to classify skin lesions, leveraging a pre-trained ResNet50 architecture to augment its capabilities. The ResNet50 model was trained to classify seven distinct skin lesion types and surpassed the performance of its predecessor, MobileNet. The overall accuracy of the ResNet50 model stands at 87.42% when classifying the seven classes in the dataset, while the Top-2 and Top-3 accuracy rates reach 95.52% and 97.86%, respectively, illustrating the model's precision and reliability.
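A minimal sketch of a ResNet50 transfer-learning classifier that also reports Top-2 and Top-3 accuracy, assuming Keras; the head size and input shape are illustrative assumptions.

```python
# Hypothetical sketch: ResNet50 backbone, seven lesion classes, with
# Top-1/Top-2/Top-3 accuracy metrics as reported in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.Dense(7, activation="softmax"),  # seven skin-lesion classes
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=[
        "accuracy",
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top2"),
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3"),
    ],
)
```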
Leveraging ESRGAN for High-Quality Retrieval of Low-Resolution Batik Pattern Datasets
Azhar, Yufis; Marthasari, Gita Indah; Regata Akbi, Denar; Minarno, Agus Eko; Haqim, Gilang Nuril
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.3202

Abstract

As one of the world's cultural heritages originating in Indonesia, batik is an interesting research subject, including in the realm of image retrieval. One of the factors hindering the search for batik images relevant to a user's query image is the low resolution of the batik images in the dataset. This affects the dataset's quality, which in turn impacts the model's performance in recognizing batik motifs with complex details and textures. To address this problem, this study proposes using the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) method to increase the resolution of batik images. By increasing the resolution, ESRGAN is expected to clarify the details and textures of the initially low-resolution image so that these features can be extracted better. This study shows that ESRGAN can produce high-resolution batik images while maintaining the details of the batik motif itself, as confirmed by the resulting images' high PSNR and low MSE values. Implementing ESRGAN also improves the performance of the image retrieval system, with precision and average precision gains of 1-5% compared to methods that do not use it.
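A minimal sketch of the MSE and PSNR quality metrics used to judge the super-resolved images, computed from their standard definitions for 8-bit images (peak value 255); this is generic metric code, not the paper's implementation.

```python
# MSE and PSNR from their textbook definitions for uint8 images.
import numpy as np

def mse(reference: np.ndarray, restored: np.ndarray) -> float:
    """Mean squared error between two same-shaped uint8 images."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to reference."""
    err = mse(reference, restored)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```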
Batik Classification using Microstructure Co-occurrence Histogram
Minarno, Agus Eko; Soesanti, Indah; Nugroho, Hanung Adi
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2152

Abstract

Batik Nitik is a distinctive form of batik originating from the culturally rich region of Yogyakarta, Indonesia. What sets it apart from other batik styles is its remarkable motif similarity, a characteristic that often poses a considerable challenge when attempting to distinguish one design from another. To address this challenge, this research classifies Batik Nitik using an approach that combines the microstructure histogram and gray level co-occurrence matrix (GLCM) techniques, collectively referred to as the Microstructure Co-occurrence Histogram (MCH). The MCH method offers a multi-faceted approach to feature extraction, simultaneously capturing color, texture, and shape attributes, thereby generating a set of local features that faithfully represent the intricate details found in Batik Nitik imagery. In parallel, the GLCM method extracts robust texture features by employing statistical measures to portray the subtle nuances within these batik patterns. Nevertheless, merely fusing microstructure and GLCM features does not inherently guarantee superior classification performance, so this paper examines many feature fusion scenarios between microstructure and GLCM to pinpoint the configuration that yields the most accurate results. The dataset used consists of 960 Batik Nitik samples comprising 60 categories. The classifiers employed in this study are K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT), Naïve Bayes (NB), and Linear Discriminant Analysis (LDA). Based on the experimental results, the fusion of microstructure and GLCM features with the LDA classifier yields the best performance compared to the other scenarios and classifiers.
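A minimal sketch of the GLCM half of this pipeline and the feature-fusion step, assuming scikit-image (0.19+) and scikit-learn; `microstructure_histogram` is a placeholder name for the paper's other descriptor, not a real library function.

```python
# Hypothetical sketch: GLCM texture statistics, fused with a (precomputed)
# microstructure histogram, classified with LDA.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def glcm_features(gray_image: np.ndarray) -> np.ndarray:
    """gray_image: 2-D uint8 array -> vector of GLCM texture statistics."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Fusion: concatenate microstructure-histogram and GLCM feature vectors.
# fused = np.hstack([microstructure_histogram(img), glcm_features(img)])
# clf = LinearDiscriminantAnalysis().fit(X_train_fused, y_train)
```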
Enhanced BatikGAN SL Model for High-Quality Batik Pattern Generation
Minarno, Agus Eko; Akbi, Denar Regata; Munarko, Yuda
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.3096

Abstract

Batik represents one of the most prominent traditional cultural forms in Indonesia, serving not only as an art form but also as a symbol of cultural identity and heritage. The creation of intricate and unique Batik patterns is a highly skilled craft passed down through generations, yet modern efforts to innovate and enhance Batik designs face significant challenges. Specifically, there is a growing demand for high-quality Batik patterns that maintain the aesthetic and cultural value of traditional motifs while incorporating modern design elements. This study addresses these challenges by introducing an enhanced BatikGAN SL model that leverages local features. The model's performance was evaluated on the Batik Nitik dataset, which consists of 126 Batik motifs collected from artisans in Yogyakarta, a region renowned for its rich Batik traditions; this dataset provides a robust testing ground representing a diverse array of motifs and styles. In comparative evaluations, the enhanced BatikGAN SL model outperformed not only its predecessor but also models utilizing histogram-equalized datasets, which are often employed to improve image contrast. Key metrics, including a Fréchet Inception Distance (FID) score of 20.087, a Peak Signal-to-Noise Ratio (PSNR) of 25.665, and a Structural Similarity Index Measure (SSIM) of 0.918, demonstrate significant improvements in both the visual and technical quality of the generated Batik patterns. These metrics indicate that the proposed model produces patterns with more precise details, better contrast, and higher overall image fidelity than previous approaches.
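A minimal sketch of the PSNR and SSIM checks reported for the generated patterns, assuming scikit-image (0.19+) and uint8 RGB inputs; FID additionally requires an Inception network and is omitted here. This is generic metric code, not the paper's evaluation script.

```python
# Library implementations of the PSNR and SSIM quality metrics.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def pattern_quality(reference: np.ndarray, generated: np.ndarray):
    """Return (PSNR in dB, SSIM in [0, 1]) for a generated batik pattern."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```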
Batik Image Representation using Multi Texton Co-occurrence Histogram
Minarno, Agus Eko; Soesanti, Indah; Nugroho, Hanung Adi
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.3095

Abstract

This paper introduces a novel approach to batik image representation using the texton-based and statistical Multi Texton Co-occurrence Histogram (MTCH). The MTCH framework is leveraged as a robust batik image descriptor, capable of encapsulating a comprehensive range of visual features, including the intricate interplay of color, texture, shape, and statistical attributes. The research extensively evaluates the effectiveness of MTCH through its application on two well-established public batik datasets, namely Batik 300 and Batik Nitik 960. These datasets serve as benchmarks for assessing the performance of MTCH in both classification and image retrieval tasks. In the classification domain, four distinct scenarios were explored, employing various classifiers: K-Nearest Neighbors (K-NN), Support Vector Machine (SVM), Decision Tree (DT), and Naïve Bayes (NB). Each classifier was rigorously tested to determine its efficacy in correctly identifying batik patterns based on the MTCH descriptors. The image retrieval tasks, in turn, were conducted using several distance metrics, including Euclidean distance, City Block, Bray-Curtis, and Canberra, to gauge retrieval accuracy and the robustness of the MTCH framework in matching similar batik images. The empirical results underscore the superior performance of the MTCH descriptor across all tested scenarios. The evaluation metrics, including accuracy, precision, and recall, indicate that MTCH not only achieves high classification performance but also excels in retrieving images with high similarity to the query. These findings suggest that MTCH is a highly effective tool for batik image analysis, offering significant potential for applications in cultural heritage preservation, textile pattern recognition, and automated batik classification systems.
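A minimal sketch of descriptor-based retrieval under the four distance metrics named above, assuming SciPy; the descriptors are assumed to be precomputed MTCH vectors, and the function names here are illustrative.

```python
# Hypothetical sketch: rank database descriptors by distance to the query
# under Euclidean, City Block, Bray-Curtis, or Canberra metrics.
import numpy as np
from scipy.spatial import distance

METRICS = {
    "euclidean": distance.euclidean,
    "cityblock": distance.cityblock,
    "braycurtis": distance.braycurtis,
    "canberra": distance.canberra,
}

def retrieve(query: np.ndarray, database: np.ndarray,
             metric: str = "canberra", top_k: int = 10) -> np.ndarray:
    """Return indices of the top_k database descriptors closest to the query."""
    dist_fn = METRICS[metric]
    dists = np.array([dist_fn(query, d) for d in database])
    return np.argsort(dists)[:top_k]
```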
Classification of Malaria Cell Image using Inception-V3 Architecture
Minarno, Agus Eko; Aripa, Laofin; Azhar, Yufis; Munarko, Yuda
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1301

Abstract

Malaria is a severe global public health problem caused by the bite of infected female Anopheles mosquitoes. It can be cured, but only with early detection and effective, prompt treatment; if not properly diagnosed and treated at an early stage, it can cause severe conditions and, in the worst case, death. This study aims to classify malaria cell images by utilizing the Inception-V3 architecture. Training was conducted on 27,558 malaria cell images through the Inception-V3 architecture under three proposed scenarios. Scenario 1 applies the SGD optimizer, yielding a loss of 0.13 and an accuracy of 0.95; scenario 2 applies the Adam optimizer, yielding a loss of 0.09 and an accuracy of 0.96; and scenario 3 applies the RMSprop optimizer, yielding a loss of 0.08 and an accuracy of 0.97. Across the three scenarios, the results indicate that the Inception-V3 model using the RMSprop optimizer provides the best accuracy (97%) with the lowest loss, compared to scenarios 1 and 2. Further, the test results confirm that the proposed model can classify malaria cells effectively.
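A minimal sketch of the three-scenario optimizer comparison on an Inception-V3 classifier, assuming Keras; the binary head, input size, and the `train_ds`/`val_ds` dataset names are assumptions for illustration.

```python
# Hypothetical sketch: compare SGD, Adam, and RMSprop on the same
# Inception-V3 architecture, one fresh model per scenario.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_model() -> tf.keras.Model:
    base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(299, 299, 3))
    return models.Sequential([base, layers.Dense(1, activation="sigmoid")])

for name, optimizer in [("SGD", tf.keras.optimizers.SGD()),
                        ("Adam", tf.keras.optimizers.Adam()),
                        ("RMSprop", tf.keras.optimizers.RMSprop())]:
    model = build_model()
    model.compile(optimizer=optimizer, loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # data assumed
    print(f"scenario with {name} compiled")
```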
Co-Authors
Abu Abbas Mansyur Achmad Fauzi Saksenata Ahmad Annas Al Hakim Ahmad Faiz, Ahmad Ahmad Heryanto, Ahmad Akbi, Denar Regata Alfarizy, Muhammad Rifal Alfian Yuniarto Anbiya, Dhika Rizki Andhika Pranadipa Andrian Rakhmatsyah Aria Maulana Eka Mahendra Arif Bagus Nugroho Aripa, Laofin Arrie Kurniawardhani arrie kurniawardhany, arrie AULIA ARIF WARDANA Ayu Septya Maulani Bagaskara, Andhika Dwija Basuki, Setio Bayu Yudha Purnomo Bella Dwi Mardiana Chandranegara, Didih Rizki Cokro Mandiri, Mochammad Hazmi Deris Stiawan Dwi Rahayu Dyah Ayu Irianti Eko Budi Cahyono Fachry Abda El Rahman Feny Aries Tanti Firdhansyah Abubekar Fitri Bimantoro Galang Aji Mahesa Gita Indah Marthasari Hanung Adi Nugroho Haqim, Gilang Nuril Hardianto Wibowo Hariyady Hariyady Harmanto, Dani Hasanuddin, Muhammad Yusril Hazmi Cokro Mandiri, Mochammad Ibrahim, Zaidah Indah Soesanti Iqbal Fairus Zamani Irfan, Muhammad irma fitriani Izzah, Tsabita Nurul Lailis Syafa'ah Lailis Syafa’ah Linggar Bagas Saputro Lusianti, Aaliyah Mandiri, Mochammad Hazmi Cokro Moch Ilham Ramadhani Moch. Chamdani Mustaqim Muhammad Afif Muhammad Azhar Ridani Muhammad Hussein Muhammad Nafi Maula Hakim Muhammad Nasrul Tsalatsa Putra Muhammad Nuchfi Fadlurrahman Nanik Suciati Naser Jawas, Naser Nia Dwi Nurul Safitri Noor Aini Mohd Roslan Norizan Mat Diah Prabowo, Christian Ramadhani, Moch Ilham Rangga Kurnia Putra Wiratama Ratna Sari Riksa Adenia Rizalwan Ardi Ramandita Rizka Nurlizah Sabrila, Trifebi Shina Sari, Veronica Retno Sari, Zamah Sasongko Yoni Bagas Setiyo Kantomo, Ilham Sumadi, Fauzi Dwi Setiawan Suryani Rachmawati Suseno, Jody Ririt Krido Toton Dwi Antoko Trifebi Shina Sabrila Tsabitah Ayu Ulfah Nur Oktaviana Veronica Retno Sari Vizza Dwi Wahyu Andhyka Kusuma Wahyu Budi Utomo Wicaksono, Galih Wasis Widya Rizka Ulul Fadilah Wildan Suharso Yesicha Amilia Putri Yoga Anggi Kurniawan Yuda Munarko Yudhono Witanto Yufis Azhar Yundari, Yundari Zaidah Ibrahim Zamah Sari Zamani, Iqbal Fairus