Articles

Found 3 Documents
Journal : JOIV : International Journal on Informatics Visualization

Batik Classification Using Convolutional Neural Network with Data Improvements
Dewa Gede Trika Meranggi; Novanto Yudistira; Yuita Arum Sari
JOIV : International Journal on Informatics Visualization Vol 6, No 1 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.1.716

Abstract

Batik is one of the Indonesian cultural heritages recognized by UNESCO. Batik has a variety of unique and distinctive patterns that reflect the region of origin of the motif, and batik motifs usually have a 'core motif' printed repeatedly on the fabric. Digitization has made batik motif designs more diverse and unique; however, with so many batik motifs spread across the internet, it is difficult for ordinary people to recognize the types of motifs. Automatic classification of batik motifs therefore needs continued development. Classification of batik motifs can be automated with artificial intelligence, and machine learning and deep learning have achieved strong performance in image recognition. In this study, we use deep learning based on a Convolutional Neural Network (CNN) to classify batik motifs automatically. Two datasets are used. The old dataset comes from a public repository and contains 598 images of five motif types. The new dataset updates the old one by replacing its anomalous data and contains 621 images of five motif types; the lereng motif was replaced with pisanbali because lereng samples were difficult to obtain. Each dataset was prepared in three ways: original, balance patch, and patch. We used the ResNet-18 architecture with a pre-trained model to shorten the training time. The best test accuracy was 88.88% ±0.88 on the new dataset with the patch preparation, while on the old dataset the best test accuracy, also with the patch preparation, was 66.14% ±3.7. Data augmentation did not significantly affect accuracy in this study, as the largest accuracy gain was only 1.22%.
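
A minimal sketch in PyTorch of the kind of pipeline this abstract describes: fine-tuning a pre-trained ResNet-18 on five batik motif classes. The folder layout, image size, and training hyperparameters below are assumptions for illustration, not the authors' exact setup; the paper's patch-based data preparation is omitted.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing (assumed); the paper's "patch" split is not reproduced here.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per motif class.
train_set = datasets.ImageFolder("batik/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pre-trained ResNet-18 with a new five-class head, as in the abstract.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative hyperparameters
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
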
Chest X-Ray Images Clustering using Convolutional Autoencoder for Lung Disease Detection
Syafira, Putri Amanda; Yudistira, Novanto; Kurnianingtyas, Diva
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2478

Abstract

In healthcare, medical imaging is commonly used for health assessments, and X-ray imaging is one of the most widely used modalities. The lungs are frequently examined with this modality, and healthcare professionals interpret the resulting X-ray images. However, prolonged interpretation of X-ray results alongside other work activities can lead to errors and potentially invalid disease identification. A system that can classify the detection results from these images is therefore needed to assist healthcare professionals in their tasks. Various methods can be used for this purpose, such as classification, clustering, and segmentation. However, data labeling requires significant resources and costs, especially for large-scale datasets, and one possible solution is an unsupervised learning approach. Clustering, an unsupervised learning method, allows a system to process and understand data patterns without external annotations or manual labeling. This research uses an autoencoder, a subcategory of unsupervised learning, because autoencoders can automatically extract relevant features from the data without label guidance. The dataset consists of 700 chest X-ray images: 500 images showing disease and 200 normal images. The research aims to determine the effectiveness of clustering with an autoencoder model in grouping X-ray images, and two experiments were conducted. In the first experiment, an autoencoder with 18 layers achieved its best performance at K=15 with a Rand index of 76%. In the second experiment, an autoencoder with fewer layers (11 layers) achieved its best performance at K=15 with a Rand index of 87%.
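
A minimal sketch of the clustering setup this abstract describes, under assumed architecture details (the paper's exact 18- and 11-layer autoencoders are not reproduced): a small convolutional autoencoder whose latent codes are clustered with K-means at K=15 and scored with the Rand index against held-out disease/normal labels.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.metrics import rand_score

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: single-channel X-ray -> compact latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the input from the latent features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def cluster_and_score(model, images, labels, k=15):
    # images: (N, 1, H, W) tensor of X-rays; labels are ground truth used only for evaluation.
    model.eval()
    with torch.no_grad():
        _, z = model(images)
    features = z.flatten(start_dim=1).numpy()
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    return rand_score(labels, clusters)
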
Facial Expression Recognition Using Convolutional Neural Network with Attention Module
Khoirullah, Habib Bahari; Yudistira, Novanto; Bachtiar, Fitra Abdurrachman
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.963

Abstract

Human Activity Recognition (HAR) is the recognition of human activities, that is, movements performed by an individual with specific body parts, and one branch of HAR is the recognition of human emotion. Facial emotion is vital in human communication, helping convey emotional states and intentions, so Facial Expression Recognition (FER) is crucial to understanding how humans communicate. Misinterpreting facial expressions can lead to misunderstanding and difficulty reaching common ground. Deep learning can help in recognizing these facial expressions. To improve the performance of facial expression recognition, we propose a ResNet with an attached attention module. This approach performs better than the standalone ResNet because the localization and sampling grid allow the model to learn how to perform spatial transformations on the input image. Consequently, it improves the model's geometric invariance and picks up the expression features of the human face, resulting in better classification. This study shows that the proposed method with attention is better than the one without, with a test accuracy of 0.7789 on the FER dataset and 0.8327 on the FER+ dataset. It concludes that the attention module is essential in recognizing facial expressions with a Convolutional Neural Network (CNN). Further research should, first, add more datasets besides FER and FER+ and, second, add a scheduler to decrease the learning rate during training.
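
A minimal sketch of one way to attach an attention block to a ResNet backbone for expression classification. This is an illustrative simplification, not the paper's exact module: the spatial-attention layer, class count, and backbone choice below are assumptions.

import torch
import torch.nn as nn
from torchvision import models

class SpatialAttention(nn.Module):
    # Produces a 0-1 weight per spatial location and rescales the feature map.
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)  # emphasise the more expressive facial regions

class AttentionResNet(nn.Module):
    def __init__(self, num_classes=8):  # assumed: FER+ is commonly labeled with 8 emotions
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
        self.attention = SpatialAttention(512)  # ResNet-18 final feature channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.attention(self.features(x))
        return self.fc(self.pool(x).flatten(1))
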