Articles

HYPERPARAMETER OPTIMIZATION OF CONVOLUTIONAL NEURAL NETWORK FOR FLOWER IMAGE CLASSIFICATION USING GRID SEARCH ALGORITHMS Wibowo, Della Aulia; Suciati, Nanik; Yuniarti, Anny
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 1 (2024): JUTIF Volume 5, Number 1, February 2024
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.1.1798

Abstract

Indonesia is a country with a tropical climate that greatly affects agriculture. Flowering plants are estimated to account for 25% of the species in Indonesia, comprising 416 families, 13,164 genera, and 295,383 species. Classifying flower species is a time- and knowledge-intensive task. Convolutional Neural Networks (CNN) have revolutionized the field of computer vision by improving the accuracy of image, text, voice, and video recognition. This research focuses on developing a CNN model for Indonesian flower images by optimizing hyperparameters with a grid search algorithm, comparing the results against default parameters and across two CNN architectures, VGG16 and MobileNetV2. The aim is to improve the classification accuracy of Indonesian flower images through hyperparameter optimization. Grid search is designed to find the best value of each parameter, and its exhaustive search over parameter combinations, together with data augmentation, yielded MobileNetV2 as the best model with a test accuracy of 89.62%.
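The grid search described above exhaustively tries every hyperparameter combination and keeps the best-scoring one. A minimal sketch follows; the parameter names, value ranges, and the toy scoring function are illustrative assumptions, not the paper's actual search space:

```python
from itertools import product

# Hypothetical search space; the paper's actual grids are not listed here.
param_grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32],
    "dropout": [0.2, 0.5],
}

def grid_search(grid, evaluate):
    """Exhaustively try every combination and keep the best-scoring one."""
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # stands in for CNN training + validation
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective standing in for a model's validation accuracy:
toy = lambda p: -abs(p["learning_rate"] - 1e-3) - abs(p["dropout"] - 0.5)
best, score = grid_search(param_grid, toy)
```

In practice `evaluate` would train a VGG16 or MobileNetV2 model with the given parameters and return its validation accuracy; the cost is one full training run per combination, which is why grid search is usually restricted to a few coarse values per parameter.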
Ground Coverage Classification in UAV Image Using a Convolutional Neural Network Feature Map Maulidiya, Erika; Fatichah, Chastine; Suciati, Nanik; Sari, Yuslena
Journal of Information Systems Engineering and Business Intelligence Vol. 10 No. 2 (2024): June
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.10.2.206-216

Abstract

Background: To understand land transformation at the local level, there is a need to develop new strategies appropriate for land management policies and practices. In geographical research, ground coverage plays an important role, particularly in planning, physical-geography exploration, environmental analysis, and sustainable planning. Objective: The research aimed to analyze land cover using vegetation density data collected through remote sensing; specifically, the data assisted in land processing and land cover classification based on vegetation density. Methods: Before classification, images were preprocessed using the feature extraction stages of two Convolutional Neural Network (CNN) architectures, ResNet-50 and DenseNet-121. Several classification algorithms were then applied: Decision Tree, Naïve Bayes, K-Nearest Neighbor, Random Forest, Support Vector Machine (SVM), and eXtreme Gradient Boosting (XGBoost). Results: Comparison between methods showed that the CNN approach obtained better results than classical machine learning. Using CNN features, the SVM method with ResNet-50 feature extraction achieved an impressive accuracy of 85%, and SVM with DenseNet-121 feature extraction reached 81%. Conclusion: Comparing CNN and machine learning overall, the ResNet-50 architecture performed best, achieving 92%. Among the machine learning methods, SVM performed best, with an 84% accuracy rate using ResNet-50 features, followed by XGBoost at 82% with the same features; with DenseNet-121 features, SVM and XGBoost both produced the best results, at 81%.
Keywords: Classification, CNN Architecture, Feature Extraction, Ground Coverage, Vegetation Density.
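One of the classical classifiers applied to the CNN feature vectors above is K-Nearest Neighbor. A self-contained sketch, with tiny 2-D toy vectors standing in for real ResNet-50/DenseNet-121 embeddings and invented class names:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training vectors."""
    dists = sorted(
        (math.dist(v, x), label) for v, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "feature vectors" standing in for CNN embeddings of UAV tiles;
# the class names are hypothetical vegetation-density labels.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = ["sparse", "sparse", "dense", "dense"]
pred = knn_predict(X, y, (0.85, 0.85), k=3)
```

Real CNN embeddings are hundreds or thousands of dimensions, but the nearest-neighbor vote works the same way; the other classifiers in the study (SVM, Random Forest, XGBoost) consume the identical feature vectors.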
THE EFFECT OF FACIAL ACCESSORY AUGMENTATION ON THE ACCURACY OF DEEP LEARNING-BASED FACIAL RECOGNITION SYSTEMS Hidayat, Ahmad Nur; Suciati, Nanik; Saikhu, Ahmad
JURTEKSI (jurnal Teknologi dan Sistem Informasi) Vol. 11 No. 3 (2025): Juni 2025
Publisher : Lembaga Penelitian dan Pengabdian Kepada Masyarakat (LPPM) STMIK Royal Kisaran

DOI: 10.33330/jurteksi.v11i3.3846

Abstract

Abstract: Face recognition based on deep learning has become an important technology in many areas. However, these systems often face challenges in real-world conditions, such as when the face is partially covered by accessories like masks or glasses. This study evaluates the effect of data augmentation that adds facial accessories (masks, glasses, and a combination of both), and of geometric augmentation, on the accuracy of face recognition systems. Three datasets were used: the original dataset (category 1), the dataset with facial-accessory augmentation (category 2), and the dataset with geometric augmentation (category 3). Augmentation was applied to the training set to increase diversity, followed by face detection using SCRFD and feature extraction with ArcFace. The model was then trained using a Multi-Layer Perceptron (MLP). The results show that facial-accessory augmentation (category 2) significantly improved model accuracy, reaching 99%, while geometric augmentation (category 3) reached 91%. Other evaluation metrics, such as precision, recall, and F1-score, also improved after augmentation. The study concludes that facial-accessory augmentation is more effective than geometric augmentation in improving the accuracy and robustness of face recognition models.
Keywords: augmentation; deep learning; face recognition; glasses.
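The precision, recall, and F1-score reported above are derived from counts of true/false positives and negatives. A minimal sketch of how they are computed, with invented toy labels (not the paper's data):

```python
def binary_metrics(y_true, y_pred, positive):
    """Precision, recall, and F1 for one positive class."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy labels: "match" means the face is recognized as the enrolled identity.
truth = ["match", "match", "other", "match", "other"]
pred  = ["match", "other", "other", "match", "match"]
p, r, f1 = binary_metrics(truth, pred, positive="match")
```

For multi-class face identification these per-class scores are typically averaged (macro or weighted) across identities.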
RadEval: A novel semantic evaluation framework for radiology report Tsaniya, Hilya; Fatichah, Chastine; Suciati, Nanik
International Journal of Advances in Intelligent Informatics Vol 11, No 4 (2025): November 2025
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v11i4.2151

Abstract

The evaluation of automatically generated radiology reports remains a critical challenge, as conventional metrics fail to capture the semantic, clinical, and contextual correctness required for automatic medical analysis. This study proposes RadEval, a semantic-aware evaluation framework for assessing the quality of generated radiology reports. The method integrates domain-specific knowledge and contextual embeddings to evaluate generated reports using a four-level scoring system. Given a reference report and a report predicted from a radiology image, RadEval first extracts relevant medical entities using a fine-tuned biomedical NER model. These entities are normalized through ontology mapping with RadLex concept identifiers to resolve lexical variation. Semantically related entities are then clustered using BioBERT's contextual embeddings to capture deeper semantic similarity. In addition, predicted abnormality tags are incorporated to weight clinically significant terms during score aggregation. The final semantic score is a weighted combination of exact match, ontology match, and contextual similarity, modulated by tag importance. Experiments were conducted on the MIMIC-CXR dataset, which contains over 200,000 report pairs. Comparative evaluations show that RadEval outperforms traditional metrics, achieving an F1-score of 0.69 versus 0.56 for BERTScore, and captures a more precise clinical interpretation of the predicted report relative to the reference. These findings suggest that RadEval provides a more accurate and clinically aligned framework for evaluating medical report generation models.
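The final aggregation step can be sketched as a tag-weighted average of per-entity match signals. The weights, field names, and match values below are hypothetical placeholders; the NER, RadLex mapping, and BioBERT components that would produce them are not reproduced:

```python
# Hypothetical weights; the paper's actual weighting scheme is not shown here.
def semantic_score(entities, w_exact=0.5, w_onto=0.3, w_ctx=0.2):
    """Aggregate per-entity match signals into one report-level score.

    Each entity carries three [0, 1] signals (exact string match,
    ontology-ID match, embedding similarity) plus a clinical-importance
    weight derived from predicted abnormality tags.
    """
    num = den = 0.0
    for e in entities:
        s = w_exact * e["exact"] + w_onto * e["onto"] + w_ctx * e["ctx"]
        num += e["tag_weight"] * s
        den += e["tag_weight"]
    return num / den if den else 0.0

# Two toy entities: one exact match, one that only matches via ontology
# and embedding similarity; the heavier tag weight marks a critical finding.
report = [
    {"exact": 1.0, "onto": 1.0, "ctx": 0.9, "tag_weight": 2.0},
    {"exact": 0.0, "onto": 1.0, "ctx": 0.8, "tag_weight": 1.0},
]
score = semantic_score(report)
```

The point of the tag weight is that missing a clinically significant finding should lower the score more than missing an incidental one.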
FACIAL INPAINTING IN UNALIGNED FACE IMAGES USING GENERATIVE ADVERSARIAL NETWORK WITH FEATURE RECONSTRUCTION LOSS Avin Maulana; Chastine Fatichah; Nanik Suciati
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 18, No. 2, July 2020
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v18i2.a1004

Abstract

Facial inpainting, or face restoration, reconstructs missing regions in face images so that the result still looks like a realistic, original image without any missing region, such that an observer cannot tell whether it is generated or original. Several previous studies have performed inpainting with generative models such as the Generative Adversarial Network (GAN). However, problems arise when the inpainting algorithm is applied to unaligned faces: the result shows spatial inconsistency between the reconstructed region and its adjacent pixels, and the algorithm fails to reconstruct some areas of the face. Therefore, an improved deep-learning-based facial inpainting method is proposed to reduce these effects, using a GAN with an additional feature reconstruction loss and two discriminators. The feature reconstruction loss is obtained using the pretrained VGG network. Evaluation shows that the additional feature reconstruction loss and the two types of discriminators help increase the visual quality of the inpainting result, with higher PSNR and SSIM than previous results.
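A feature reconstruction (perceptual) loss compares activations of a fixed pretrained network, rather than raw pixels, between the real and inpainted image. A minimal sketch assuming the feature maps have already been extracted and flattened; the toy values stand in for VGG activations:

```python
def feature_reconstruction_loss(feat_real, feat_fake):
    """Mean squared error between two flattened feature maps, e.g. the
    activations a fixed pretrained network (VGG in the paper) produces
    for the original image and for the inpainted image."""
    assert len(feat_real) == len(feat_fake)
    return sum((a - b) ** 2
               for a, b in zip(feat_real, feat_fake)) / len(feat_real)

# Toy 4-element "feature maps" standing in for VGG activations.
loss = feature_reconstruction_loss([1.0, 0.5, 0.0, 2.0], [1.0, 0.0, 0.0, 1.0])
```

Because the comparison happens in feature space, the generator is pushed toward perceptually plausible textures instead of blurry pixel-wise averages; this term is added to the usual adversarial losses from the two discriminators.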
MODIFIED LOCAL TERNARY PATTERN WITH CONVOLUTIONAL NEURAL NETWORK FOR FACE EXPRESSION RECOGNITION Syavira Tiara Zulkarnain; Nanik Suciati
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 19, No. 1, Januari 2021
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v19i1.a1031

Abstract

Facial expression recognition (FER) on images with illumination variation and noises is a challenging problem in the computer vision field. We solve this using deep learning approaches that have been successfully applied in various fields, especially in uncontrolled input conditions. We apply a sequence of processes including face detection, normalization, augmentation, and texture representation, to develop FER based on Convolutional Neural Network (CNN). The combination of TanTriggs normalization technique and Adaptive Gaussian Transformation Method is used to reduce light variation. The number of images is augmented using a geometric augmentation technique to prevent overfitting due to lack of training data. We propose a representation of Modified Local Ternary Pattern (Modified LTP) texture image that is more discriminating and less sensitive to noise by combining the upper and lower parts of the original LTP using the logical AND operation followed by average calculation. The Modified LTP texture images are then used to train a CNN-based classification model. Experiments on the KDEF dataset show that the proposed approach provides a promising result with an accuracy of 81.15%.
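The LTP split into upper and lower binary codes, which the paper then fuses and averages, can be sketched for a single 3x3 neighborhood. The neighbor ordering and threshold are illustrative; the abstract does not specify the exact operands of the AND-and-average fusion, so only the standard split is shown:

```python
def ltp_codes(neigh, center, t):
    """Split a Local Ternary Pattern into its upper and lower binary codes.

    neigh: the 8 neighborhood pixel values in a fixed clockwise order.
    Upper bit i = 1 if neigh[i] >= center + t; lower bit i = 1 if
    neigh[i] <= center - t; pixels within the +/-t band set neither bit,
    which is what makes LTP less sensitive to noise than LBP.
    """
    upper = lower = 0
    for i, p in enumerate(neigh):
        if p >= center + t:
            upper |= 1 << i
        elif p <= center - t:
            lower |= 1 << i
    return upper, lower

# One 3x3 patch: center intensity 100, threshold t = 10.
up, lo = ltp_codes([120, 95, 80, 100, 130, 89, 111, 105], 100, 10)
```

Applied at every pixel, the upper and lower codes form two texture images; the proposed Modified LTP combines these two images (logical AND followed by averaging, per the abstract) into the single texture image fed to the CNN.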
IMPROVED LIP-READING LANGUAGE USING GATED RECURRENT UNITS Nafa Zulfa; Nanik Suciati; Shintami Chusnul Hidayati
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 19, No. 2, Juli 2021
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v19i2.a1080

Abstract

Lip-reading is one of the most challenging tasks in computer vision, because it demands a large amount of training data and high computation time and power, and must cope with variation in word length. Previous methods, such as Mel Frequency Cepstrum Coefficients (MFCC) with Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) with LSTM, still achieve low accuracy or require long processing times because they rely on LSTM. In this study, we address this problem with an approach that offers high accuracy and low time consumption. In particular, we develop a lip-reading pipeline that combines face detection, lip detection, filtering the amount of data to avoid overfitting due to data imbalance, CNN-based image extraction, MFCC-based voice extraction, and model training with LSTM and Gated Recurrent Units (GRU). Experiments on the Lip Reading Sentences dataset show that our proposed framework obtains higher accuracy when the input array dimension is deep, and lower time consumption, compared to the state of the art.
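The GRU's speed advantage over the LSTM comes from having only two gates and no separate cell state. A scalar sketch of one GRU update step; the weights and input sequence are toy values, and real layers use weight matrices over vectors:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, wz, uz, wr, ur, wh, uh):
    """One scalar GRU update. Compared to an LSTM, there is no separate
    cell state and one fewer gate, which reduces parameters and compute."""
    z = sigmoid(wz * x + uz * h)                 # update gate
    r = sigmoid(wr * x + ur * h)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde             # interpolate old/new

# Run a toy 3-step sequence through the cell with arbitrary weights.
h = 0.0
for x in [0.5, -0.2, 0.8]:
    h = gru_step(x, h, 1.0, 0.5, 1.0, 0.5, 1.0, 0.5)
```

Because the candidate state passes through tanh and the output is a convex combination of old and candidate states, the hidden state stays bounded in (-1, 1), which helps training stability on long lip-movement sequences.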
DETECTION AND CLASSIFICATION OF RED BLOOD CELLS ABNORMALITY USING FASTER R-CNN AND GRAPH CONVOLUTIONAL NETWORKS Amirullah Andi Bramantya; Chastine Fatichah; Nanik Suciati
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 20, No. 1, January 2022
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v19i3.a1118

Abstract

Research in medical imaging, such as the analysis of Red Blood Cell (RBC) abnormalities, can assist laboratories in determining further medical actions. The Convolutional Neural Network (CNN) is a commonly used method for classifying RBC abnormalities in blood cell images. However, CNNs require a large amount of labeled training data, so classifying RBC abnormalities with limited data is a challenge. In this research we explore semi-supervised learning using Graph Convolutional Networks (GCN) to classify RBC abnormalities with a limited number of labeled sample images. The proposed method consists of three stages: extraction of Regions of Interest (ROI) of RBCs from blood images using Faster R-CNN, abnormality labeling, and abnormality classification using GCN. The experiment was conducted on a publicly accessible blood sample image dataset to compare the classification performance of pretrained CNN models (ResNet-101 and VGG-16) and GCN models (ResNet-101 + GCN and VGG-16 + GCN). The experiment showed that the GCN model built on VGG-16 features (VGG-16 + GCN) produced the best accuracy of 95%.
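The core GCN operation propagates each node's features through its graph neighborhood before a linear map and nonlinearity. A deliberately tiny sketch with scalar features and a mean-aggregation variant of the layer (real GCNs use symmetric degree normalization and feature matrices):

```python
def gcn_layer(adj, feats, weight):
    """One simplified graph-convolution layer: average each node's feature
    with its neighbours' (self-loop included), then apply a scalar linear
    map and ReLU. Nodes here would be RBC regions; edges, their similarity."""
    n = len(adj)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or i == j]  # + self-loop
        agg = sum(feats[j] for j in nbrs) / len(nbrs)        # neighborhood mean
        out.append(max(0.0, agg * weight))                   # linear + ReLU
    return out

# 3 cells: nodes 0 and 1 connected, node 2 isolated; one toy feature each.
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
h = gcn_layer(adj, [1.0, 3.0, -2.0], weight=0.5)
```

This neighborhood smoothing is what lets labels on a few annotated cells inform predictions for unlabeled cells connected to them, which is the semi-supervised effect exploited in the paper.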
CLASSIFICATION OF LUNG AND COLON CANCER TISSUES USING HYBRID CONVOLUTIONAL NEURAL NETWORKS Chilyatun Nisa'; Nanik Suciati; Anny Yuniarti
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 22, No. 1, January 2024
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v22i1.a1225

Abstract

Colon and lung cancers are two highly lethal kinds of cancer that can coexist, posing a challenge for accurate diagnosis. While research often concentrates on detecting a single cancer in a specific organ, this study proposes a hybrid machine-learning approach to identify both colon and lung cancers, with the objective of enhancing diagnostic precision. The LC25000 dataset comprises 25,000 color histopathological image samples of lung and colon cell tissues, labeled for the presence or absence of cancer (adenocarcinoma). Image features are extracted using the pretrained VGG-16 model, and the cancer type is identified with three machine-learning classification algorithms: Stochastic Gradient Descent (SGD), Random Forest (RF), and K-Nearest Neighbor (KNN). The models were evaluated with 10-fold cross-validation, with CNN-SGD exhibiting the highest performance: on a scale of 0 to 100, it scored 99.8 for Area Under Curve (AUC) and 98.88 for Classification Accuracy (CA). CNN-RF, whose performance closely follows CNN-SGD, trains 58.3 seconds faster. CNN-KNN ranks last among the evaluated models based on its F1, recall, AUC, and CA scores.
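The 10-fold cross-validation used above partitions the data into ten folds, each serving once as the test split. A minimal index-generation sketch (round-robin assignment; real evaluations typically shuffle and stratify by class first):

```python
def k_fold_indices(n, k=10):
    """Partition n sample indices into k near-equal folds; each fold is
    used once as the test split and the remaining k-1 folds as training."""
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin assignment
    splits = []
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f, fold in enumerate(folds)
                 if f != held_out for i in fold]
        splits.append((sorted(train), test))
    return splits

# 20 toy samples, 10 folds of 2 samples each.
splits = k_fold_indices(20, k=10)
```

Averaging the metric over all ten held-out folds gives a more stable estimate than a single train/test split, at the cost of training the model ten times.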
Evaluation of Synthetic Data Effectiveness using Generative Adversarial Networks (GAN) in Improving Javanese Script Recognition on Ancient Manuscript Muhammad 'Arif Faizin; Nanik Suciati; Chastine Fatichah
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 23, No. 1, January 2025
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v23i1.a1256

Abstract

The imbalance of Javanese script data in ancient manuscript recognition poses a challenge due to the limited availability of data. A potential approach to addressing this issue is the use of Generative Adversarial Networks (GAN). This study evaluates the effectiveness of synthetic data generated using Enhanced Balancing GAN (EBGAN) in mitigating data imbalance. Various evaluation scenarios are conducted, including: (i) assessing the impact of synthetic data as augmentation, (ii) evaluating the sufficiency of synthetic data for recognition models, (iii) analyzing minority class oversampling with different selection strategies, and (iv) evaluating model generalization through cross-validation. Quantitative analysis of the generated synthetic data, based on Fréchet Inception Distance (FID) and Structural Similarity Index (SSIM), as well as visual assessment, indicates that the quality of synthetic data closely resembles real data. Additionally, experimental results demonstrate that combining real and synthetic data improves accuracy, precision, recall, and F1-score. The oversampling strategy for synthetic data has proven effective in meeting the data sufficiency requirements for training recognition models. Meanwhile, selecting minority classes and determining threshold values based on percentage, distribution, and model performance in oversampling can serve as guidelines for enhancing script recognition performance. Compared to previous methods, the use of EBGAN has been shown to produce more diverse synthetic data with better visual quality. However, further research is still needed to optimize GAN performance in supporting script recognition.
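The minority-class oversampling scenario amounts to deciding how many synthetic (EBGAN-generated) images each under-represented class needs. A minimal sketch of that bookkeeping, with hypothetical Javanese-script class labels and a balance-to-majority target (one of several possible threshold strategies the study compares):

```python
from collections import Counter

def oversampling_plan(labels):
    """Return how many synthetic samples per class are needed to bring
    every class up to the majority-class count; classes already at the
    target are omitted."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - c for cls, c in counts.items() if target > c}

# Toy class distribution: "ha" is the majority script class.
plan = oversampling_plan(["ha"] * 5 + ["na"] * 2 + ["ca"] * 3)
```

The GAN would then be asked to generate exactly `plan[cls]` images per minority class before retraining the recognition model on the combined real and synthetic set.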