p-Index (2021 - 2026): 5.639
Articles

Evaluation of Synthetic Data Effectiveness using Generative Adversarial Networks (GAN) in Improving Javanese Script Recognition on Ancient Manuscript
Faizin, Muhammad 'Arif; Suciati, Nanik; Fatichah, Chastine
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 23, No. 1, January 2025
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v23i1.a1256

Abstract

The imbalance of Javanese script data in ancient manuscript recognition poses a challenge due to the limited availability of data. A potential approach to addressing this issue is the use of Generative Adversarial Networks (GANs). This study evaluates the effectiveness of synthetic data generated with Enhanced Balancing GAN (EBGAN) in mitigating data imbalance. Several evaluation scenarios are conducted: (i) assessing the impact of synthetic data as augmentation, (ii) evaluating the sufficiency of synthetic data for recognition models, (iii) analyzing minority-class oversampling with different selection strategies, and (iv) evaluating model generalization through cross-validation. Quantitative analysis of the generated synthetic data, based on Fréchet Inception Distance (FID) and Structural Similarity Index (SSIM), as well as visual assessment, indicates that the synthetic data closely resembles real data. Experimental results further demonstrate that combining real and synthetic data improves accuracy, precision, recall, and F1-score. The synthetic-data oversampling strategy proves effective in meeting the data sufficiency requirements for training recognition models, while selecting minority classes and determining threshold values based on percentage, distribution, and model performance can serve as guidelines for enhancing script recognition performance. Compared to previous methods, EBGAN produces more diverse synthetic data with better visual quality. However, further research is still needed to optimize GAN performance in supporting script recognition.
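As a rough, hypothetical illustration of the oversampling arithmetic behind scenario (iii), deciding how many GAN-generated samples each minority class would need to reach balance, one might compute the per-class deficit against the largest class (the class labels and counts below are invented, not from the paper):

```python
from collections import Counter

def synthetic_counts_for_balance(labels, target=None):
    """Return how many synthetic samples each class needs to reach
    the target count (default: the size of the largest class)."""
    counts = Counter(labels)
    if target is None:
        target = max(counts.values())
    return {cls: max(0, target - n) for cls, n in counts.items()}

# Hypothetical imbalanced set of Javanese script classes.
labels = ["ha"] * 50 + ["na"] * 20 + ["ca"] * 5
print(synthetic_counts_for_balance(labels))  # {'ha': 0, 'na': 30, 'ca': 45}
```

A threshold-based variant, as the abstract suggests, would only oversample classes falling below some percentage of the majority class rather than equalizing all of them.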
Automated Facial Wrinkle Segmentation for Dermatological Assessment Using VGG-Based U-Net with Hybrid Augmentation
Setiawan, Wahyu Fajar; Suciati, Nanik
Jurnal Teknik Informatika (Jutif) Vol. 7 No. 2 (2026): JUTIF Volume 7, Number 2, April 2026
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2026.7.2.5561

Abstract

Manual and automated facial wrinkle segmentation remains challenging due to the fine-grained nature of wrinkles, uneven distribution across facial regions, severe class imbalance (~2% wrinkle pixels), and sensitivity to lighting variations—limiting the reliability of existing dermatological assessment tools. This study aims to evaluate VGG transfer learning with hybrid augmentation strategies for U-Net-based automated facial wrinkle segmentation. Using the FFHQ-Wrinkle dataset comprising 1,000 manually annotated high-resolution images (1024×1024 pixels), this study systematically evaluates three U-Net variants (Baseline, VGG16-based, VGG19-based) across four augmentation strategies: no augmentation, hierarchical image enhancement (CLAHE, gamma correction, bilateral filtering), geometric transformation (rotation, translation, shear, zoom, flip), and hybrid combination. A multi-component loss function integrating Focal Loss, Dice Loss, IoU Loss, and Boundary Loss addresses class imbalance while optimizing both region overlap and edge localization. The proposed VGG19-based U-Net with hybrid augmentation achieves state-of-the-art performance: Dice coefficient of 0.6585, IoU of 0.4970, precision of 0.6186, recall of 0.7344, and Boundary F1 of 0.9185. Key findings demonstrate that VGG19 transfer learning provides +21.54% Dice improvement over Baseline U-Net with 12.7-fold reduction in overfitting, while hybrid augmentation yields +4.87% Dice improvement with +2.24% synergistic gain beyond individual strategies. This research advances automated dermatological tools for precise skin health assessment, reducing subjectivity in clinical evaluations and providing actionable guidelines for practitioners developing automated wrinkle analysis systems.  
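The multi-component loss described above can be sketched in NumPy for intuition. This is a minimal illustration, not the paper's implementation: the Boundary Loss term is omitted and the equal weights are an assumption, since the abstract does not specify them.

```python
import numpy as np

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss: down-weights easy pixels, easing class imbalance."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    pt = np.where(y_true == 1, y_pred, 1 - y_pred)      # prob. of true class
    a = np.where(y_true == 1, alpha, 1 - alpha)
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

def dice_loss(y_true, y_pred, eps=1e-7):
    """1 - Dice coefficient: penalizes poor region overlap."""
    inter = np.sum(y_true * y_pred)
    return float(1 - (2 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps))

def iou_loss(y_true, y_pred, eps=1e-7):
    """1 - IoU: related overlap penalty on intersection over union."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return float(1 - (inter + eps) / (union + eps))

def combined_loss(y_true, y_pred, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms (weights assumed, not from the paper)."""
    return (w[0] * focal_loss(y_true, y_pred)
            + w[1] * dice_loss(y_true, y_pred)
            + w[2] * iou_loss(y_true, y_pred))
```

In practice each term would be written in the training framework's tensor ops so gradients flow; the NumPy form only shows how the terms compose.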
Improving Vegetation Encroachment Detection in Powerline Areas Using EfficientNet-Based U-Net Semantic Segmentation
Jannah, Alissa Velia Royhatul; Suciati, Nanik
Jurnal Teknik Informatika (Jutif) Vol. 7 No. 2 (2026): JUTIF Volume 7, Number 2, April 2026
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2026.7.2.5863

Abstract

Vegetation growing beyond safe limits can threaten the safety and reliability of overhead powerlines and cause financial losses for infrastructure providers, so identifying potential obstructions is crucial. This study proposes the EFF-UNET semantic segmentation technique on the VEPL dataset to identify areas of overlap between vegetation and overhead powerlines by overlaying the outputs of the two segmentation models. Visually, overhead powerlines have a thin pixel structure and are difficult to distinguish from the background or vegetation, while feature extraction in the U-Net encoder can degrade small objects through progressive resolution loss. The baseline U-Net encoder is therefore replaced with members of the EfficientNet family, comparing variants B0 through B7 to find the best model. EfficientNet employs compound scaling to jointly optimize the network's resolution, depth, and width during feature extraction, thereby preserving information integrity during downsampling. Experimental results demonstrate the superiority of EfficientNetB7 through a measured trade-off against the other variants: for vegetation segmentation, the model achieves an IoU of 0.9824, Accuracy of 0.9905, Dice of 0.9911, and Loss of 0.0089; for powerline segmentation, an IoU of 0.9153, Accuracy of 0.9978, Dice of 0.9558, and Loss of 0.0442. Based on these findings, the EFF-UNET model successfully addresses the shortcomings of conventional models in preserving feature representation, improving vegetation and powerline segmentation to produce precise encroachment areas and enabling accurate on-site infrastructure inspections.
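As a minimal sketch (not the paper's method) of how an encroachment area might be derived by overlaying binary vegetation and powerline masks, the snippet below flags vegetation pixels within a small margin of powerline pixels; the square dilation standing in for a clearance margin is an assumption for illustration:

```python
import numpy as np

def encroachment_mask(veg_mask, line_mask, dilate=1):
    """Flag pixels where the (dilated) powerline mask meets vegetation.

    A simple square dilation of the powerline mask approximates a
    clearance margin; the intersection with the vegetation mask is
    the candidate encroachment area.
    """
    h, w = line_mask.shape
    padded = np.pad(line_mask.astype(bool), dilate)
    grown = np.zeros((h, w), dtype=bool)
    # OR together all shifted copies of the mask within the margin.
    for dy in range(-dilate, dilate + 1):
        for dx in range(-dilate, dilate + 1):
            grown |= padded[dilate + dy:dilate + dy + h,
                            dilate + dx:dilate + dx + w]
    return grown & veg_mask.astype(bool)
```

In a real pipeline the two masks would come from the vegetation and powerline segmentation models, and the margin would be calibrated to the required physical clearance at the image's ground resolution.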
Co-Authors: Adhira Riyanti Amanda; Adni Navastara, Dini; Agus Eko Minarno; Agus Priyono; Agus Zainal Arifin; Ahmad Saikhu; Ahmad Syauqi; Akwila Feliciano; Akwila Feliciano Pradiptatmaka; Alam Ar Raad Stone; Aldinata Rizky Revanda; Altriska Izzati Khairunnisa Hermawan; Amelia Devi Putri Ariyanto; Amirullah; Andi Bramantya; Andika Rahman Teja; Anny Yuniarti; Antonius Kevin Wiguna; Ardian Yusuf Wicaksono; Ari Wijayanti; Aris Fanani; Arrie Kurniawardhani; Arsy Bilahi Tama; Ary Mazharuddin Shiddiqi; Arya Yudhi Wijaya; Atika Faradina Randa; Atikah, Luthfi; Avin Maulana; Awangditama, Bangun Rizki; Ayu Kardina Sukmawati; Ayu Septya Maulani; Baso, Budiman; Bryan Nandriawan; Bui, Ngoc Dung; Chastine Fatichah; Chilyatun Nisa'; Damayanti, Putri; Daniel Sugianto; Darlis Herumurti; Davin Masasih; Diana Purwitasari; Dimas Rahman Oetomo; Dini Adni Navastara; Dion Devara Aryasatya; Eko Prasetyo; Eva Yulia Puspaningrum; Evelyn Sierra; Fairuuz Azmi Firas; Faishal Azka Jellyanto; Faizin, Muhammad 'Arif; Fajar Astuti Hermawati; Fandy Kuncoro Adianto; Febri Liantoni; Fiqey Indriati Eka Sari; Fitri Bimantoro; Ginardi, R.V. Hari; Glenaya; Gou Koutaki; Gurat Adillion, Ilham; Hafidz, Abdan; Handayani Tjandrasa; Hani Ramadhan; Haq, Arinal; Hidayat, Ahmad Nur; Hidayati, Shintami Chusnul; Hilya Tsaniya; Imagine Clara Arabella; Imam Kuswardayan; Imam Mustafa Kamal; Irawan Rahardja, Agustinus Aldi; Isye Arieshanti; Jannah, Alissa Velia Royhatul; Januar Adi Putra; Kautsar, Faiz; Keiichi Uchimura; Kevin Christian Hadinata; M. Bahrul Subkhi; Maulidan Bagus A.R; Maulidiya, Erika; Mawaddah, Saniyatul; Miftahol Arifin; Mochammad Zharif Asyam Marzuqi; Muchamad Kurniawan; Muhamad Nasir; Muhammad Alif Satriadhi; Muhammad Farih; Muhammad Fikri Sunandar; Mutmainnah Muchtar; Nafa Zulfa; Ni Luh Made ITS; Novrindah Alvi Hasanah; R. Dimas Adityo; Rachman, Rudy; Rahma Fida Fadhilah; Rangga Kusuma Dinata; Rayssa Ravelia; Rizal A Saputra; Rohman Dijaya; Romario Wijaya; Safhira Maharani; Salim Bin Usman; Salsabiil Hasanah; Sarimuddin; Septiana, Nuning; Setiawan, Wahyu Fajar; Sherly Rosa Anggraeni; Shintami Chusnul Hidayati; Shofiya Syidada; Sjahrunnisa, Anita; Suastika Yulia Riska; Sugianela, Yuna; Surya Fadli Alamsyah; Syavira Tiara Zulkarnain; Tanzilal Mustaqim; Tiara Anggita; Tsaniya, Hilya; Vriza Wahyu Saputra; Wan Sabrina Mayzura; Wibowo, Della Aulia; Wicaksono, Farhan; Wijayanti Nurul Khotimah; Yulia Niza; Yuna Sugianela; Yuslena Sari; Yuwanda Purnamasari Pasrun; Zakiya Azizah Cahyaningtyas