Journal : JOIV : International Journal on Informatics Visualization

Offline Handwriting Writer Identification using Depth-wise Separable Convolution with Siamese Network
Suteddy, Wirmanto; Agustini, Devi Aprianti Rimadhani; Atmanto, Dastin Aryo
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2148

Abstract

Offline handwriting writer identification has significant implications for forensic investigations and biometric authentication. Handwriting, as a distinctive biometric trait, provides insights into individual identity. Despite advancements in handcrafted algorithms and deep learning techniques, the persistent challenges of intra-writer variability and inter-writer similarity continue to drive research efforts. In this study, we build on depth-wise separable convolution architectures such as Xception, which proved robust in our previous research comparing deep learning architectures including MobileNet, EfficientNet, ResNet50, and VGG16, where Xception demonstrated minimal training-validation disparities for writer identification. Expanding on this, we use a similarity/dissimilarity-based model, the Siamese Network, incorporating the Xception architecture to identify the writers of offline handwriting. Similarity or dissimilarity is measured as the Manhattan (L1) distance between the representation vectors of each input pair. We train on the publicly available IAM and CVL datasets; our approach achieves accuracy rates of 99.81% for IAM and 99.88% for CVL. Further evaluation revealed only two prediction errors on the IAM dataset, corresponding to 99.75% accuracy, and five prediction errors on CVL, corresponding to 99.57% accuracy. These findings modestly surpass existing results, highlighting the potential of our methodology to enhance writer identification accuracy. This study underscores the effectiveness of integrating the Siamese Network with depth-wise separable convolution and emphasizes the practical implications for supporting writer identification in real-world applications.
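
To make the described architecture concrete, below is a minimal sketch of a Siamese writer-verification model built around an Xception (depth-wise separable convolution) backbone with a Manhattan (L1) distance head, as outlined in the abstract. The input size, embedding dimension, optimizer, and loss are illustrative assumptions, not values taken from the paper.

```python
# Sketch of a Siamese network with an Xception backbone and an L1 distance head.
# IMG_SHAPE, EMBED_DIM, optimizer, and loss are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

IMG_SHAPE = (299, 299, 3)   # assumed; Xception's default input size
EMBED_DIM = 128             # assumed embedding size

def build_encoder():
    """Xception (depth-wise separable convolutions) mapped to a fixed-size embedding."""
    base = applications.Xception(include_top=False, weights="imagenet",
                                 input_shape=IMG_SHAPE, pooling="avg")
    inp = layers.Input(shape=IMG_SHAPE)
    emb = layers.Dense(EMBED_DIM, activation="relu")(base(inp))
    return models.Model(inp, emb, name="encoder")

encoder = build_encoder()
img_a = layers.Input(shape=IMG_SHAPE)
img_b = layers.Input(shape=IMG_SHAPE)
emb_a, emb_b = encoder(img_a), encoder(img_b)

# Manhattan (L1) distance between the two representation vectors.
l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
# Single sigmoid unit: probability that both samples come from the same writer.
out = layers.Dense(1, activation="sigmoid")(l1)

siamese = models.Model([img_a, img_b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

In this kind of setup, the shared encoder maps both handwriting samples to embeddings, and the sigmoid output estimates the probability that the pair was written by the same person.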
End-To-End Evaluation of Deep Learning Architectures for Off-Line Handwriting Writer Identification: A Comparative Study
Suteddy, Wirmanto; Agustini, Devi Aprianti Rimadhani; Adiwilaga, Anugrah; Atmanto, Dastin Aryo
JOIV : International Journal on Informatics Visualization Vol 7, No 1 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.1.1293

Abstract

Identifying writers from their handwriting is particularly challenging for a machine, given that a person's writing can serve as a distinguishing characteristic. Identification using handcrafted features has shown promising results, but intra-class variability between authors still requires further work. Almost all computer vision tasks now use deep learning (DL), and as a result researchers are developing many DL architectures with their respective methods. In addition, feature extraction, previously accomplished with handcrafted algorithms, can now be performed automatically by convolutional neural networks. Given these developments, it is necessary to evaluate which DL architecture suits the problem at hand, namely writer identification as a classification task. This comparative study evaluated several DL architectures (VGG16, ResNet50, MobileNet, Xception, and EfficientNet) end-to-end to examine their suitability for offline handwriting writer identification on the IAM and CVL databases. Each architecture was compared on training and validation accuracy, showing that ResNet50 achieved the highest training accuracy at 98.86%. However, Xception performed slightly better than all the other architectures with respect to the convergence gap for validation accuracy, at 21.79% and 15.12% for IAM and CVL, and it also showed the smallest gaps between training and validation accuracy on the IAM and CVL datasets, at 19.13% and 16.49%, respectively. These findings provide a basis for DL architecture selection and highlight overfitting as a problem for future work.
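
As a rough illustration of the end-to-end comparison described above, the sketch below fine-tunes several ImageNet backbones as writer classifiers and reports the gap between training and validation accuracy, the convergence criterion emphasized in the abstract. The image size, number of writer classes, EfficientNet variant, dataset objects, and training schedule are placeholders, not details from the study.

```python
# Sketch of an end-to-end backbone comparison for writer identification.
# IMG_SHAPE, NUM_WRITERS, the EfficientNetB0 variant, and the epoch count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

IMG_SHAPE = (224, 224, 3)   # assumed common input size
NUM_WRITERS = 100           # assumed number of writer classes

BACKBONES = {
    "VGG16": applications.VGG16,
    "ResNet50": applications.ResNet50,
    "MobileNet": applications.MobileNet,
    "Xception": applications.Xception,
    "EfficientNetB0": applications.EfficientNetB0,
}

def build_classifier(backbone_fn):
    """Pretrained backbone followed by a softmax layer over writer classes."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SHAPE, pooling="avg")
    model = models.Sequential([base, layers.Dense(NUM_WRITERS, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def compare(train_ds, val_ds, epochs=10):
    """train_ds / val_ds stand in for IAM or CVL writer-identification splits."""
    for name, fn in BACKBONES.items():
        hist = build_classifier(fn).fit(train_ds, validation_data=val_ds,
                                        epochs=epochs, verbose=0)
        gap = hist.history["accuracy"][-1] - hist.history["val_accuracy"][-1]
        print(f"{name}: train-val accuracy gap = {gap:.2%}")
```

A smaller train-validation gap, as reported for Xception in the abstract, indicates less overfitting even when another backbone reaches a higher raw training accuracy.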