Articles

Journal : Scientific Journal of Informatics

Siamese Neural Networks with Chi-square Distance for Trademark Image Similarity Detection
Authors : Suyahman; Sunardi; Murinto; Arfiani Nur Khusna
Scientific Journal of Informatics Vol. 11 No. 2: May 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i2.4654

Abstract

Purpose: The objective of this study is to address the limitations of existing trademark image similarity analysis methods by integrating a Chi-square distance metric within a Siamese neural network framework. Traditional approaches using Euclidean distance often fail to accurately capture the complex visual features of trademarks, leading to suboptimal performance in distinguishing similar trademarks. This research aims to improve the precision and robustness of trademark comparison by leveraging the Chi-square distance, which is more sensitive to image variations. Methods: The approach replaces the Euclidean distance metric traditionally employed in a Siamese neural network with the Chi-square distance metric. This alteration allows the network to better capture and analyze critical visual features such as color and texture. The modified network is trained and tested on a comprehensive dataset of trademark images, enabling it to distinguish between similar and dissimilar trademarks based on subtle visual cues. Result: The findings show a significant increase in accuracy, with the modified network achieving an accuracy rate of 98%. This marks a notable improvement over baseline models that use Euclidean distance, demonstrating the effectiveness of the Chi-square distance metric in enhancing the model's ability to discriminate between trademarks. Novelty: The novelty of this research lies in the application of the Chi-square distance within a deep learning framework specifically for trademark image similarity detection, yielding higher precision in image-based comparisons.
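As an illustration of the approach described in this abstract (not the authors' published implementation), the sketch below shows a Siamese network in PyTorch whose twin encoders share weights and whose embeddings are compared with the Chi-square distance d(x, y) = Σ_i (x_i − y_i)² / (x_i + y_i + ε); the encoder layers, embedding size, and the Softplus activation that keeps embeddings non-negative are assumptions made for the example.

```python
import torch
import torch.nn as nn


class ChiSquareDistance(nn.Module):
    """Chi-square distance between embedding vectors:
    d(x, y) = sum_i (x_i - y_i)^2 / (x_i + y_i + eps)."""

    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return ((x - y) ** 2 / (x + y + self.eps)).sum(dim=1)


class SiameseChiSquare(nn.Module):
    """Twin encoder with shared weights; outputs one distance per image pair."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(embedding_dim),
            nn.Softplus(),  # keeps embeddings non-negative, as chi-square expects
        )
        self.distance = ChiSquareDistance()

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        return self.distance(self.encoder(img_a), self.encoder(img_b))


if __name__ == "__main__":
    net = SiameseChiSquare()
    a = torch.rand(4, 3, 64, 64)  # a batch of 4 trademark image pairs
    b = torch.rand(4, 3, 64, 64)
    print(net(a, b).shape)  # torch.Size([4]): one distance per pair
```

In practice such a network would be trained with a pairwise objective (for example a contrastive loss) so that small distances correspond to similar trademarks; that training loop is omitted from the sketch.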
Comparative Analysis of CNN Architectures in Siamese Networks with Test-Time Augmentation for Trademark Image Similarity Detection
Authors : Suyahman; Sunardi; Murinto
Scientific Journal of Informatics Vol. 11 No. 4: November 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i4.13811

Abstract

Purpose: This study aims to enhance the detection of trademark image similarity by conducting a comparative analysis of various Convolutional Neural Network (CNN) architectures within Siamese networks, integrated with test-time augmentation techniques. Existing methods often struggle to accurately capture subtle visual similarities between trademarks due to limitations in feature extraction and generalization. The research seeks to identify the most effective CNN architecture for this task and to assess the impact of test-time augmentation on model performance. Methods: The study implements Siamese networks using three distinct CNN architectures: VGG16, VGG19, and ResNet50. Each network is trained on a dataset of trademark images to learn deep feature representations that discriminate between similar and dissimilar trademarks. During evaluation, test-time augmentation (TTA) is applied to improve robustness by averaging predictions over multiple augmented versions of the input images. The TTA transformations include random rotations (up to 40%), width and height shifts (up to 20%), random shear transformations, zooming (up to 20%), horizontal and vertical flips, and random brightness adjustments. Result: Experimental findings reveal that the VGG19-based Siamese network achieves the highest accuracy at 98.82%, outperforming the VGG16-based network (97.07%) and the ResNet50-based network (50.00%). Applying TTA improved performance across all models, with the VGG19 model showing the largest gain. The very low accuracy of the ResNet50 model is attributed to its misclassification of original trademark images as near-forgeries, likely due to overfitting or an inability to generalize fine-grained visual features. Novelty: The novelty lies in the comparative analysis of the VGG16, VGG19, and ResNet50 architectures within Siamese networks, combined with test-time augmentation, for trademark image similarity detection.
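As an illustrative sketch of the evaluation strategy described above (not the authors' implementation), the snippet below builds a Siamese model on a torchvision VGG19 backbone and averages its similarity score over several augmented views of each input pair; the embedding head, the exponential distance-to-similarity mapping, the specific augmentation parameters, and the use of randomly initialized weights are assumptions for the example.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms


class SiameseVGG19(nn.Module):
    """Siamese model whose two inputs pass through one shared VGG19 trunk."""

    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        # weights=None keeps the sketch self-contained; ImageNet weights
        # would normally be loaded before fine-tuning.
        self.features = models.vgg19(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512, embedding_dim)

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.pool(self.features(x)).flatten(1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        dist = torch.abs(self.embed(a) - self.embed(b)).mean(dim=1)
        return torch.exp(-dist)  # similarity in (0, 1]; higher = more similar


# Test-time augmentation: the prediction is repeated on transformed inputs
# and the resulting similarity scores are averaged.
tta_transforms = [
    transforms.RandomRotation(40),
    transforms.RandomAffine(0, translate=(0.2, 0.2)),
    transforms.RandomAffine(0, shear=10),
    transforms.RandomAffine(0, scale=(0.8, 1.2)),
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomVerticalFlip(p=1.0),
    transforms.ColorJitter(brightness=0.2),
]


@torch.no_grad()
def predict_with_tta(model: nn.Module, img_a: torch.Tensor, img_b: torch.Tensor,
                     n_views: int = 5) -> torch.Tensor:
    """Average the similarity over the original pair plus augmented views
    (each image of the pair is augmented independently here)."""
    model.eval()
    scores = [model(img_a, img_b)]
    for t in tta_transforms[:n_views]:
        scores.append(model(t(img_a), t(img_b)))
    return torch.stack(scores).mean(dim=0)


if __name__ == "__main__":
    model = SiameseVGG19()
    a = torch.rand(2, 3, 224, 224)
    b = torch.rand(2, 3, 224, 224)
    print(predict_with_tta(model, a, b))  # averaged similarity per pair
```

Swapping the backbone for VGG16 or ResNet50 changes only the trunk and the input size of the embedding head (512 channels for the VGG variants, 2048 for ResNet50's final block), which is what makes this setup convenient for the kind of architecture comparison the study reports.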