Image deduplication is a critical task in domains such as digital asset management, content-based image retrieval (CBIR), and data storage optimization. This paper presents a method for improving deduplication accuracy by integrating multiple feature types within a single framework that combines visual, semantic, and structural image features. The system employs deep learning architectures, including convolutional neural networks (CNNs) and transformers, to extract high-level features, which are fused through an adaptive weighting mechanism that adjusts dynamically to image content. Experiments across diverse datasets show that the proposed multi-feature fusion approach significantly outperforms traditional single-feature methods, achieving an average improvement of 15% in deduplication accuracy. By addressing the limitations of single-feature methods in handling complex visual similarities, this study offers a more robust and efficient solution for image deduplication.
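To make the fusion idea concrete, the sketch below shows one plausible way to combine CNN, transformer, and structural feature vectors with content-dependent weights and compare the fused embeddings for duplicate detection. It is a minimal illustration under assumed design choices, not the authors' implementation: the `AdaptiveFusion` module, the gating network, the feature dimensionalities, and the 0.9 similarity threshold are all hypothetical.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse per-image feature vectors (e.g., CNN visual features, transformer
    semantic features, structural descriptors) using weights predicted from
    the features themselves (a simple content-adaptive gating scheme)."""

    def __init__(self, dims, fused_dim=256):
        super().__init__()
        # Project each feature type to a common dimensionality.
        self.projections = nn.ModuleList([nn.Linear(d, fused_dim) for d in dims])
        # A small gating network predicts one weight per feature type
        # from the concatenated projections (content-dependent weighting).
        self.gate = nn.Sequential(
            nn.Linear(fused_dim * len(dims), len(dims)),
            nn.Softmax(dim=-1),
        )

    def forward(self, features):
        # features: list of tensors, each of shape (batch, dims[i])
        projected = [proj(f) for proj, f in zip(self.projections, features)]
        stacked = torch.stack(projected, dim=1)            # (batch, n_types, fused_dim)
        weights = self.gate(torch.cat(projected, dim=-1))  # (batch, n_types)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)
        return nn.functional.normalize(fused, dim=-1)      # unit-length embedding

# Duplicate decision via cosine similarity of fused embeddings
# (dimensions below are placeholders for CNN, transformer, and structural features).
fusion = AdaptiveFusion(dims=[2048, 768, 128])
visual, semantic, structural = (torch.randn(4, d) for d in (2048, 768, 128))
emb = fusion([visual, semantic, structural])
similarity = emb @ emb.T            # pairwise cosine similarity
is_duplicate = similarity > 0.9     # threshold chosen for illustration only
```

In a pipeline of this kind, the gating network lets the relative influence of each feature type vary per image, so that, for example, structural cues can dominate for near-identical crops while semantic features carry more weight for re-encoded or stylistically altered copies.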