Iris recognition is widely acknowledged as one of the most reliable biometric modalities due to its high uniqueness, rich textural patterns, and long-term stability. Unlike other biometric traits, iris characteristics resist forgery, aging effects, and environmental variations, making this modality suitable for high-security applications. Recently, convolutional neural networks (CNNs) have been extensively applied in iris recognition to improve feature representation and classification accuracy. However, many CNN-based approaches still depend on conventional segmentation and handcrafted features, which reduce robustness under noisy data, illumination variations, occlusions, or unconstrained environments. To address these limitations, this study proposes an enhanced iris identification framework combining a modified T-Net for precise segmentation with deep residual feature extraction for improved discrimination. Unlike conventional systems that focus mainly on classification, the proposed approach emphasizes segmentation-driven feature consistency, ensuring that extracted features originate from accurately localized iris regions. This design enhances stability and reliability, particularly under challenging imaging conditions. The framework leverages transfer learning and efficient representation learning strategies, enabling high accuracy even with limited labelled data. Evaluations are conducted on three benchmark datasets, CASIA-IrisV4, the IITD Iris Database, and UBIRIS.v2, covering both controlled and less-constrained acquisition scenarios. Results show that the proposed framework achieves classification accuracy of up to 98.35% while maintaining computational efficiency suitable for deployment. The proposed architecture offers a robust, data-efficient, and scalable solution for secure biometric authentication, with strong potential for real-world applications such as access control, identity verification, and high-security authentication systems.
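The segmentation-driven feature consistency described above can be sketched as a simple two-stage data flow: a segmentation stage produces a binary iris mask, and the feature extractor then operates only on pixels inside that mask. The sketch below is illustrative only; a toy intensity threshold stands in for the learned modified T-Net, and a masked histogram stands in for the deep residual features. The function names `segment_iris` and `extract_features` are hypothetical, not taken from the paper.

```python
import numpy as np

def segment_iris(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Toy stand-in for the learned segmentation network: returns a binary
    mask. (Assumption: the real T-Net is learned, not a threshold.)"""
    return (image > threshold).astype(np.float32)

def extract_features(image: np.ndarray, mask: np.ndarray,
                     n_bins: int = 8) -> np.ndarray:
    """Normalized intensity histogram computed only over masked pixels,
    so every feature originates from the localized iris region."""
    iris_pixels = image[mask > 0]
    hist, _ = np.histogram(iris_pixels, bins=n_bins, range=(0.0, 1.0))
    total = hist.sum()
    return hist / total if total else hist.astype(np.float64)

# Example flow: segment first, then extract features from the masked region.
rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)
mask = segment_iris(img)
feat = extract_features(img, mask)
```

The key design point the sketch preserves is ordering: the feature stage never sees pixels outside the segmentation mask, which is what makes the extracted representation consistent across imaging conditions.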
Copyright © 2026