Cynthia Sherin
SRM Institute of Science and Technology

Articles

Found 1 Document
Regional feature learning using attribute structural analysis in bipartite attention framework for vehicle re-identification Cynthia Sherin; Kayalvizhi Jayavel
International Journal of Electrical and Computer Engineering (IJECE) Vol 13, No 5: October 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i5.pp5824-5832

Abstract

Vehicle re-identification identifies target vehicles using images obtained from numerous non-overlapping real-time surveillance cameras. Re-identification is made more challenging by illumination changes, pose differences between captured images, and variations in resolution. To overcome these challenges, fine-grained appearance changes in vehicles are recognized in addition to coarse-grained characteristics such as vehicle color and model, along with custom features such as logo stickers, annual service signs, and hangings. To demonstrate the efficiency of our proposed bipartite attention framework, a novel dataset called Attributes27, which has 27 labelled attributes for each class, is created. Our framework contains three major sections: in the first, the overall and semantic characteristics of every individual vehicle image are extracted by a double-branch convolutional neural network (CNN) layer; in the second, each branch is linked to a self-attention block that identifies regions of interest (ROIs); and in the last, a partition-alignment block is deployed to extract regional features from the obtained ROIs. Evaluation of our proposed system on the Attributes27 and VeRi-776 datasets highlights significant regional attributes of each vehicle and improves accuracy: the two datasets exhibit 98.5% and 84.3% accuracy respectively, which is higher than the 78.6% accuracy of existing methods.
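The three-stage pipeline the abstract describes (two feature branches, self-attention over spatial regions, then stripe-based partition pooling) could be sketched roughly as follows. This is a minimal NumPy stand-in, not the authors' implementation: the function names, the horizontal-stripe partition scheme, and the final concatenation are all assumptions for illustration.

```python
import numpy as np

def self_attention(feat):
    """Scaled dot-product self-attention over spatial regions.

    feat: (regions, dim) array. Each region attends to all others,
    so salient regions (ROIs) receive higher aggregate weight.
    """
    d = feat.shape[1]
    scores = feat @ feat.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax rows
    return weights @ feat

def partition_align(feat_map, parts=4):
    """Partition-alignment stand-in: split the attended feature map into
    horizontal stripes and average-pool each stripe into one vector."""
    stripes = np.array_split(feat_map, parts, axis=0)
    return np.stack([s.mean(axis=(0, 1)) for s in stripes])  # (parts, C)

def bipartite_features(global_feat, attr_feat, parts=4):
    """Two branches (global appearance vs. attribute/semantic features),
    each passed through self-attention, then stripe-pooled and
    concatenated into a single regional descriptor."""
    h, w, c = global_feat.shape
    g = self_attention(global_feat.reshape(-1, c)).reshape(h, w, c)
    a = self_attention(attr_feat.reshape(-1, c)).reshape(h, w, c)
    return np.concatenate(
        [partition_align(g, parts), partition_align(a, parts)], axis=0
    )

rng = np.random.default_rng(0)
desc = bipartite_features(rng.standard_normal((8, 8, 16)),
                          rng.standard_normal((8, 8, 16)))
print(desc.shape)  # (8, 16): 4 stripe vectors per branch
```

In a real system each branch would be a CNN backbone and the attention and pooling would be learned layers; the sketch only shows how attended region features flow into aligned per-stripe descriptors for matching.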