Azad, Ruhan Bevi
Unknown Affiliation

Published: 2 Documents

Articles

Found 2 Documents

A novel YOLOv8 architecture for human activity recognition of occluded pedestrians Rajakumar, Shaamili; Azad, Ruhan Bevi
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 5: October 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i5.pp5244-5252

Abstract

Perception is difficult in video surveillance applications because of the presence of dynamic objects and constant environmental changes. This problem worsens when bad weather, including snow, rain, fog, dark nights, and bright daylight, degrades the quality of perception. The proposed work aims to enhance the accuracy of camera-based perception for human activity detection in video surveillance during adverse weather conditions. To identify primary human activities, such as walking on the road during severe weather, transfer learning across many adverse conditions using real-time images or videos is proposed as an improvement to you only look once v8 (YOLOv8)-based human activity recognition in poor weather. We collected training frames from videos depicting human walking activity, its combined forms, and other subgroups, such as running and standing, and sorted them based on their characteristics. Detection efficiency was assessed on these images and subgroups to compare the resulting training weights. Training on real-time activity images greatly enhanced detection performance relative to the existing YOLO base weights. Furthermore, a notable improvement in human activity detection efficiency was obtained by utilizing additional images and feature-related data-combination techniques.
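The abstract compares detection performance of fine-tuned weights against the existing YOLO base weights. A minimal sketch of one building block such an evaluation typically relies on, intersection-over-union matching of predicted boxes to ground-truth activity boxes, is shown below; all function names, box formats, and thresholds here are illustrative assumptions, not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_accuracy(preds, gts, thresh=0.5):
    """Fraction of ground-truth (class, box) pairs matched by a prediction
    of the same activity class with IoU above `thresh`."""
    hits = 0
    for cls, gt_box in gts:
        if any(p_cls == cls and iou(p_box, gt_box) >= thresh
               for p_cls, p_box in preds):
            hits += 1
    return hits / len(gts) if gts else 0.0
```

Running such a matcher over frames from each weather subgroup (snow, rain, fog, night) would yield the per-condition comparison of base versus fine-tuned weights the abstract describes.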
Two-scale decomposition and deep learning fusion for visible and infrared images Azad, Ruhan Bevi; Unnikrishnan, Hari; Gopinath, Lokesh
International Journal of Electrical and Computer Engineering (IJECE) Vol 15, No 2: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v15i2.pp1593-1601

Abstract

The paper focuses on the fusion of visible and infrared images to generate composite images that preserve both the thermal radiation information from the infrared spectrum and the detailed texture from the visible spectrum. The proposed approach combines traditional methods, such as two-scale decomposition, with deep learning techniques, specifically employing an autoencoder architecture. The source images are subjected to two-scale decomposition, which extracts high-frequency detail and low-frequency base information. Additionally, an algorithmic unravelling technique establishes a logical connection between deep neural networks and traditional signal processing algorithms. The model consists of two encoders for decomposition and a decoder after the unravelling operation. During testing, a fusion layer merges the decomposed feature maps, and the decoder generates the fused image. Evaluation metrics, including entropy, average gradient, spatial frequency, and standard deviation, are employed to quantitatively assess fusion quality. The proposed approach demonstrates promise for effectively combining visible and infrared imagery for various applications.
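A minimal sketch of the two-scale idea, assuming a box filter as the smoothing operator and simple average/max-abs fusion rules in place of the paper's learned encoders and fusion layer (all of which are substitutions, not the authors' method), together with the entropy metric the abstract lists:

```python
import numpy as np

def two_scale_decompose(img, k=3):
    """Split an image into a low-frequency base layer (k x k box filter)
    and a high-frequency detail layer (residual), so base + detail == img."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    return base, img - base

def fuse(base_v, detail_v, base_ir, detail_ir):
    """Baseline fusion: average the base layers, keep the stronger detail."""
    base_f = (base_v + base_ir) / 2.0
    detail_f = np.where(np.abs(detail_v) >= np.abs(detail_ir),
                        detail_v, detail_ir)
    return base_f + detail_f

def entropy(img, bins=256):
    """Shannon entropy of the intensity histogram (one of the listed metrics)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

By construction the decomposition is lossless (base plus detail reconstructs the source), which is what lets the fusion layer operate on the two frequency bands independently before recombination.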