Subarmaniam Kannan
Faculty of Information Science and Technology, Multimedia University, Selangor, Malaysia

Published: 2 Documents

Found 2 Documents
Smart Farm-Care using a Deep Learning Model on Mobile Phones
Mercelin Francis; Kalaiarasi Sonai Muthu Anbananthen; Deisy Chelliah; Subarmaniam Kannan; Sridevi Subbiah; Jayakumar Krishnan
Emerging Science Journal Vol 7, No 2 (2023): April
Publisher: Ital Publication

DOI: 10.28991/ESJ-2023-07-02-013

Abstract

Deep learning models have provided effective solutions in image-processing applications such as image segmentation, classification, and labeling, which paved the way for applying these models in agriculture to identify diseases in crop plants. The most visible symptoms of a disease initially appear on the leaves. To identify diseases from leaf images, an accurate classification system with smaller size and lower complexity is developed for smartphones. A labeled dataset of 3,171 apple leaf images belonging to 4 different classes, including healthy leaves, is used for classification. In this work, four variants of MobileNet models, pre-trained on the ImageNet database, are retrained to diagnose diseases. The variants differ in their depth and resolution multipliers. The results show that the proposed model with a 0.5 depth multiplier and 224 resolution performs well, achieving an accuracy of 99.6%. The K-means algorithm is then used to extract additional features, which improves the accuracy to 99.7% and also measures the number of pixels forming diseased spots, which supports severity prediction.
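The classification-plus-severity pipeline described in this abstract can be sketched roughly as follows. This is a minimal illustration, assuming a Keras MobileNet backbone (alpha = 0.5, 224x224 input, ImageNet weights) and a simple colour-based K-means rule for lesion pixels; the class count, hyperparameters, and the lesion-cluster heuristic are assumptions, not the authors' exact implementation.

```python
# Hedged sketch: retrain an ImageNet-pretrained MobileNet (alpha=0.5, 224x224 input)
# for 4-class apple-leaf classification, then use K-means on pixel colours to count
# diseased pixels as a rough severity score. Paths, labels, and settings are assumed.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

NUM_CLASSES = 4          # healthy + disease classes (assumed labelling)
IMG_SIZE = (224, 224)    # resolution reported in the abstract

# Base model: MobileNet v1 pretrained on ImageNet, width/depth multiplier 0.5.
base = tf.keras.applications.MobileNet(
    input_shape=(*IMG_SIZE, 3), alpha=0.5, include_top=False, weights="imagenet"
)
base.trainable = False   # use as a feature extractor; could be unfrozen for fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would come from e.g. tf.keras.utils.image_dataset_from_directory
# over the labelled apple-leaf images; omitted here.
# model.fit(train_ds, validation_data=val_ds, epochs=20)

def diseased_pixel_ratio(image: np.ndarray, k: int = 3) -> float:
    """Cluster pixel colours with K-means and return the fraction of pixels in the
    darkest cluster -- a simple proxy for disease severity (assumed heuristic)."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    # Assumption: the cluster with the lowest mean intensity corresponds to lesions.
    lesion_cluster = np.argmin(km.cluster_centers_.mean(axis=1))
    return float(np.mean(km.labels_ == lesion_cluster))
```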
The Eye: A Light Weight Mobile Application for Visually Challenged People Using Improved YOLOv5l Algorithm
Kalaiarasi Sonai Muthu Anbananthen; Sridevi Subbiah; Subiksha Gayathri Baskar; Ratchana Selvaraj; Jayakumar Krishnan; Subarmaniam Kannan; Deisy Chelliah
Emerging Science Journal Vol 7, No 5 (2023): October
Publisher: Ital Publication

DOI: 10.28991/ESJ-2023-07-05-011

Abstract

The eye is an essential sensory organ that allows us to perceive our surroundings at a glance. Losing this sense can result in numerous challenges in daily life, and because society is designed for the sighted majority, visually impaired individuals face even greater difficulties. Empowering them and promoting self-reliance are therefore crucial. To address this need, we propose a new Android application called "The Eye" that uses Machine Learning (ML)-based object detection to recognize objects in real time through a smartphone camera or a camera attached to a walking stick. This article proposes an improved YOLOv5l algorithm to enhance object detection in such assistive applications. YOLOv5l has a larger model size and captures more complex features and details, leading to higher object detection accuracy than smaller variants such as YOLOv5s and YOLOv5m. The primary enhancement in the improved YOLOv5l algorithm is the integration of L1 and L2 regularization techniques, which prevent overfitting and improve generalization by adding a regularization term to the loss function during training. Our approach combines image processing and text-to-speech conversion modules to produce reliable results: the Android text-to-speech module converts the object recognition results into audio output. According to the experimental results, the improved YOLOv5l achieves higher detection accuracy than the original YOLOv5 and can detect small, multiple, and overlapping targets more accurately. This study contributes to the advancement of technology that helps visually impaired individuals become more self-sufficient and confident.
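The key modification described here, adding L1 and L2 penalty terms to the detection loss during training, can be sketched as follows in PyTorch. The lambda values, the torch.hub model load, and the compute_yolo_loss helper are illustrative assumptions rather than the authors' training code.

```python
# Hedged sketch: augmenting a YOLOv5l training loss with L1 + L2 regularisation,
# the enhancement described in the abstract. Hyperparameters are assumed values.
import torch

def regularized_loss(base_loss: torch.Tensor,
                     model: torch.nn.Module,
                     l1_lambda: float = 1e-5,
                     l2_lambda: float = 1e-4) -> torch.Tensor:
    """Add elastic-net style L1 + L2 penalties on the model weights to the
    original YOLO loss (box + objectness + class terms)."""
    l1 = sum(p.abs().sum() for p in model.parameters() if p.requires_grad)
    l2 = sum(p.pow(2).sum() for p in model.parameters() if p.requires_grad)
    return base_loss + l1_lambda * l1 + l2_lambda * l2

# Usage inside a simplified, assumed training step:
# model = torch.hub.load("ultralytics/yolov5", "yolov5l")   # pretrained YOLOv5l
# for images, targets in train_loader:
#     preds = model(images)
#     loss = compute_yolo_loss(preds, targets)              # hypothetical helper
#     loss = regularized_loss(loss, model)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```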