Articles

Found 2 Documents

Classification Of Plants By Their Fruits And Leaves Using Convolutional Neural Networks
Irhebhude, Martins E.; Kolawole, Adeola O.; Chinyio, Chat
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1364

Abstract

The world's population is growing exponentially, which makes it imperative to increase food production. In this light, farmers, industries, and researchers struggle to identify and classify food plants. Manual fruit identification has long been challenging: it is time-consuming, labour-intensive, and requires experts because of similarities in leaves (as in the citrus family), shapes, sizes, and colours. A computerized detection technique is therefore needed for fruit classification. Existing solutions are mostly based on either the fruit or the leaf used as input. A new model using a Convolutional Neural Network (CNN) is proposed for fruit classification. A dataset of five classes of fruit and fresh and dry leaf plants (Mango, African almond, Guava, Avocado, and Cashew), comprising 1000 images each, was used. The proposed model's hyperparameters were Conv2D layers, an activation layer, a dense layer, and learning and dropout rates of 0.001 and 0.5, respectively. Accuracies of 91%, 97%, 78%, and 97% were obtained for the proposed model on the local dataset, the proposed model on the benchmark dataset, the benchmark model on the local dataset, and the benchmark model on the benchmark dataset, respectively. The proposed model is robust on both local and benchmark datasets and can be used for effective plant classification.
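The abstract names Conv2D, activation, and dense layers with a learning rate of 0.001 and a dropout rate of 0.5. The following is a minimal Keras sketch of such a classifier, assuming a 128x128 input resolution, two convolutional blocks, and a single hidden dense layer; these layer counts and filter sizes are illustrative assumptions, not the authors' published architecture.

```python
# Minimal CNN sketch for 5-class fruit/leaf classification.
# Hyperparameters 0.001 (learning rate) and 0.5 (dropout) are taken from the
# abstract; the input size and layer configuration are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # Mango, African almond, Guava, Avocado, Cashew
IMG_SIZE = (128, 128)    # assumed input resolution

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                              # dropout rate from the abstract
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one output per fruit class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # learning rate from the abstract
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```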
Human Action Recognition in Military Obstacle Crossing Using HOG and Region-Based Descriptors
Kolawole, Adeola O.; Irhebhude, Martins E.; Odion, Philip O.
Journal of Computing Theories and Applications Vol. 2 No. 3 (2025): JCTA 2(3) 2025
Publisher : Universitas Dian Nuswantoro

DOI: 10.62411/jcta.12195

Abstract

Human action recognition (HAR) involves recognizing and classifying actions performed by humans. It has many applications, including sports, healthcare, and surveillance. Challenges such as a limited number of activity classes and variations within inter- and intra-class groups lead to high misclassification rates in some intelligent systems. Existing studies have focused mainly on public datasets, with little attention to real-life action datasets and limited research on HAR for military obstacle-crossing activities. This paper focuses on recognizing human actions in obstacle-crossing competition video sequences in which multiple participants perform different obstacle-crossing activities. The study proposes a feature descriptor approach that combines a Histogram of Oriented Gradients and region descriptors (HOGReG) for human action recognition in a military obstacle-crossing competition. To this end, a dataset was captured during trainees' obstacle-crossing exercises at a military training institution. Images were segmented into background and foreground using a GrabCut-based segmentation algorithm, after which features were extracted from the segmented images using the Histogram of Oriented Gradients (HOG) and region descriptors. The extracted features were presented to a neural network classifier for classification and evaluation. The experimental results recorded recognition accuracies of 63.8%, 82.6%, and 86.4% using the region descriptors, HOG, and HOGReG, respectively. The region descriptors gave a training time of 5.6048 seconds, while HOG and HOGReG required 32.233 and 31.975 seconds, respectively. These outcomes show how effectively the proposed model performed.
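The abstract describes a pipeline of GrabCut segmentation, HOG plus region descriptors (HOGReG), and a neural network classifier. The sketch below illustrates one plausible implementation of that pipeline with OpenCV, scikit-image, and scikit-learn; the GrabCut initialisation rectangle, HOG parameters, choice of region properties, and classifier settings are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch of a HOGReG-style feature pipeline: GrabCut foreground segmentation,
# HOG features concatenated with simple region descriptors, and an MLP classifier.
import cv2
import numpy as np
from skimage.feature import hog
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

def segment_foreground(image_bgr):
    """Separate subject from background with OpenCV GrabCut,
    initialised from a rectangle covering most of the frame (assumed)."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    h, w = image_bgr.shape[:2]
    rect = (int(0.05 * w), int(0.05 * h), int(0.9 * w), int(0.9 * h))
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return image_bgr * fg[:, :, None], fg

def hogreg_features(image_bgr):
    """Concatenate HOG features with region descriptors (HOGReG)."""
    segmented, fg_mask = segment_foreground(image_bgr)
    gray = cv2.resize(cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY), (128, 128))
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    props = regionprops(label(cv2.resize(fg_mask, (128, 128))))
    if props:
        r = max(props, key=lambda p: p.area)   # largest blob assumed to be the subject
        region_vec = np.array([r.area, r.perimeter, r.eccentricity,
                               r.solidity, r.extent])
    else:
        region_vec = np.zeros(5)
    return np.concatenate([hog_vec, region_vec])

# Hypothetical usage: X stacks HOGReG vectors from video frames, y holds action labels.
# feats = hogreg_features(cv2.imread("frame_001.jpg"))
# clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500).fit(X, y)
```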