Articles

Found 3 Documents

Classification of Plants by Their Fruits and Leaves Using Convolutional Neural Networks
Authors: Irhebhude, Martins E.; Kolawole, Adeola O.; Chinyio, Chat
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher: Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1364

Abstract

The world's population is growing exponentially, which makes an increase in food production imperative. In this light, farmers, industries, and researchers struggle with identifying and classifying food plants. Identifying fruits manually has long been challenging: it is time-consuming and labour-intensive, and it requires experts because of the similarity of fruit leaves (as in the citrus family), shapes, sizes, and colours. A computerized detection technique is therefore needed for the classification of fruits. Existing solutions to fruit classification are mostly based on either the fruit or the leaf used as input. A new model using a Convolutional Neural Network (CNN) is proposed for fruit classification. A dataset of fruits and fresh and dry leaves from 5 plant classes (Mango, African almond, Guava, Avocado, and Cashew), comprising 1000 images each, was used. The proposed model's hyperparameters, a Conv2D layer, an activation layer, a dense layer, and learning and dropout rates of 0.001 and 0.5 respectively, were used for the experiment. Accuracies of 91%, 97%, 78%, and 97% were obtained for the proposed model on the local dataset, the proposed model on the benchmark dataset, the benchmark model on the local dataset, and the benchmark model on the benchmark dataset, respectively. The proposed model is robust on both local and benchmark datasets and can be used for effective classification of plants.
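
For illustration, the sketch below shows a minimal CNN along the lines described in this abstract, written in TensorFlow/Keras. Only the 5-class output, the 0.001 learning rate, and the 0.5 dropout rate come from the abstract; the filter counts, kernel sizes, pooling layers, input resolution, optimizer, and loss function are assumptions rather than the authors' exact architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_fruit_classifier(input_shape=(128, 128, 3), num_classes=5):
    # Conv2D, activation, and dense layers as listed in the abstract;
    # filter counts, kernel sizes, and input resolution are assumed here.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                              # dropout rate from the abstract
        layers.Dense(num_classes, activation="softmax"),  # 5 fruit/leaf classes
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # learning rate from the abstract
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_fruit_classifier()
model.summary()
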
Human Action Recognition in Military Obstacle Crossing Using HOG and Region-Based Descriptors
Authors: Kolawole, Adeola O.; Irhebhude, Martins E.; Odion, Philip O.
Journal of Computing Theories and Applications Vol. 2 No. 3 (2025): JCTA 2(3) 2025
Publisher: Universitas Dian Nuswantoro

DOI: 10.62411/jcta.12195

Abstract

Human action recognition (HAR) involves recognizing and classifying actions performed by humans. It has many applications, including sports, healthcare, and surveillance. Challenges such as a limited number of activity classes and variations within inter- and intra-class groups lead to high misclassification rates in some of the intelligent systems developed. Existing studies have focused mainly on public datasets, with little attention to real-life action datasets and limited research on HAR for military obstacle-crossing activities. This paper focuses on recognizing human actions in an obstacle-crossing competition video sequence in which multiple participants perform different obstacle-crossing activities. The study proposes a feature descriptor approach that combines a Histogram of Oriented Gradients and region descriptors (HOGReG) for human action recognition in a military obstacle-crossing competition. To achieve this objective, the dataset was captured during military trainees' obstacle-crossing exercises at a military training institution. Images were segmented into background and foreground using a GrabCut-based segmentation algorithm; thereafter, features were extracted from the segmented images using a Histogram of Oriented Gradients (HOG) and region descriptors. The extracted features were presented to a neural network classifier for classification and evaluation. The experimental results recorded recognition accuracies of 63.8%, 82.6%, and 86.4% using the region descriptors, HOG, and HOGReG, respectively. The region descriptors gave a training time of 5.6048 seconds, while HOG and HOGReG required 32.233 and 31.975 seconds, respectively. These outcomes show how effectively the proposed model performed.
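
As an illustration of the pipeline described above, the sketch below combines GrabCut segmentation, HOG features, and a few region descriptors, with a scikit-learn MLP standing in for the neural network classifier. The specific region properties, HOG parameters, image size, and classifier settings are assumptions for illustration, not the paper's exact configuration.

import cv2
import numpy as np
from skimage.feature import hog
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

def segment_foreground(image_bgr, rect):
    # GrabCut segmentation: rect is a rough bounding box around the person.
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return image_bgr * fg[:, :, None], fg

def hogreg_features(image_bgr, fg_mask):
    # HOG on the resized grayscale image, concatenated with a few region
    # descriptors of the largest foreground region (assumed set of properties).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 128))
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    regions = regionprops(label(fg_mask))
    largest = max(regions, key=lambda r: r.area)
    region_vec = np.array([largest.area, largest.eccentricity,
                           largest.solidity, largest.extent])
    return np.concatenate([hog_vec, region_vec])

# Usage sketch: build a feature matrix X and action labels y over the dataset,
# then train a small neural network classifier, e.g.
#   clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500).fit(X_train, y_train)
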
Hybrid Dynamic Programming Healthcare Cloud-Based Quality of Service Optimization
Authors: Sitlong, Nengak I.; Evwiekpaefe, Abraham E.; Irhebhude, Martins E.
Journal of Computing Theories and Applications Vol. 3 No. 2 (2025): in progress
Publisher: Universitas Dian Nuswantoro

DOI: 10.62411/jcta.14455

Abstract

The integration of the Internet of Things (IoT) with cloud computing has revolutionized healthcare systems, offering scalable, real-time patient monitoring. However, optimizing response times and energy consumption remains crucial for efficient healthcare delivery. This research evaluates various algorithmic approaches for workload migration and resource management within IoT cloud-based healthcare systems. The performance of the algorithm implemented in this research, Hybrid Dynamic Programming and Long Short-Term Memory (Hybrid DP+LSTM), was analyzed against six other key algorithms, namely Gradient Optimization with Back Propagation to Input (GOBI), Deep Reinforcement Learning (DRL), improved GOBI (GOBI2), Predictive Offloading for Network Devices (POND), Mixed Integer Linear Programming (MILP), and a Genetic Algorithm (GA), based on their average response time and energy consumption. Hybrid DP+LSTM achieves the lowest response time (82.91 ms) with an energy consumption of 2,835,048 joules per container. The analysis showed that Hybrid DP+LSTM delivers significant response-time improvements of 89.3%, 79.0%, 83.8%, 97.0%, 99.8%, and 99.94% over GOBI, GOBI2, DRL, POND, MILP, and GA, respectively. In terms of energy consumption, Hybrid DP+LSTM outperforms the other approaches, with GOBI2 (3,664,337 joules) consuming 29.3% more energy, DRL (2,973,238 joules) consuming 4.9% more, GOBI (4,463,010 joules) consuming 57.4% more, POND (3,310,966 joules) consuming 16.8% more, MILP (3,005,498 joules) consuming 6.0% more, and the GA (3,959,935 joules) consuming 39.7% more. In the ablation study, the Hybrid DP+LSTM model achieves a 47.05% response-time improvement over DP-only (156.57 ms) and a 70.64% improvement over LSTM-only (282.41 ms). On the energy-efficiency side, Hybrid DP+LSTM shows a 22.80% improvement over LSTM-only (3,671,51 joules) but a 7.34% underperformance compared to DP-only (2,640,93 joules). These findings indicate that the Hybrid DP+LSTM technique provides the best trade-off between response time and energy efficiency. Future research should further explore hybrid approaches to optimize these metrics in IoT cloud-based healthcare systems.
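
The percentage figures above follow directly from the raw numbers reported: each "consuming X% more energy" value is relative to Hybrid DP+LSTM's own consumption, and the ablation improvements are relative to each single-technique baseline. A short Python check, using only values quoted in the abstract (the two partially garbled ablation energy figures are left out), reproduces them.

# Sanity check of the reported percentages, using only figures from the abstract.
hybrid_energy = 2_835_048  # joules per container for Hybrid DP+LSTM
others = {"GOBI": 4_463_010, "GOBI2": 3_664_337, "DRL": 2_973_238,
          "POND": 3_310_966, "MILP": 3_005_498, "GA": 3_959_935}
for name, joules in others.items():
    extra = 100 * (joules - hybrid_energy) / hybrid_energy
    print(f"{name} consumes {extra:.1f}% more energy")  # 57.4, 29.3, 4.9, 16.8, 6.0, 39.7

hybrid_rt, dp_only_rt, lstm_only_rt = 82.91, 156.57, 282.41  # average response times in ms
print(f"vs DP-only:   {100 * (dp_only_rt - hybrid_rt) / dp_only_rt:.2f}% faster")      # 47.05
print(f"vs LSTM-only: {100 * (lstm_only_rt - hybrid_rt) / lstm_only_rt:.2f}% faster")  # 70.64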