Ounejjar, Youssef
Unknown Affiliation

Published: 2 Documents
Enhancement of YOLOv5 for automatic weed detection through backbone optimization
Habib, Mohammed; Sekhra, Salma; Tannouche, Adil; Ounejjar, Youssef
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 1: February 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i1.pp658-666

Abstract

In the context of our research project, which involves developing a robotic system capable of eliminating weeds using deep learning techniques, the selection of a powerful object detection model is essential. Object detectors typically consist of three components: backbone, neck, and prediction head. In this study, we propose an enhancement to the you only look once version 5 (YOLOv5) network by using popular convolutional neural network (CNN) architectures (such as DarkNet and MobileNet) as backbones. The objective of this study is to identify the backbone that best improves YOLOv5's performance in detecting and ultra-localizing pea crops while preserving its other layers (neck and head). Additionally, we compared their results with those of the most commonly used object detectors. Our findings indicate that the fastest models among the networks studied were MobileNet, YOLO-tiny, and YOLOv5, with speeds ranging from 5 to 14 milliseconds per image. Among these models, MobileNetv1 demonstrated the highest accuracy, achieving an average precision (AP) score of 89.3% at an intersection over union (IoU) threshold of 0.5. However, this model's accuracy decreased as the threshold increased, suggesting that it does not provide perfect crop delineation. On the other hand, while YOLOv5 had a lower AP score than MobileNetv1 at an IoU threshold of 0.5, it exhibited greater stability when faced with variations in this threshold.
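The AP figures above hinge on the IoU threshold used to decide whether a predicted box counts as a correct detection, which is why a model can score well at IoU 0.5 yet degrade at stricter thresholds. A minimal sketch of that mechanic (the box coordinates below are hypothetical, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A slightly shifted detection passes the lenient threshold but fails
# the strict one -- imperfect delineation in the sense the abstract describes.
ground_truth = (10, 10, 110, 110)   # hypothetical pea-crop box
prediction   = (25, 25, 125, 125)   # detection offset by 15 px in x and y
score = iou(ground_truth, prediction)
print(f"IoU = {score:.3f}")          # 0.566
print("match at IoU 0.5 :", score >= 0.5)    # True
print("match at IoU 0.75:", score >= 0.75)   # False
```

Counting such matches across a test set at each threshold is what produces the per-threshold AP values the study compares.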
Comparative analysis of convolutional neural network architectures for poultry meat classification
Sekhra, Salma; Habib, Mohammed; Tannouche, Adil; Ounejjar, Youssef
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i5.pp3715-3723

Abstract

The increasing demand for standardized food quality assurance, particularly in regions like Morocco, emphasizes the need for accurate classification of poultry meat. This study evaluates and compares ten convolutional neural network (CNN) architectures—VGG19, VGG16, ResNet50, GoogleNet, MobileNetV1, MobileNetV2, DenseNet, NasNet, EfficientNet, and AlexNet—for classifying commonly consumed poultry meat types in Moroccan markets, including chicken, turkey, fayoumi, and farmer’s chicken. A labeled image dataset was used to train and test each model, with performance assessed using metrics such as accuracy, precision, recall, training time, and computational complexity. Additionally, the study investigates how dataset size influences model performance, addressing challenges like limited data availability and scalability. The results highlight DenseNet as the top-performing architecture, achieving 98% classification accuracy while also demonstrating superior computational efficiency. These findings are valuable for improving food quality control, offering data-driven support for stakeholders in poultry production, distribution, and regulatory bodies. By identifying optimal deep learning models for poultry meat classification, the study contributes to enhancing food authentication and safety in Morocco and similar regions. It also encourages the integration of AI-driven systems in food inspection processes, providing scalable, accurate, and efficient solutions for ensuring standardized quality in the poultry supply chain.
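The accuracy, precision, and recall metrics used to rank the ten architectures are standard multi-class classification measures. A minimal sketch of how they are computed from paired label lists (the class names and the tiny example labels are illustrative, not the paper's dataset):

```python
# Hypothetical labels for the four poultry classes mentioned in the abstract.
CLASSES = ["chicken", "turkey", "fayoumi", "farmers_chicken"]

def per_class_metrics(y_true, y_pred):
    """Overall accuracy plus per-class (precision, recall) from label lists."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    metrics = {}
    for c in CLASSES:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return accuracy, metrics

# Toy example: one chicken image misclassified as turkey.
y_true = ["chicken", "chicken", "turkey", "fayoumi"]
y_pred = ["chicken", "turkey", "turkey", "fayoumi"]
acc, m = per_class_metrics(y_true, y_pred)
print(f"accuracy = {acc:.2f}")       # 0.75
for c, (p, r) in m.items():
    print(f"{c:16s} precision={p:.2f} recall={r:.2f}")
```

In practice these metrics would be computed per model on the held-out test split, alongside training time and parameter count, to produce the comparison the study reports.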