Poultry farming represents one of the fastest-growing sectors in global food production, yet disease outbreaks, high mortality, and labor shortages continue to threaten its sustainability. Conventional health monitoring methods based on visual inspection are time-consuming, subjective, and inadequate for early anomaly detection. In response, computer vision and deep learning have emerged as transformative tools for livestock management. While prior implementations of the YOLO object detection family, such as YOLOv5 and YOLOv8, have achieved notable success, their performance often deteriorates in dense flocks, low-light conditions, and occlusion-prone environments. This study introduces a YOLOv9-assisted vision framework tailored for poultry health assessment in commercial farm settings. The system integrates smart cameras with edge computing to enable real-time detection of behavioral and physiological anomalies without dependence on high-bandwidth or cloud-based resources. A dataset of 903 annotated poultry images, categorized into healthy and sick classes, was employed for model development. The trained model achieved 88.7% precision, 97% recall, an F1-score of 0.82, and an mAP@0.5 of 0.88, demonstrating robustness under variable illumination, bird occlusion, and high-density environments. Comparative evaluation confirmed that YOLOv9 provides a superior balance of accuracy, generalization, and computational efficiency relative to YOLOv8–YOLOv11, supporting practical deployment on edge devices. Limitations include the binary scope of health classification and reliance on a single dataset. Future directions involve extending the framework to multi-class disease recognition, cross-dataset validation, behavior-based temporal modeling, and multimodal fusion, advancing predictive analytics and welfare-oriented poultry farming.
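For readers less familiar with detection metrics, the quantities reported above can be sketched as follows. This is a minimal illustration of the standard definitions of precision, recall, and F1-score; the true-positive, false-positive, and false-negative counts used here are hypothetical and do not come from the study's dataset.

```python
# Standard detection metric definitions; the counts below are hypothetical,
# chosen only to illustrate how the reported figures are computed.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted detections that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth objects that were detected."""
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

if __name__ == "__main__":
    tp, fp, fn = 90, 10, 15  # hypothetical counts
    p = precision(tp, fp)
    r = recall(tp, fn)
    print(f"precision={p:.3f} recall={r:.3f} f1={f1_score(p, r):.3f}")
```

mAP@0.5 extends this idea by averaging precision over recall levels, counting a detection as correct when its bounding box overlaps the ground truth with an intersection-over-union of at least 0.5.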
Copyright © 2026