Background of study: Blood cell analysis is vital for diagnosing medical conditions, but traditional manual methods are laborious and error-prone. Deep learning, especially YOLO models, offers automated solutions for medical image analysis. However, the real-world effectiveness of the latest YOLOv11 in blood cell detection has not been thoroughly investigated, as general object-detection improvements may not translate to biomedical images because of their unique characteristics.

Aims and scope of paper: This study systematically compares YOLOv10 and YOLOv11 on a public blood cell detection dataset to assess whether YOLOv11's advancements provide tangible benefits for blood cell classification. The goal is to identify the most effective model for accurate and efficient detection in microscopic images, guiding the selection of AI-driven diagnostic tools.

Methods: Both models were trained and tested under identical conditions using the Kaggle Blood Cell Detection Dataset (RBCs, WBCs, and platelets). Images were resized to 640x640 pixels. Performance metrics included mAP (mAP@50 and mAP@50-95), precision, recall, F1-score, inference speed, model complexity, and training time.

Result: YOLOv11n consistently showed higher accuracy (mAP@50: 0.9279 vs. 0.9120; mAP@50-95: 0.6524 vs. 0.6347), particularly for RBCs and WBCs. However, YOLOv11n had longer inference (11.35 ms/image) and postprocessing times (8.64 ms/image) than YOLOv10n (7.00 ms/image and 0.90 ms/image). YOLOv11n trained faster (0.311 hours vs. 0.375 hours), with a smaller model size (5.5 MB vs. 5.8 MB), fewer parameters, and lower computational complexity.

Conclusion: YOLOv11n offers superior accuracy and improved training efficiency, making it suitable for medical image object detection where precision is paramount. The increased inference and postprocessing times indicate an accuracy-speed trade-off. Model selection should balance these factors according to the deployment context.
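As a minimal sketch of the comparison described in the Methods, the following Python snippet shows how the two nano variants could be trained and validated under identical settings with the Ultralytics API; the dataset YAML path, epoch count, and weight file names are illustrative assumptions rather than values reported by this study.

# Minimal sketch, assuming the Ultralytics package is installed.
# "blood_cells.yaml", epochs=100, and the weight file names are placeholders,
# not the exact settings used in the paper.
from ultralytics import YOLO

for weights in ("yolov10n.pt", "yolo11n.pt"):  # nano variants of YOLOv10 and YOLOv11
    model = YOLO(weights)
    # Train both models under identical conditions at 640x640 input resolution.
    model.train(data="blood_cells.yaml", imgsz=640, epochs=100, seed=0)
    # Validate on the same data and report the accuracy metrics used in the paper.
    metrics = model.val(data="blood_cells.yaml", imgsz=640)
    print(weights, "mAP@50:", metrics.box.map50, "mAP@50-95:", metrics.box.map)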