Breast cancer remains one of the leading causes of cancer-related mortality among women worldwide, underscoring the need for accurate and efficient diagnostic tools. Ultrasound imaging is widely used for breast lesion screening because of its affordability and safety, yet manual interpretation often suffers from inter-observer variability and subjectivity. Recent advances in deep learning, particularly the YOLO (You Only Look Once) family, have demonstrated strong potential for real-time medical image detection and segmentation. This study compares the performance of YOLOv8m-seg and YOLOv11m-seg in detecting and segmenting breast lesions from ultrasound images to determine which model offers a better balance of accuracy, sensitivity, and computational efficiency. Two public ultrasound datasets were employed to ensure data diversity and evaluation fairness. Both models were trained under identical preprocessing, augmentation, and hyperparameter settings, using a 640×640 input resolution and the AdamW optimizer. Model performance was evaluated using Precision, Recall, F1-score, mAP@0.5, mAP@0.5:0.95, Mask Precision, and inference time. The experimental results show that YOLOv11m-seg outperformed YOLOv8m-seg in precision (0.859), mask accuracy (0.859), and inference time (16.7 ms), while YOLOv8m-seg maintained slightly higher recall (0.736). YOLOv11m-seg also demonstrated stronger generalization across heterogeneous datasets and superior boundary segmentation. Overall, YOLOv11m-seg achieved the best performance and is better suited for real-time clinical applications. This study contributes empirical benchmarks for future Computer-Aided Diagnosis (CAD) development and highlights the potential of modern YOLO architectures for improving the accuracy and efficiency of breast ultrasound lesion detection.
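As a minimal illustration of one of the evaluation metrics above, the F1-score combines Precision and Recall as their harmonic mean. The sketch below uses hypothetical input values for illustration only, not the study's reported figures.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical values, not the paper's reported results.
print(round(f1_score(0.8, 0.7), 4))  # harmonic mean sits below the arithmetic mean
```

Because the harmonic mean penalizes imbalance between the two components, a model must keep both Precision and Recall high to score well on F1.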