Object detection is one of the fundamental tasks in computer vision, with applications ranging from autonomous driving to surveillance systems. This study presents a comparative analysis of vehicle detection approaches, contrasting traditional methods (OpenCV contour analysis and Haar Cascade) with the modern deep learning-based You Only Look Once version 8 (YOLOv8) and its variants. Vehicles were identified and localized within video frames using bounding boxes, and performance was assessed through accuracy, F1-score, mean average precision (mAP), and inference speed. YOLOv8 consistently achieved superior accuracy (up to 98% in specific scenarios) and real-time processing speeds (155 FPS), confirming its suitability for safety-critical applications such as intelligent transport systems and autonomous navigation. However, its higher computational and memory demands highlight deployment trade-offs, in which lighter variants such as YOLOv8s remain feasible for embedded or low-power devices. In contrast, Haar Cascade and contour analysis offered faster execution and smaller memory footprints but lacked robustness under complex environmental conditions. The study also acknowledges limitations such as dataset bias, adverse weather effects, and scalability challenges, which may impact generalization in real-world deployments. By analyzing these trade-offs, this work provides practical insights to guide practitioners in selecting suitable vehicle detection solutions across diverse application environments.