The rapid growth of urban populations necessitates intelligent systems that manage city infrastructure effectively. This study presents a real-time object detection framework designed to enhance surveillance and traffic management in smart city environments. Leveraging deep learning-based models, specifically optimized versions of YOLO (You Only Look Once), the system detects and classifies vehicles, pedestrians, and other urban entities from live video streams. The proposed method integrates edge computing for low-latency inference, enabling timely decision-making in scenarios such as traffic flow optimization, pedestrian safety, and anomaly detection. Experiments were conducted using publicly available urban datasets and real-time feeds from city surveillance cameras. The results demonstrate high detection accuracy (mAP > 85%) with inference speeds exceeding 30 FPS on edge devices, confirming the framework's suitability for deployment in resource-constrained environments. This work contributes to the ongoing advancement of intelligent urban infrastructure by providing a scalable and efficient solution for real-time object perception in smart cities.
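The mAP figure cited above is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a minimal illustration of that underlying computation (the box format, coordinates, and the 0.5 threshold here are conventional assumptions, not details from this study):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero so non-overlapping boxes yield an empty intersection.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection typically counts as a true positive when its IoU with a
# same-class ground-truth box exceeds a threshold (0.5 is conventional);
# mAP averages precision over classes (and often over IoU thresholds).
pred = (10, 10, 50, 50)   # hypothetical predicted box
gt = (12, 12, 48, 52)     # hypothetical ground-truth box
print(iou(pred, gt) > 0.5)  # → True
```

Frameworks in the YOLO family report mAP by sweeping detection confidence and averaging precision per class under this kind of matching rule.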
Copyright © 2025