Incidents of bags being left behind in public facilities such as transportation hubs, offices, and educational environments continue to pose security challenges, especially when monitoring relies solely on human operators. To address the limitations of manual CCTV observation, this study presents an automated system that identifies abandoned bags by integrating the YOLOv11n detection model with the DeepSORT tracking algorithm. The bag detector was trained on a dataset of 1,000 annotated bag images and paired with a pre-trained YOLOv11 person detector. Prior to training, image preprocessing and augmentation were applied so that the model remained robust under varying illumination, distance, and viewpoint conditions. Training was carried out in Google Colab using PyTorch for 20 epochs with a learning rate of 0.002 and a batch size of 8. Experimental results show that YOLOv11n delivers strong detection performance, achieving a mAP@0.5 of 0.787, a precision of 0.837, a recall of 0.690, and an F1-score of 0.755. Combined with DeepSORT, the system runs efficiently in real time, averaging 28.30 FPS with a latency of 35.34 ms per frame. The system identifies bags that become separated from their owners by correlating the movements of people and bags. Overall, the proposed approach can support real-time surveillance needs, although greater dataset diversity and adaptive thresholding are recommended to improve detection in more complex environments.
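
As a rough illustration of the pipeline summarized above, the sketch below couples an Ultralytics YOLO bag detector with the deep_sort_realtime DeepSORT implementation and flags a bag as abandoned once no tracked person remains within a fixed pixel radius for a set number of consecutive frames. The commented training call mirrors the reported setup (20 epochs, learning rate 0.002, batch size 8), but the weight-file names, the `ABANDON_RADIUS`/`ABANDON_FRAMES` thresholds, the center-distance rule, and the choice of the deep_sort_realtime package are illustrative assumptions, not the exact configuration used in the study.

```python
from collections import defaultdict

import cv2
from ultralytics import YOLO
from deep_sort_realtime.deepsort_tracker import DeepSort

# Hypothetical training call mirroring the reported hyperparameters
# (20 epochs, learning rate 0.002, batch size 8):
# bag_model = YOLO("yolo11n.pt")
# bag_model.train(data="bags.yaml", epochs=20, lr0=0.002, batch=8)

bag_model = YOLO("bag_yolo11n.pt")   # custom-trained bag detector (assumed weights file)
person_model = YOLO("yolo11n.pt")    # pre-trained COCO model; class 0 is "person"
tracker = DeepSort(max_age=30)

ABANDON_RADIUS = 150   # max owner-to-bag distance in pixels (assumed threshold)
ABANDON_FRAMES = 90    # consecutive frames before a bag is flagged (assumed threshold)
frames_alone = defaultdict(int)

cap = cv2.VideoCapture("cctv.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Detect bags and hand them to DeepSORT as ([x, y, w, h], confidence, class) tuples.
    bag_res = bag_model(frame, verbose=False)
    detections = []
    for box in bag_res[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append(([x1, y1, x2 - x1, y2 - y1], float(box.conf[0]), "bag"))
    tracks = tracker.update_tracks(detections, frame=frame)

    # Detect people in the same frame and keep their box centers.
    person_res = person_model(frame, classes=[0], verbose=False)
    person_centers = [((x1 + x2) / 2, (y1 + y2) / 2)
                      for x1, y1, x2, y2 in person_res[0].boxes.xyxy.tolist()]

    for trk in tracks:
        if not trk.is_confirmed():
            continue
        x1, y1, x2, y2 = trk.to_ltrb()
        bx, by = (x1 + x2) / 2, (y1 + y2) / 2
        near_owner = any(((bx - px) ** 2 + (by - py) ** 2) ** 0.5 < ABANDON_RADIUS
                         for px, py in person_centers)
        # Reset the counter whenever a person is nearby; otherwise accumulate.
        frames_alone[trk.track_id] = 0 if near_owner else frames_alone[trk.track_id] + 1
        if frames_alone[trk.track_id] >= ABANDON_FRAMES:
            print(f"Abandoned bag alert: track {trk.track_id}")

cap.release()
```

In this sketch the frame-count threshold plays the role of a fixed abandonment timer; the adaptive thresholding recommended in the study would replace these hard-coded constants with values that adjust to scene density and camera geometry.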