Ensuring the safety of connected and autonomous vehicles requires intelligent systems capable of detecting suspicious behaviors in real time while providing clear explanations to human operators. This paper presents a framework for the autonomous, explainable detection of suspicious activities around connected vehicles, combining multi-sensor vision, multi-agent reinforcement learning (MARL), and explainable artificial intelligence (XAI). The system relies on lightweight deep learning models (YOLO-tiny, MobileNet) for perception, together with spatio-temporal reasoning, to identify abnormal events such as prolonged parking, restricted-area crossings, or the placement of suspicious objects. Cooperative decision-making between vehicles and roadside units (RSUs) is coordinated through MARL, while an XAI module generates visual and textual explanations to improve transparency and user trust. The framework has been implemented and evaluated both in simulation (CARLA, SUMO/Veins) and on embedded platforms (Jetson Nano/Orin). Results show an F1-score of 0.91, real-time operation at 7.5 FPS, and a 40% reduction in false positives, confirming the robustness of the proposed system for the cyber-physical security of intelligent transportation systems.
Copyright © 2026