Object detection plays a crucial role in Advanced Driver Assistance Systems (ADAS), particularly since the integration of deep learning techniques. These advances have improved ADAS applications by enabling more precise object identification, thereby enhancing real-time decision-making. Object detection models fall into two main groups: two-stage and one-stage models. Although prior studies show that one-stage detectors generally trade some accuracy for higher frames per second (FPS), this speed advantage makes them better suited for real-time ADAS applications. Our study analyzes the performance of an object detection model built on SSD-MobileNet, a one-stage detector. We focus on identifying road-related objects such as vehicles and traffic signs. The contribution of our work lies in developing an object detection model from a pre-trained SSD-MobileNet through transfer learning: we introduce a new fully connected layer tailored to identifying objects in road scenes. The SSD-MobileNet model is retrained via GPU-accelerated transfer learning on the MS COCO dataset, with pre-processing that retains only road-related objects. Our results are promising: the retrained SSD-MobileNet model achieves an F1 score of 0.801 and a Mean Average Precision (mAP) of 65.41 at 71 FPS. A comparative analysis against other one-stage and two-stage detectors demonstrates the model's competitiveness, surpassing several existing works on road object detection. Notably, our model improves mAP while maintaining a higher FPS, making it better suited for ADAS applications.
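The pre-processing step that restricts MS COCO to road-related objects could be sketched as follows. This is a minimal illustration, assuming annotations in the standard COCO instances JSON layout (`images`, `annotations`, `categories`); the exact category list used in the paper is an assumption and may differ.

```python
# Illustrative sketch: filter a COCO-format annotation dict down to
# road-related categories before retraining. The category names below
# are assumptions drawn from MS COCO, not the paper's exact list.
ROAD_CATEGORIES = {"car", "truck", "bus", "motorcycle", "bicycle",
                   "traffic light", "stop sign"}

def filter_road_objects(coco):
    """Return a new COCO-style dict containing only road-related
    categories, their annotations, and the images that still carry
    at least one remaining annotation."""
    keep_cats = [c for c in coco["categories"]
                 if c["name"] in ROAD_CATEGORIES]
    keep_cat_ids = {c["id"] for c in keep_cats}
    keep_anns = [a for a in coco["annotations"]
                 if a["category_id"] in keep_cat_ids]
    keep_img_ids = {a["image_id"] for a in keep_anns}
    keep_imgs = [im for im in coco["images"] if im["id"] in keep_img_ids]
    return {"images": keep_imgs,
            "annotations": keep_anns,
            "categories": keep_cats}
```

The filtered dict can then be saved back to JSON and fed to the usual COCO data-loading pipeline, so only road-scene classes contribute to the new detection head during transfer learning.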