Visual impairment presents significant challenges in daily navigation and environmental interaction, impacting personal independence and safety. This research addresses these challenges by developing "AllScan," a mobile application designed as a real-time visual assistant for the visually impaired. The system leverages the You Only Look Once version 11 (YOLOv11) architecture, a state-of-the-art object detection framework known for its balance of speed and accuracy, which makes it well suited to on-device deployment. To optimize performance, a comparative study was conducted by fine-tuning two model variants, YOLOv11n and YOLOv11m, on a custom dataset curated from Open Images V7. The dataset comprises 20 common object classes with 1,000 images per class and was used to evaluate the models under three distinct experimental conditions. The application, developed with the Flutter framework, processes a live camera feed, performs on-device inference with a TensorFlow Lite model, and provides auditory feedback through a Text-to-Speech (TTS) engine, enabling users to identify detected objects via real-time sound cues. Experimental results show that the fine-tuned YOLOv11m model, trained without data augmentation, achieved the best performance, with a mean Average Precision of 76.2% at an IoU threshold of 0.5 (mAP50, a measure of general detection accuracy) and 57.8% averaged over IoU thresholds of 0.5 to 0.95 (mAP50-95, a stricter measure of localization quality). The final application offers a robust and efficient solution that can enhance situational awareness and independence for visually impaired individuals in real-world environments.
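For readers unfamiliar with the workflow the abstract summarizes, the sketch below illustrates how a YOLOv11 variant could be fine-tuned and exported to TensorFlow Lite using the Ultralytics API. It is a minimal illustration under stated assumptions, not the authors' training code: the dataset configuration file name, epoch count, and image size are hypothetical, and the no-augmentation condition is approximated by zeroing the standard training augmentation parameters.

```python
# Minimal sketch: fine-tune a YOLOv11 variant and export it for on-device use.
# Assumptions: "allscan_openimages.yaml" is a hypothetical dataset config
# describing the 20-class Open Images V7 subset; epochs and imgsz are illustrative.
from ultralytics import YOLO

# Load a pretrained checkpoint (yolo11n.pt for the nano variant, yolo11m.pt for medium).
model = YOLO("yolo11m.pt")

# Fine-tune with standard augmentations disabled, approximating the
# no-augmentation condition reported in the abstract.
model.train(
    data="allscan_openimages.yaml",  # hypothetical dataset config
    epochs=100,                      # assumed training length
    imgsz=640,
    mosaic=0.0,
    fliplr=0.0,
    hsv_h=0.0,
    hsv_s=0.0,
    hsv_v=0.0,
    translate=0.0,
    scale=0.0,
)

# Validate: the returned metrics object exposes mAP50 and mAP50-95.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)

# Export to TensorFlow Lite so the model can be bundled for on-device inference.
model.export(format="tflite")
```

In a deployment like the one described, the exported .tflite model would be packaged with the Flutter application and run against frames from the live camera feed, with detection labels passed to the TTS engine for auditory feedback.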