Conventional robotic surgical systems, while offering enhanced dexterity and 3D visualization, suffer from a critical limitation: the absence of tactile sensation. This sensory disconnect can lead to inadvertent tissue damage from excessive force application and complicates delicate maneuvers that depend on the surgeon's sense of touch. This research proposes and validates a novel surgical robotic system architecture designed to bridge this sensory gap by integrating high-fidelity 3D visual input with accurate, real-time force feedback from tactile sensors mounted on the end-effector. To rigorously evaluate this innovation, a structured comparative methodology was employed: a cohort of surgeons performed standardized surgical tasks, including suturing and tissue manipulation, on realistic soft-tissue phantoms, and the performance of a conventional (visual-only) system was benchmarked against that of the proposed (visual-haptic) system. A comprehensive dataset was collected, comprising objective metrics such as task completion time, precision deviation from the ideal tool path, and the magnitude of applied forces. Concurrently, subjective evaluations from the participating surgeons were gathered to assess perceived control, cognitive workload, and overall task confidence. The results revealed statistically significant improvements with the visual-haptic system: participants completed tasks faster and more accurately while applying considerably lower and more consistent forces. The analysis underscores that haptic feedback, enabled by advanced sensor fusion, not only restores a crucial 'sense of touch' to the surgeon but also reduces the incidence of excessive force application, potentially minimizing tissue trauma and improving patient recovery. These findings support the hypothesis that haptic-visual integration marks a paradigm shift in robotic surgery, from purely visual guidance to a more intuitive, multi-sensory surgical experience. The study concludes by discussing future challenges and opportunities, including AI-driven partial autonomy, such as virtual safety boundaries and automated sub-tasks, and next-generation sensor technologies to further enhance clinical outcomes.
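To make the feedback pathway concrete, the sketch below illustrates a minimal force-reflection loop of the kind such a visual-haptic architecture implies: contact forces measured at the end-effector are scaled and clamped before being rendered at the surgeon's console. This is only an illustration under stated assumptions; the `TactileSensor` and `HapticConsole` classes, the scaling and clamping constants, and the loop rate are all hypothetical and are not drawn from the study itself.

```python
import time
import random

# Illustrative sketch only: a minimal bilateral force-feedback loop. The class
# and method names below (TactileSensor, HapticConsole, etc.) are invented for
# illustration and do not correspond to the study's system or any real driver.

FORCE_SCALE = 0.8      # assumed master-side scaling of slave-side contact forces
FORCE_LIMIT_N = 5.0    # assumed clamp on reflected force (a crude safety bound)
LOOP_HZ = 1000         # haptic rendering loops commonly run near 1 kHz


class TactileSensor:
    """Stand-in for an end-effector force sensor driver."""

    def read_force_n(self) -> float:
        # Simulated contact force in newtons; a real driver would query hardware.
        return random.uniform(0.0, 8.0)


class HapticConsole:
    """Stand-in for the surgeon-side master device."""

    def render_force_n(self, force: float) -> None:
        # A real device would command its actuators here.
        print(f"rendering {force:.2f} N to master handle")


def feedback_loop(sensor: TactileSensor, console: HapticConsole, cycles: int = 5) -> None:
    period = 1.0 / LOOP_HZ
    for _ in range(cycles):
        raw = sensor.read_force_n()
        # Scale the slave-side force for the master, then clamp it so the
        # reflected force never exceeds a safe bound -- a simple software
        # analogue of the 'virtual safety boundary' idea mentioned above.
        reflected = min(FORCE_SCALE * raw, FORCE_LIMIT_N)
        console.render_force_n(reflected)
        time.sleep(period)


if __name__ == "__main__":
    feedback_loop(TactileSensor(), HapticConsole())
```

The clamp on the reflected force is one plausible way a system of this kind could bound excessive force application in software; the study's actual control law, gains, and safety mechanisms are not specified here.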