This study proposes a hybrid neural network that integrates a multilayer perceptron (MLP) with optimised Sugeno-type fuzzy reasoning for object classification. The system employs a vertically mounted array of ultrasonic sensors spaced 10 cm apart at heights ranging from 80 cm to 180 cm. Each sensor measures the distance to a passing object, yielding eleven readings that together capture the object's vertical distance profile. These readings are processed by an MLP with a single hidden layer of 22 neurones to identify characteristic spatial signatures. A refined similarity-based classification is then performed using an optimised Sugeno-type fuzzy inference system configured with five linguistic terms: Very Low (VL), Low (L), Medium (M), High (H), and Very High (VH). Training and testing were conducted on datasets collected at SLBN-A Citeureup, Cimahi, comprising two object categories: human (visually impaired individuals) and non-human (inanimate objects). The model was trained for 100 epochs with a learning rate of 0.001. Experimental results show accuracy exceeding 90%, with the hybrid model outperforming the conventional MLP by 1.83%. This improvement reduces false positives and helps prevent erroneous obstacle warnings. The integration of fuzzy reasoning also enhances the system's robustness to uncertainty and stabilises decision-making when class boundaries overlap.
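The following is a minimal, illustrative sketch of how such a two-stage pipeline might be assembled, using the figures stated above (11 inputs, 22 hidden neurones, 100 epochs, learning rate 0.001, five linguistic terms). Keras is assumed for the MLP, and the triangular membership parameters, zero-order Sugeno consequents, and placeholder data are hypothetical; they are not taken from the study.

```python
# Illustrative sketch only; membership parameters, consequents, and data are hypothetical.
import numpy as np
import tensorflow as tf

N_SENSORS = 11  # eleven vertical ultrasonic distance readings per object

# --- Stage 1: MLP with a single hidden layer of 22 neurones ----------------
mlp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_SENSORS,)),
    tf.keras.layers.Dense(22, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # human vs. non-human score
])
mlp.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
            loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training data (the real data are on-site distance profiles).
X = np.random.rand(200, N_SENSORS).astype("float32")
y = np.random.randint(0, 2, size=(200, 1)).astype("float32")
mlp.fit(X, y, epochs=100, verbose=0)

# --- Stage 2: zero-order Sugeno-style refinement of the MLP score ----------
def trimf(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Five linguistic terms over the MLP score in [0, 1] (hypothetical parameters).
terms = {"VL": (-0.25, 0.00, 0.25), "L": (0.00, 0.25, 0.50),
         "M":  (0.25, 0.50, 0.75),  "H": (0.50, 0.75, 1.00),
         "VH": (0.75, 1.00, 1.25)}
# Constant (zero-order Sugeno) consequents: confidence that the object is human.
consequents = {"VL": 0.0, "L": 0.25, "M": 0.5, "H": 0.75, "VH": 1.0}

def sugeno_refine(score):
    """Weighted-average defuzzification of the rule firing strengths."""
    w = np.array([trimf(score, *terms[t]) for t in terms])
    z = np.array([consequents[t] for t in terms])
    return float(np.dot(w, z) / (w.sum() + 1e-9))

score = float(mlp.predict(X[:1], verbose=0)[0, 0])
label = "human" if sugeno_refine(score) >= 0.5 else "non-human"
print(label)
```

In this sketch the fuzzy stage smooths the raw MLP score near the decision boundary, which is one plausible way the reported reduction in false positives could arise; the study's actual rule base and membership design may differ.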