Blind people have difficulty navigating because of their limited sensory perception. In this research, we design a cane that can distinguish between humans, animals, and inanimate objects using a camera. Processing is performed on a Raspberry Pi, with a webcam as the input and a buzzer and a vibration motor as the indicators. Feature extraction is performed by deep learning with the TensorFlow library, and object detection uses the Single Shot MultiBox Detector (SSD) method. Tests were carried out on humans, animals (cats), and inanimate objects (chairs and tables) under indoor and outdoor conditions, yielding an accuracy of 92%, a sensitivity of 83%, and a specificity of 100%.
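The detection-to-alert pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label-to-category mapping, the confidence threshold, and the buzzer/vibration patterns are assumptions chosen for the example; in the real system the labels would come from an SSD detector running under TensorFlow on the Raspberry Pi.

```python
# Hypothetical sketch of the alert logic: an SSD detector returns
# (label, score) pairs, which are grouped into the three object classes
# from the paper (human / animal / inanimate) and mapped to indicator
# actions. All mappings and patterns below are illustrative assumptions.

# COCO-style labels grouped into the paper's three object classes.
CATEGORY_MAP = {
    "person": "human",
    "cat": "animal",
    "chair": "inanimate",
    "dining table": "inanimate",
}

# Assumed indicator patterns: (buzzer_on, vibration_pulses).
ALERT_PATTERNS = {
    "human": (True, 1),
    "animal": (True, 2),
    "inanimate": (False, 3),
}

def alert_for(label: str, score: float, threshold: float = 0.5):
    """Map one SSD detection to an indicator action, or None if ignored."""
    if score < threshold:
        return None  # low-confidence detections are discarded
    category = CATEGORY_MAP.get(label)
    if category is None:
        return None  # label outside the three classes of interest
    return ALERT_PATTERNS[category]

if __name__ == "__main__":
    print(alert_for("person", 0.92))  # confident human detection
    print(alert_for("cat", 0.40))     # below threshold, ignored
```

On the actual device, the returned pattern would drive GPIO pins connected to the buzzer and vibration motor (e.g. via `RPi.GPIO` or `gpiozero`).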