This study evaluates the performance of two classification algorithms, Naive Bayes and Support Vector Machine (SVM), in identifying voice commands in financial applications for the blind. The data was preprocessed with tokenization, stemming, and stopword removal, and features were extracted using the TF-IDF method. The models were trained using an 80/20 train/test split and evaluated on accuracy, precision, recall, and F1-score. The test results show that both models achieve very high accuracy: Naive Bayes reached 98.6% and SVM 98.4%. Both show high precision, recall, and F1-score in every voice command category, with the highest values in the "QRIS Payment" category, which achieved a precision and recall of 1.00. Confusion matrix analysis shows that misclassifications were minimal. This study also shows that TF-IDF is an effective feature extraction technique for improving recognition accuracy: by assigning greater weight to relevant words that appear rarely in the dataset, it helps the models focus on the most discriminative information. With these results, both algorithms are proven effective at recognizing voice commands; however, Naive Bayes achieved slightly higher accuracy and is therefore recommended for voice-based applications in digital financial systems. These findings support the development of more inclusive and accessible technology for the visually impaired.
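The evaluation pipeline described above can be sketched as follows. This is a minimal illustration using scikit-learn, assuming TF-IDF features, an 80/20 split, and the two classifiers named in the study; the toy transcripts and category labels are hypothetical placeholders, not the study's actual dataset.

```python
# Sketch of the described pipeline: TF-IDF features, 80/20 train/test split,
# Naive Bayes vs. linear SVM, evaluated by accuracy.
# The texts and labels below are illustrative, not the study's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Toy voice-command transcripts (real data would first undergo tokenization,
# stemming, and stopword removal).
texts = [
    "check my account balance", "show my balance please",
    "pay with qris", "scan the qris code to pay",
    "transfer money to savings", "send a transfer now",
] * 10
labels = [
    "balance", "balance",
    "qris_payment", "qris_payment",
    "transfer", "transfer",
] * 10

# TF-IDF down-weights common words and up-weights rare, discriminative ones.
X = TfidfVectorizer().fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42, stratify=labels
)

for name, model in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

In practice, per-class precision, recall, and F1-score (as reported in the study) can be obtained with `sklearn.metrics.classification_report`, and the confusion matrix with `sklearn.metrics.confusion_matrix`.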