Visually impaired individuals often struggle to independently identify Indonesian rupiah denominations because banknotes share similar colors, patterns, and layouts, increasing the risk of errors during cash transactions. Purpose: This study aims to develop an offline, image-based banknote denomination recognition system on Android that provides accessible assistance without relying on internet connectivity. Methodology: A quantitative experimental design was applied using a dataset of 4,340 banknote images covering 14 classes (seven denominations, each with front and back sides). The classifier was built on MobileNetV2 with transfer learning, supported by data augmentation and hyperparameter optimization, and evaluated with validation accuracy and F1-score. The trained model was converted to TensorFlow Lite and integrated into a Flutter-based Android application with text-to-speech output for user assistance. Findings: The proposed model achieved 96.20% validation accuracy with an average F1-score of 0.95, indicating strong performance for lightweight on-device inference. Implications: The system can run in real time on smartphones to support inclusive and safer cash handling for visually impaired users, and it demonstrates the feasibility of offline deep learning for accessible financial technology. Originality: This study provides an end-to-end offline solution for Indonesian rupiah recognition that combines a lightweight deep learning model, on-device deployment, and text-to-speech feedback while distinguishing both sides of multiple denominations, offering practical value beyond approaches that rely on cloud inference or cover only a limited set of classes.
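
As a minimal sketch of the pipeline summarized in the Methodology (not code from the study), the following Python/Keras snippet illustrates MobileNetV2 transfer learning over 14 classes followed by TensorFlow Lite conversion. The folder-per-class dataset layout, 224×224 input size, augmentation transforms, and hyperparameters are illustrative assumptions; the study's actual settings may differ.

```python
# Sketch: MobileNetV2 transfer learning on 14 banknote classes + TFLite export.
# Paths, image size, augmentation, and hyperparameters are assumptions,
# not values reported in the paper.
import tensorflow as tf

NUM_CLASSES = 14          # seven denominations x two sides (from the abstract)
IMG_SIZE = (224, 224)     # assumed MobileNetV2 input size

# Assumed folder-per-class layout, e.g. data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# Simple on-the-fly augmentation (active only during training).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

# ImageNet-pretrained MobileNetV2 backbone, frozen, with a new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Convert the trained model to TensorFlow Lite for offline on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("rupiah_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file can then be bundled with the Flutter application and its predictions announced through text-to-speech, as described in the abstract.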