This study focuses on implementing Optical Character Recognition (OCR) with the Tesseract engine, integrated with bounding box detection, to extract nutritional information from food nutrition labels. The research addresses limited consumer access to and understanding of nutritional data, a factor contributing to health issues such as obesity and related metabolic disorders. Studies indicate that although Indonesian consumers generally have good knowledge of and positive attitudes toward nutrition labels, they rarely read and understand these labels in practice. Additionally, packaged foods consumed outside the home constitute a significant portion of daily caloric intake, which can lead to health complications if not properly managed. With adult obesity in Indonesia rising to concerning levels, this study highlights the importance of making nutritional data accessible. In this work, MobileNetV1 serves as the backbone model for bounding box detection, identifying and isolating label regions to enhance OCR accuracy. Tesseract OCR, with its LSTM-based recognition engine, is applied to recognize sequential text patterns such as the rows of text on nutrition labels. Preprocessing techniques, including grayscale conversion, brightness adjustment, CLAHE (Contrast Limited Adaptive Histogram Equalization), and denoising, are used to improve text clarity and further refine OCR output accuracy. Post-processing applies rule-based and contextual error correction to handle common OCR inaccuracies. Evaluated on 10 different label images, the system achieved a maximum Word Error Rate (WER) of 10% and a Character Error Rate (CER) of 1.6%, demonstrating high accuracy in nutritional information extraction.
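To make the described pipeline concrete, the following is a minimal sketch of the preprocessing, Tesseract OCR, and WER/CER evaluation stages. It assumes OpenCV (cv2) and pytesseract; the function names, the (x, y, w, h) crop-box format, and the parameter values (brightness offset, CLAHE clip limit, denoising strength, page segmentation mode) are illustrative assumptions rather than the authors' actual settings, and the bounding box is assumed to come from the MobileNetV1-based detector mentioned above.

```python
# Sketch of the preprocessing + Tesseract OCR + WER/CER steps described in the abstract.
# Parameter values and the crop-box format are illustrative assumptions.
import cv2
import pytesseract


def preprocess_label(crop):
    """Grayscale -> brightness adjustment -> CLAHE -> denoising."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    bright = cv2.convertScaleAbs(gray, alpha=1.0, beta=30)          # brightness lift (assumed offset)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))     # contrast-limited equalization
    equalized = clahe.apply(bright)
    denoised = cv2.fastNlMeansDenoising(equalized, None, 10)        # non-local means denoising
    return denoised


def ocr_label(image_path, box):
    """Run Tesseract on a detected label region given as (x, y, w, h)."""
    image = cv2.imread(image_path)
    x, y, w, h = box                       # assumed output of the bounding box detector
    crop = image[y:y + h, x:x + w]
    clean = preprocess_label(crop)
    # --psm 6: treat the crop as a single uniform block of text (rows of the label)
    return pytesseract.image_to_string(clean, config="--psm 6")


def levenshtein(ref, hyp):
    """Edit distance, used at word level for WER and character level for CER."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]


def wer(reference, hypothesis):
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / max(len(ref_words), 1)


def cer(reference, hypothesis):
    return levenshtein(reference, hypothesis) / max(len(reference), 1)
```

In this sketch, WER and CER follow the standard definition of edit distance divided by the reference length at the word and character level, respectively; the rule-based and contextual post-processing step described above is not shown.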