Visual food classification and recipe recommendation systems remain underexplored in the context of local culinary traditions. To address this gap, a system was developed using the EfficientNetB1 convolutional neural network (CNN) architecture, integrated with a large language model (LLM) to generate South Sumatran recipes from food images, tailoring suggestions to the classification results. The model was trained via transfer learning on eight food-ingredient classes selected for their prevalence in local cuisine. It achieved a validation accuracy of 98.2% and a test accuracy of 98%, with average precision, recall, and F1-score all exceeding 98%, indicating consistent and reliable performance. The system was deployed as a web-based application, DapoerKito, which allows users to upload food images, receive classification results, and obtain generated recipe suggestions. Recipes are generated on demand, matched to the recognized ingredients, and presented in a clear, structured format. These findings demonstrate the value of combining computer vision and language generation in an AI-based platform that balances usability with cultural relevance. Beyond its technical capabilities, the system contributes to the digital preservation of regional culinary heritage through interactive AI. This CNN–LLM integration offers a novel approach to food AI, with potential extensions toward more diverse ingredients, personalized nutrition, and multilingual support.
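To illustrate the transfer-learning setup described above, the following is a minimal sketch in TensorFlow/Keras: an ImageNet-pretrained EfficientNetB1 backbone is frozen and a small classification head is trained on the eight ingredient classes. The head layers, dropout rate, and optimizer settings are illustrative assumptions, not the exact configuration reported here.

```python
# Minimal sketch of transfer learning with EfficientNetB1 (assumed setup,
# not the authors' exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB1

NUM_CLASSES = 8  # eight food-ingredient classes, per the abstract

# EfficientNetB1 backbone pretrained on ImageNet, without its classifier head.
# 240x240 is the architecture's default input resolution.
base = EfficientNetB1(include_top=False, weights="imagenet",
                      input_shape=(240, 240, 3))
base.trainable = False  # freeze the backbone for feature extraction

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),  # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then proceed on the labeled ingredient dataset, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```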