Deep learning has advanced intelligent systems for plant identification; however, distinguishing edible wild plants remains challenging due to limited datasets and the need for contextual information beyond visual classification. This study develops a Convolutional Neural Network (CNN) framework that integrates metadata as a decision support system to enhance food safety and strengthen community-based food security. A dataset of 16,076 images across 34 classes of edible wild plants was collected and enriched with metadata containing plant descriptions, consumption status, and nutritional values. The dataset was split into 75% training, 20% validation, and 5% testing subsets to ensure reliable evaluation. The proposed solution employs InceptionV3 with transfer learning as the primary model, chosen for its ability to capture complex visual features from limited data, while MobileNetV3-Large serves as a lightweight comparative architecture. Results show that InceptionV3 achieved superior performance, with a test accuracy of 0.87 and an F1-score of 0.88, whereas MobileNetV3-Large reached a test accuracy of only 0.03, indicating a failure to generalize. This contrast highlights the importance of selecting architectures with sufficient depth for domains characterized by high visual variability. Metadata integration strengthened the system’s role as a decision support tool by providing contextual information such as edibility status and nutritional content. The novelty of this research lies in combining CNN-based classification with metadata integration, transforming the classifier into a practical framework for safe consumption decisions. A key limitation is that the dataset contains only edible plants. Future work should incorporate non-edible classes, evaluate performance under real-world conditions, and explore advanced architectures and explainable AI techniques to improve robustness, transparency, and accessibility.