The development of digital image processing and machine learning enables automated, objective plant phenotyping, reducing reliance on manual observation, which is time-consuming and subjective. This study classifies Arabidopsis thaliana leaf conditions into three classes, namely Healthy, Senescent, and Anthocyanin-Rich, using a Convolutional Neural Network (CNN) trained on top-view images from the public Quantitative Plant and Zenodo datasets. A total of 1,500 images were used, covering variation in leaf color, pigmentation level, and visual condition. The images passed through several preprocessing stages: resizing, pixel normalization, data augmentation, and stratified dataset splitting to maintain class balance. A custom CNN model was developed and trained to extract visual features from the leaf images automatically, and its performance was evaluated with accuracy, a confusion matrix, and per-class precision, recall, and F1-score. Experimental results show an overall accuracy of 82%, with the best performance in the Healthy and Senescent classes. The Anthocyanin-Rich class, however, still exhibited classification errors due to its visual similarity to the other classes. These findings demonstrate the potential of CNN-based approaches to support automated plant phenotyping, although further work is needed to improve model generalization and the classification of visually similar classes.
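Two of the preprocessing stages mentioned above, pixel normalization and stratified dataset splitting, can be sketched in NumPy as follows. This is an illustrative sketch only: the function names, the 20% validation fraction, the image size, and the per-class counts are assumptions for demonstration, not details reported by the study.

```python
import numpy as np

def stratified_split(labels, val_frac=0.2, seed=0):
    """Split indices per class so each split keeps the class balance.

    val_frac=0.2 is an assumed split ratio, not one taken from the paper.
    Returns (train_indices, val_indices) as NumPy arrays.
    """
    rng = np.random.default_rng(seed)
    train_idx, val_idx = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)   # positions of this class
        rng.shuffle(idx)                      # shuffle within the class
        n_val = int(round(len(idx) * val_frac))
        val_idx.extend(idx[:n_val])
        train_idx.extend(idx[n_val:])
    return np.array(train_idx), np.array(val_idx)

def normalize_pixels(batch):
    """Scale 8-bit pixel values into [0, 1], a common normalization step."""
    return batch.astype(np.float32) / 255.0

# Usage with a dummy balanced dataset of 1,500 labels (500 per class,
# mirroring the three classes in the study; the balance is assumed):
labels = np.repeat(np.arange(3), 500)
train_idx, val_idx = stratified_split(labels)
```

Because the split is performed class by class, each class contributes the same fraction of its samples to the validation set, so the class ratios in both splits match the full dataset.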
Copyright © 2026