Rice productivity depends strongly on early detection of leaf diseases, yet manual identification is often delayed and subjective. This study investigated the use of two lightweight CNN architectures, MobileNetV3-Large and EfficientNet-B0, with transfer learning to classify six rice leaf classes: bacterial leaf blight, brown spot, healthy, leaf blast, leaf scald, and narrow brown spot. The dataset, obtained from Kaggle, consists of 2,628 images with a balanced class distribution and was stratified into training, validation, and test sets at a ratio of 80%:10%:10%. Images were resized to 224×224 pixels, and data augmentation was applied to the training set. Pretrained ImageNet weights were first used as frozen feature extractors, followed by partial fine-tuning of the last 30% of backbone layers; the custom classification layers were trained with the Adam optimizer and an early stopping mechanism. Model performance was evaluated using accuracy, precision, recall, F1-score, and confusion matrices, while computational efficiency was assessed by parameter count and inference speed measured in frames per second (FPS). Under partial fine-tuning, MobileNetV3-Large achieved 95.83% test accuracy and a 95.80% macro F1-score with 3.12 million parameters, while EfficientNet-B0 obtained 93.18% accuracy and a 93.02% macro F1-score with 4.21 million parameters. Both models reached inference speeds above 50 FPS, suggesting their suitability for deployment on resource-constrained devices. Bootstrap analysis indicates that the performance gap between the two models is pronounced in the frozen feature-extraction stage but becomes less conclusive after partial fine-tuning. Overall, MobileNetV3-Large provides the best trade-off between accuracy and efficiency for rice leaf disease classification.