Accurate classification of fertile and infertile eggs is crucial for improving hatching productivity and sorting efficiency. This study proposes MobileFusionV3, a MobileNetV3 architecture enriched with a Convolutional Block Attention Module (CBAM) and a Hybrid Texture Fusion branch (LBP and GLCM) that combines deep and handcrafted texture features for greater robustness to candling illumination variations. A dataset of 1,275 candling images (675 fertile, 600 infertile) underwent preprocessing (resizing, normalization, background enhancement) and realistic data augmentation (rotation, brightness/contrast changes, Gaussian noise, illumination variations). The model was trained with transfer learning and early stopping, and evaluated on accuracy, precision, recall, F1-score, and AUC. On the test set, it achieved 97.2% accuracy, 96.8% precision, 97.5% recall, 97.1% F1-score, and an AUC of 0.99, surpassing prior designs that lacked attention mechanisms and texture fusion. Grad-CAM++ analysis confirms that the model focuses on physiologically relevant regions (the embryonic shadow and the air cell), improving the reliability of its interpretations. These findings indicate that lightweight, efficient designs based on attention and texture fusion can be deployed in smart hatchery systems and on edge/mobile devices while maintaining high accuracy.