Artificial intelligence has revolutionized computational diagnostics; however, deploying reliable intelligent systems in extreme low-resource environments remains a critical structural challenge in health informatics. Conventional deep learning architectures, such as standard Convolutional Neural Networks (CNNs), are inherently data-hungry, making them prone to severe overfitting and catastrophic generalization failures when applied to rare biological pathologies. To overcome this limitation, we propose an Optimized Contrastive MobileNetV2 architecture embedded within a Few-Shot Learning (FSL) framework. By shaping the latent-space representation with a contrastive loss function, the proposed model learns discriminative metric distances rather than memorizing raw features at scale. To rigorously validate the algorithm, we use a highly constrained dataset of only 120 biological pathogen samples as a cross-domain proxy testbed, simulating the extreme visual complexity and data scarcity typical of rare medical diagnostic scenarios. Extensive episodic evaluations demonstrate that the proposed methodology significantly outperforms conventional baselines. Under a 10-shot learning paradigm, the contrastive architecture achieved a macro-averaged accuracy of 89.2% and an F1-score of 89.3%, remaining statistically robust against stochastic variation (p < 0.001). Furthermore, the integration of depthwise separable convolutions limits model complexity to approximately 3.4 × 10^6 parameters. Crucially, empirical evaluations confirm that the framework requires only 13.5 MB of storage and achieves an inference latency of 12.5 ms per image. This study thus establishes a transferable, computationally efficient model suitable for integration into intelligent clinical decision support systems and remote edge-computing health architectures.
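To make the metric-learning objective concrete, the following is a minimal NumPy sketch of a standard pairwise contrastive loss (in the Hadsell-style margin form). The abstract does not specify the exact loss variant or the embedding backbone, so the function name, the margin value, and the use of Euclidean distance are illustrative assumptions; in the actual system the embeddings `z1`, `z2` would come from the MobileNetV2 encoder, which is omitted here.

```python
import numpy as np

def contrastive_loss(z1, z2, y, margin=1.0):
    """Pairwise contrastive loss (margin form; illustrative assumption).

    y = 1 marks similar pairs, which are pulled together;
    y = 0 marks dissimilar pairs, which are pushed at least
    `margin` apart in the embedding space.
    """
    d = np.linalg.norm(z1 - z2, axis=-1)            # Euclidean distance per pair
    pos = y * d**2                                  # similar pairs: penalize distance
    neg = (1 - y) * np.maximum(0.0, margin - d)**2  # dissimilar pairs: enforce margin
    return np.mean(pos + neg)

# Example: a dissimilar pair already beyond the margin incurs no loss.
z_a = np.array([[0.0, 0.0]])
z_b = np.array([[2.0, 0.0]])
print(contrastive_loss(z_a, z_b, np.array([0])))  # → 0.0
```

Because the loss depends only on pairwise distances, the encoder is driven to organize the latent space by class similarity, which is what allows classification from a handful of support examples per episode.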