Drug–drug interaction (DDI) extraction from biomedical text is central to pharmacovigilance but remains challenging in resource-constrained clinical environments. While large language models have shown promise, their computational cost and deployment complexity limit practical adoption. This study systematically reviews the role of small language models (SLMs) in DDI extraction, examining their effectiveness, efficiency, and deployability. A systematic literature review was conducted following PRISMA guidelines, covering empirical studies published between 2020 and 2025 in PubMed, IEEE Xplore, ACM, and SpringerLink. Eligible studies were analysed with respect to model architectures, datasets, evaluation metrics, and deployment considerations, and a quality assessment was applied to ensure methodological robustness. The synthesis indicates that SLM-based approaches, including CNN, LSTM, and lightweight transformer models, can achieve competitive F1-scores on benchmark DDI datasets while requiring substantially fewer computational resources than large language models. However, performance varies across datasets, and real-world clinical evaluations remain limited. These findings support the feasibility of deploying SLM-based DDI extraction systems in resource-constrained clinical and pharmacovigilance settings and provide a baseline for future benchmarking and comparative research in clinical natural language processing.