Overfitting is one of the most serious challenges in contemporary predictive modelling, particularly as machine learning systems grow in size and complexity. This study examines regularisation-based methods as critical means of improving generalisation and preventing models from memorising noise in the training data. Through a systematic literature review covering research from 2006 to 2025, the study synthesises classical regularisation techniques, including L1/L2 penalties, Elastic Net, dropout, and early stopping, alongside emerging approaches such as probabilistic dropout variants, Bayesian regularisation, adaptive regularisers, and hybrid frameworks. The review highlights the importance of regularisation for improving generalisation performance, enhancing robustness to noisy or small datasets, stabilising optimisation dynamics, and supporting interpretability in high-dimensional settings. It also identifies major gaps in the existing research, including a limited understanding of implicit regularisation, a lack of cross-domain comparative evaluation, and the need for adaptive and automated regularisation strategies. The paper concludes with recommendations and open research directions aimed at strengthening theory, developing diagnostic tools, and guiding practitioners toward effective regularisation configurations across different data regimes. Overall, the paper offers an integrative and holistic treatment of regularisation as a core building block for constructing credible, robust, and generalisable predictive models.
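As a concrete illustration of the classical penalty-based methods surveyed here, the minimal sketch below (not drawn from any of the reviewed studies) compares L2 (Ridge), L1 (Lasso), and Elastic Net regularisation on synthetic data using scikit-learn; the regularisation strengths and mixing ratio are illustrative placeholders, not recommended settings.

```python
# Minimal sketch: comparing classical penalty-based regularisers on synthetic
# regression data. Alpha and l1_ratio values are illustrative only.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional-ish data with additive noise.
X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)

models = {
    "L2 (Ridge)": Ridge(alpha=1.0),
    "L1 (Lasso)": Lasso(alpha=0.1),
    "Elastic Net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}

for name, model in models.items():
    # 5-fold cross-validated R^2 as a rough proxy for generalisation.
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```

In practice, the penalty strength would be tuned (e.g., by cross-validated grid search) rather than fixed, which is one of the configuration questions the review addresses.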