This study presents a comprehensive theoretical review of the foundations that underpin modern intelligent computing systems, integrating perspectives from statistical learning theory, computational learning theory, optimization theory, information theory, probabilistic modeling, neural computation, and cognitive and bio-inspired approaches. Using a systematic review methodology supported by structured search strings and rigorous data extraction, the study identifies core theoretical constructs, including the VC dimension, PAC learning, sample complexity, entropy, mutual information, Bayesian inference, convergence principles, and universal approximation, that collectively shape the development, capabilities, and limitations of intelligent systems. The analysis reveals how these theories complement one another in addressing challenges of generalization, learnability, optimization efficiency, uncertainty modeling, and biological plausibility. The findings highlight that existing theoretical frameworks provide strong foundations but remain limited in explaining the behavior of the high-dimensional, non-convex, black-box models common in deep learning. The review contributes an integrated conceptual map that clarifies how different theories support robust system design, and it identifies gaps that future research must address, including the scalability of theoretical guarantees, unified frameworks for hybrid systems, and a deeper mathematical understanding of modern neural architectures. Overall, the study offers a coherent synthesis that strengthens theoretical grounding and guides future advances in the construction of reliable, intelligent computing systems.
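As an illustrative piece of background (a standard textbook result, not a finding of this review), the way the VC dimension, sample complexity, and PAC learning interlock can be stated as follows: a hypothesis class with finite VC dimension $d$ is PAC-learnable, and in the realizable setting the number $m$ of i.i.d. training examples sufficient to reach error at most $\epsilon$ with probability at least $1-\delta$ scales as

\[
  % Classical realizable-case PAC sample-complexity bound, quoted as background.
  % Symbols: d = VC dimension, \epsilon = target error, \delta = failure probability.
  m \;=\; O\!\left(\frac{1}{\epsilon}\left(d \,\log\frac{1}{\epsilon} \;+\; \log\frac{1}{\delta}\right)\right).
\]

Bounds of this form are what the abstract refers to as theoretical guarantees whose scalability to high-dimensional, non-convex deep models remains an open gap.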