Artificial intelligence (AI) is progressing toward the Agentic AI paradigm, which involves intelligent systems capable of autonomous, proactive, and goal-focused behavior through adaptive interactions with their environment. This article provides a critical review of the development of Agentic AI, examining its technological foundations, application areas, and the associated technical, ethical, and policy challenges. The review employs a narrative approach, surveying primary literature from the IEEE, Scopus, and ScienceDirect databases for the period 2019–2025, using keywords such as agentic AI, multi-agent systems, human–AI collaboration, and autonomous decision systems. The findings are organized into a three-layer conceptual framework that links core technologies, such as Reinforcement Learning, Multi-Agent Systems, and Natural Language Processing, to various application domains and cross-cutting challenges. The analysis indicates that despite the significant potential of Agentic AI, gaps remain in areas such as agent interoperability, autonomy assessment metrics, and real-world deployment. This article proposes a structured research agenda aimed at developing Agentic AI that is more transparent, trustworthy, and aligned with human values.
Copyright © 2025