Retrieval-Augmented Generation (RAG) has emerged as a prominent research direction for advancing large language models (LLMs) by incorporating external information sources into the response generation process. As LLM-based systems are increasingly deployed in information-sensitive domains such as healthcare, education, and law, the demand for responses that are not only fluent but also verifiable and context-aware has grown. This study conducts a systematic literature review (SLR) of 100 recent publications to examine methodological approaches, application domains, technical challenges, and research contributions related to RAG. The review draws on studies indexed in major academic databases, including IEEE, ACM, and Springer, and applies structured inclusion and exclusion criteria to ensure analytical rigor. The findings reveal a strong emphasis on architectural optimization, particularly the interaction between retrieval and generation components, alongside widespread adoption in domain-specific contexts. Persistent challenges identified across the literature include limited retriever effectiveness, system integration complexity, and the absence of standardized evaluation benchmarks. Overall, this review provides a structured synthesis of current RAG research and highlights directions for future investigation and practical deployment.