The rapid diffusion of generative artificial intelligence (AI) writing tools such as ChatGPT, Grammarly, and related systems has intensified debates in higher education about their pedagogical value, risks, and long-term consequences for academic writing. This study reports a systematic review of empirical and review studies published between January 2023 and October 2025 that examine how AI writing tools and closely related AI applications influence writing and writing-related outcomes in higher-education settings. Following PRISMA 2020 guidelines, database searches identified 1,032 records; after deduplication and title-and-abstract screening, 213 full texts were assessed and 102 studies met the inclusion criteria. From these, a focal corpus of 40 articles that most directly addressed generative AI tools, automated written feedback, or academic writing in higher education was subjected to in-depth coding and thematic synthesis. Across the writing-focused primary studies, AI-based feedback and generative tools were frequently associated with improvements in surface-level aspects of writing such as grammatical accuracy, cohesion, and fluency, while the broader corpus highlighted perceived benefits for efficiency, personalization, and formative support. At the same time, many studies reported concerns about overreliance on AI, reduced metacognitive engagement, threats to academic integrity, and gaps in institutional governance. Owing to substantial heterogeneity in tools, study designs, and outcome measures, the review does not compute pooled effect sizes; it instead offers a narrative, thematically structured synthesis. Key limitations include the short publication window, the predominance of review-type studies, and the concentration of research in particular disciplines and regions. Overall, the findings suggest that generative AI writing tools function most productively as supports within guided, reflective pedagogy rather than as stand-alone replacements for human writing instruction.