This study presents a systematic literature review of Data Quality Management (DQM) in data warehouse environments, aiming to map key dimensions, processes, and architectural/technological enablers, and to identify research gaps. Searches were conducted across Scopus, ScienceDirect, IEEE Xplore, ACM Digital Library, SpringerLink, and Google Scholar (as a complementary source) for the period 2009–2025, following the PRISMA 2020 guidelines. Of 200 initial records, 133 were excluded during the first screening, 67 underwent further assessment, and 6 studies met the inclusion criteria for in-depth analysis. Thematic synthesis indicates that effective DQM rests on four integrated pillars: (1) standardized quality dimensions and metrics (accuracy, completeness, consistency, timeliness, and traceability); (2) prevention–detection–correction processes embedded along the ETL/ELT pipeline, including consistent SCD policies and handling of late-arriving data; (3) architectural and technological support, such as automated data tests within CI/CD, catalogs/metadata, data lineage, observability, and data contracts; and (4) governance that clarifies roles and accountability (data owners/stewards) together with incident-response procedures. In practice, organizations should start with critical data elements and high-priority consumption paths, translating SLAs/SLIs into executable data quality rules. Limitations include the small number of included studies and their contextual heterogeneity, motivating further work on cross-domain metric standardization, open DQM benchmarks, cost–benefit evaluations of observability and contract enforcement, and the impact of data quality on analytic/AI performance in near-real-time settings.
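To make the recommendation of "translating SLAs/SLIs into executable rules" concrete, the following is a minimal, hypothetical sketch (not drawn from the reviewed studies) of how a freshness SLI and a completeness SLI could be expressed as automated checks suitable for a CI/CD stage. The table name fact_sales, the column names, the thresholds, and the in-memory sqlite3 connection are illustrative assumptions standing in for a real warehouse connection and real SLA clauses.

```python
"""Hypothetical sketch: two SLIs (completeness, freshness) as executable data quality rules."""
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Rule:
    name: str                      # human-readable rule name, tied to an SLA clause
    query: str                     # SQL returning one scalar: the measured SLI
    threshold: float               # value the SLI must satisfy
    higher_is_better: bool = True  # completeness: higher is better; staleness: lower is better


def evaluate(conn: sqlite3.Connection, rule: Rule) -> tuple[bool, float]:
    """Run one rule and report (passed, measured_value)."""
    value = conn.execute(rule.query).fetchone()[0] or 0.0
    passed = value >= rule.threshold if rule.higher_is_better else value <= rule.threshold
    return passed, float(value)


if __name__ == "__main__":
    # Stand-in for a warehouse connection, populated with three illustrative fact rows.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fact_sales (id INTEGER, customer_id INTEGER, loaded_at TEXT)")
    now = datetime.now(timezone.utc).replace(microsecond=0)
    conn.executemany(
        "INSERT INTO fact_sales VALUES (?, ?, ?)",
        [(1, 10, (now - timedelta(hours=2)).isoformat()),
         (2, None, (now - timedelta(hours=5)).isoformat()),
         (3, 12, (now - timedelta(hours=1)).isoformat())],
    )

    rules = [
        # Assumed SLA clause: "customer_id is populated for at least 95% of fact rows."
        Rule(
            name="completeness: fact_sales.customer_id >= 95%",
            query="SELECT 100.0 * SUM(customer_id IS NOT NULL) / COUNT(*) FROM fact_sales",
            threshold=95.0,
        ),
        # Assumed SLA clause: "data is no older than 24 hours" -> SLI: hours since newest load.
        Rule(
            name="freshness: fact_sales loaded within 24h",
            query=(f"SELECT (julianday('{now.isoformat()}') - julianday(MAX(loaded_at))) * 24 "
                   "FROM fact_sales"),
            threshold=24.0,
            higher_is_better=False,
        ),
    ]

    failures = 0
    for rule in rules:
        passed, value = evaluate(conn, rule)
        print(f"{'PASS' if passed else 'FAIL'}  {rule.name}  (measured: {value:.2f})")
        failures += not passed
    raise SystemExit(failures)  # a non-zero exit code fails the CI/CD step
```

In this sketch, the non-zero exit code is what allows an automated pipeline stage to block a deployment or load when a rule derived from the SLA is violated, which is one way the "automated data tests within CI/CD" enabler named above can be operationalized.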