Objective: AI in healthcare faces major challenges in patient data privacy and security, raising ethical concerns. In addition, insufficient technical support and training for healthcare staff may impede effective implementation across facilities. This study aims to (1) analyze the role of Artificial Intelligence (AI) in digital health innovation, (2) identify key challenges related to privacy and data security in AI applications within the healthcare sector, and (3) propose policies that support data privacy and security in AI implementation in the 5.0 era. Methods: A qualitative descriptive approach was applied through a literature review of leading sources on AI applications in digital health, with analysis of key themes: transparency, consent, data security, data minimization, user access, and routine audits. The literature evaluation compares data security practices and regulations across countries. Results: The findings show that AI applications in digital health face major challenges, particularly in protecting data privacy. Key insights include (1) the need for transparency in data usage, (2) limitations in current informed consent practices, (3) the need for stronger data security measures, and (4) the lack of regular audits to assess compliance with privacy policies. These factors highlight the need for stricter policies to ensure user data protection. Conclusion: Supporting AI applications in digital health requires comprehensive privacy and data security policies that address transparency, consent, data security, data minimization, user access, and routine audits. Robust regulation will safeguard users' rights and privacy while fostering long-term, reliable digital health innovation.