The rapid integration of Artificial Intelligence (AI) into creative industries and recruitment processes has delivered significant gains in efficiency, automation, and content generation. This advancement, however, brings serious ethical, legal, and governance challenges, particularly concerning transparency, data accountability, and fairness. In recruitment, AI raises concerns about algorithmic discrimination and bias; in creative sectors, it poses unresolved questions about intellectual property rights and the authorship of AI-generated content. This study critically analyzes the ethical implications of AI use in both the creative and recruitment domains, focusing on legal uncertainty, transparency deficits, and governance limitations. The research employs a qualitative literature-review strategy, drawing on secondary data from ten selected peer-reviewed articles published between 2019 and 2025. Content analysis is applied to extract key themes, including algorithmic fairness, explainability, bias mitigation, legal frameworks, and cultural accountability. The findings suggest that AI systems in both sectors often operate in regulatory grey zones, lacking enforceable mechanisms for transparency, data protection, and human oversight. In recruitment, AI tools may amplify historical patterns of discrimination, while in creative industries, generative models challenge traditional notions of authorship and intellectual ownership. To mitigate these issues, the study proposes a governance framework grounded in six principles: transparency, fairness, accountability, consent, auditability, and inclusivity. These principles offer practical guidance for developers, policymakers, and industry stakeholders seeking to build responsible AI ecosystems that promote innovation while safeguarding human dignity.