The adoption of artificial intelligence (AI) in business decision-making has transformed operations but has also introduced critical ethical challenges, particularly around bias and accountability. This study investigates the sources of bias in AI-driven systems and evaluates current accountability frameworks in business contexts. A mixed-methods approach is employed, combining a comprehensive literature review with in-depth interviews with business leaders across the technology, finance, and healthcare sectors. The findings reveal that algorithmic and data biases are prevalent, arising from imbalanced training datasets and opaque algorithmic processes. Existing accountability mechanisms are often insufficient, with responsibility dispersed among developers, managers, and regulators. Practical strategies, such as third-party audits and algorithmic transparency initiatives, are emerging but require further refinement. This study emphasizes the need for robust ethical frameworks, including guidelines such as Fairness, Accountability, Transparency, and Ethics (FATE), to mitigate bias and ensure responsible AI use. Key recommendations include the adoption of transparent AI models, enhanced regulatory oversight, and targeted training for stakeholders on AI ethics. These insights contribute to the ongoing discourse on ethical AI deployment and offer actionable pathways for businesses seeking to navigate the ethical complexities of AI.