The rapid evolution of Artificial Intelligence (AI) has brought significant changes to sectors such as healthcare, finance, and criminal justice, presenting both remarkable opportunities and complex ethical challenges. As AI becomes increasingly embedded in decision-making, concerns about individual rights, social equity, and public trust are growing, especially in high-stakes contexts. These ethical implications underscore the need for robust frameworks that emphasize transparency, accountability, and fairness in AI to mitigate risks such as bias and to ensure responsible use. Despite the increased focus on ethical AI practices, a considerable gap remains in understanding how these frameworks shape societal perceptions of, and behaviors toward, AI. This study addresses that gap by investigating the effects of three ethical AI practices (transparency, accountability, and fairness) on public perceptions and behaviors. It employs a quantitative design, using purposive sampling to recruit AI-knowledgeable participants and analyzing the data with Partial Least Squares Structural Equation Modeling (PLS-SEM), which permits a detailed examination of the relationships between ethical AI practices and their societal impacts. The study also examines the mediated pathways through which these practices influence AI's societal and behavioral impacts, hypothesizing that transparency and accountability foster trust and positive engagement. By developing a framework that aligns ethical AI practices with societal values, the study aims to advance societal trust, public acceptance, and the sustainable social integration of AI technologies. These insights contribute to the growing body of knowledge on responsible AI deployment, supporting ethical alignment across diverse AI applications and promoting trustworthiness in AI-driven systems.
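
To make the analytical pipeline concrete, the sketch below illustrates the kind of mediation structure described above: three ethical-practice constructs predicting a trust mediator, which in turn predicts a behavioral-impact outcome. This is a minimal illustrative sketch, not the study's analysis code. The construct names (`TRA`, `ACC`, `FAI`, `TRU`, `IMP`), the synthetic Likert-scale data, and the use of equal-weight composites in place of the iterative PLS-SEM weighting scheme are all simplifying assumptions for demonstration.

```python
# Minimal composite-based sketch of the hypothesized mediation model.
# Equal-weight composites stand in for the iterative PLS-SEM weighting
# scheme; all construct and indicator names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 300  # hypothetical sample of AI-knowledgeable respondents

def likert_block(latent, loading=0.8):
    """Simulate three 5-point Likert indicators for one latent construct."""
    noise = rng.normal(size=(n, 3))
    raw = loading * latent[:, None] + np.sqrt(1 - loading**2) * noise
    return np.clip(np.round(3 + raw), 1, 5)

# Hypothesized structural relations used to generate the synthetic data:
# transparency/accountability/fairness -> trust -> behavioral impact.
transparency = rng.normal(size=n)
accountability = rng.normal(size=n)
fairness = rng.normal(size=n)
trust = (0.4 * transparency + 0.3 * accountability
         + 0.2 * fairness + rng.normal(scale=0.6, size=n))
impact = 0.5 * trust + 0.2 * fairness + rng.normal(scale=0.6, size=n)

blocks = {"TRA": transparency, "ACC": accountability,
          "FAI": fairness, "TRU": trust, "IMP": impact}
data = pd.DataFrame({f"{name}{i + 1}": col
                     for name, latent in blocks.items()
                     for i, col in enumerate(likert_block(latent).T)})

# Step 1 (measurement model): score each construct as the standardized
# mean of its indicators.
scores = pd.DataFrame({k: data.filter(like=k).mean(axis=1) for k in blocks})
scores = (scores - scores.mean()) / scores.std()

# Step 2 (structural model): estimate the inner paths with least squares.
def paths(y, X):
    beta, *_ = np.linalg.lstsq(X.to_numpy(), y.to_numpy(), rcond=None)
    return dict(zip(X.columns, beta.round(3)))

print("Trust  <-", paths(scores["TRU"], scores[["TRA", "ACC", "FAI"]]))
print("Impact <-", paths(scores["IMP"], scores[["TRU", "TRA", "ACC", "FAI"]]))
```

In this simplified framing, the mediated (indirect) effect of, say, transparency on behavioral impact is the product of the TRA-to-TRU and TRU-to-IMP path coefficients; a full PLS-SEM analysis would additionally bootstrap these products to assess their significance.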