Vasista, T. G. — Statistical Data-Driven Decision-Making Considering Bias, Fairness, and Transparency in AI
Internet of Things and Artificial Intelligence Journal, Vol. 5 No. 2 (May 2025)
Publisher: Association for Scientific Computing, Electronics, and Engineering (ASCEE)

DOI: 10.31763/iota.v5i2.905

Abstract

Bias, fairness, and transparency are critical issues in Artificial Intelligence (AI). These problems can arise from sources such as biased training data, algorithmic bias, and reinforcement learning bias; even attempts to correct bias can produce unintended consequences. The use of black-box models, along with proprietary and confidentiality constraints, can further obscure decision-making processes, and regulatory challenges complicate the governance of AI systems. Unfairness can arise when an algorithm bases its decisions on inappropriate features or on a biased training data set. A lack of transparency in AI-based computation reduces trust, raises accountability issues, and makes automated decisions difficult to understand or challenge. Addressing bias, fairness, and transparency in AI is therefore crucial to ensuring ethical, responsible, and inclusive technology. Governments, organizations, and researchers must work together to create AI systems that serve humanity without reinforcing discrimination; without such efforts, AI risks deepening inequalities and losing public trust. As an illustration of ingrained bias, India's Prime Minister Narendra Modi noted in a speech in Paris that if you tell an AI image tool to create a man writing with his left hand, the AI will create a man writing with his right hand.
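The abstract's point that unfairness can surface when a model's decisions differ across groups can be made concrete with a simple metric. The sketch below computes the demographic parity difference (the gap in positive-outcome rates between two groups) on entirely synthetic data; the function name and the loan-approval framing are illustrative assumptions, not something defined in the article.

```python
# Minimal sketch of one common fairness metric: demographic parity difference.
# All data below is synthetic and for illustration only.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-outcome rates between two groups.

    A value near 0 means both groups receive positive outcomes at
    similar rates on this one metric; a large gap flags potential bias.
    """
    rates = {}
    for pred, grp in zip(predictions, groups):
        rates.setdefault(grp, []).append(pred)
    rate_a, rate_b = (sum(v) / len(v) for v in rates.values())
    return abs(rate_a - rate_b)

# Synthetic example: 1 = loan approved, 0 = denied.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # gap of roughly 0.6
```

Note that demographic parity is only one of several competing fairness definitions (equalized odds and predictive parity are others), and a low gap on this metric does not by itself establish that a system is fair.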