Large Language Models (LLMs), as part of generative artificial intelligence, have driven significant advances in natural language processing. However, numerous studies indicate that LLMs are not free from bias and potential discrimination, whether originating in training data or in model design. This study analyzes the ethical and social responsibility issues in the development and deployment of LLMs, focusing on bias and its implications for social justice. The method employed is a narrative literature review of academic literature, policy reports, and relevant industry documents. Findings reveal that bias in LLMs is systemic and multidimensional, and that current governance mechanisms are inadequate. This study recommends transparency, ethical audits, multi-stakeholder engagement, and participatory research approaches as mitigation strategies. Hence, future LLM development must prioritize not only technological efficiency but also justice, accountability, and inclusiveness.