This article examines the evolving discourse on granting legal personhood to Artificial Intelligence (AI) by analyzing its jurisprudential foundations, global regulatory frameworks, and emerging challenges in liability attribution. As AI systems acquire greater autonomy, opacity, and decision-making independence, traditional human-centered legal structures struggle to assign responsibility for AI-generated harms. Through a qualitative methodological approach involving library research and content analysis, this study evaluates whether limited or functional legal personhood can serve as a viable response to the accountability gaps created by advanced AI systems. The discussion explores key themes, including AI autonomy, black-box decision processes, digital identities in virtual environments, metaverse avatars, and the boundaries of existing tort and contract law. Comparative insights from the European Union, the United States, and India highlight significant divergences in regulatory approach, particularly regarding “electronic personhood,” strict-liability models, and AI-specific safeguards. The findings indicate that, while full personhood is premature, a hybrid framework combining functional personhood, risk-based regulation, and AI-focused accountability mechanisms could enhance legal clarity, promote responsible innovation, and strengthen public trust. This study contributes to the ongoing global effort to conceptualize AI legal personhood within modern socio-digital ecosystems.
Copyright © 2025