The development of conversational artificial intelligence (AI) has brought not only technological innovation but also legal challenges. The phenomenon of AI-induced suicide highlights the multifaceted demands for criminal legislation on AI, making in-depth research into the legal status and responsibilities of suicide victims, AI systems, and AI regulatory entities particularly necessary. Through literature analysis and comparative legal analysis, this article aims to provide theoretical support for delineating legal liability in cases of AI incitement to suicide. Specifically, it surveys and analyzes the relevant legal literature both in China and internationally, in order to clarify the legal positions on, and practical challenges of, AI incitement to suicide. On that basis, the article examines whether AI should be recognized as a legal subject and how, in different contexts, suicide victims and AI regulatory entities should share the corresponding responsibility. The findings indicate that AI should not be regarded as an independent legal subject. Rather, based on the criminal-law theories of the victim's self-imposed risk (self-endangerment) and of liability for omission, either the suicide victim or the AI regulatory entity should bear responsibility for the incitement to suicide, depending on the circumstances. By examining the legal liability issues arising from AI incitement to suicide, this article offers a theoretical basis for comprehensive future AI legislation. The exploration of criminal legal regulation further contributes to the construction of a more complete and rational legal framework for AI.