The development of AI is changing many aspects of our lives, especially in the digital world. Alongside these advances, however, new challenges have arisen, particularly in the form of digital disinformation. Technologies such as deepfakes and voice cloning have made it easy to manipulate a person’s image or voice, often for malicious purposes such as online fraud and the spread of fake news. This study examines the ethical violations that arise from the use of AI, particularly in the context of these technologies, and evaluates the effectiveness of Indonesia's legal framework in combating digital disinformation. Using a qualitative method based on a literature review, the study highlights several ethical concerns: fundamental principles such as responsibility, honesty, and justice are often overlooked in the development and application of AI, leading to unethical behavior in the digital sphere. The study also finds that the legal regulations currently in place in Indonesia are insufficient to protect the public from potential abuses of AI technology. This research emphasizes the need for stronger legal and ethical standards to address growing concerns about AI misuse and to safeguard the public from digital manipulation and disinformation.