Articles

Found 4 Documents
Adversarial AI: Threats, Defenses, and the Role of Explainability in Building Trustworthy Systems Deepak Kejriwal; Tejaskumar Dattatray Pujari
International Journal Science and Technology Vol. 2 No. 2 (2023): July
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v2i2.1955

Abstract

Artificial Intelligence has enabled major advances across industry. Adversarial AI, however, poses a serious challenge: by exploiting vulnerabilities in machine learning models, attackers can breach their security and cause them to fail. Adversarial attacks such as evasion, poisoning, and model inversion demonstrate how fragile AI systems are and underline the urgent need for robust defenses. Several defense mechanisms have been proposed, from adversarial training to defensive distillation and certified defenses, yet they remain vulnerable to sophisticated attacks. In parallel, explainable artificial intelligence (XAI) has emerged as a significant component of AI security, as interpretability and transparency can improve threat detection and user trust. This work reviews the literature on adversarial AI, surveys current developments in adversarial defenses, and examines the role XAI plays in mitigating adversarial threats. The paper then presents an integrated framework that combines defensive techniques with explainability to build resilient, transparent, and trustworthy AI systems.
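
The adversarial training mentioned above can be illustrated with a minimal sketch (not the paper's implementation) in which each training batch is augmented with FGSM-perturbed copies of the inputs; the model, loss function, and epsilon below are illustrative assumptions:

    # Minimal sketch of adversarial training with FGSM perturbations (PyTorch).
    import torch

    def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
        """Craft an FGSM example: x + epsilon * sign of the input gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.03):
        """One training step on a 50/50 mix of clean and adversarial inputs."""
        model.train()
        x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
        optimizer.zero_grad()
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()
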
AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks Anil Kumar Pakina; Ashwin Sharma; Deepak Kejriwal
International Journal Science and Technology Vol. 4 No. 1 (2025): March
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v4i1.1960

Abstract

The rapid rise of generative AI has fueled more sophisticated disinformation campaigns, particularly on encrypted messaging platforms like WhatsApp, Signal, and Telegram. While these platforms protect user privacy through end-to-end encryption, they pose significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate undetectable synthetic propaganda, influencing public opinion and destabilizing democratic processes without leaving a trace. This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy. Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors. Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.
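
As a rough illustration of the non-content-based approach described above, the sketch below classifies accounts in a small interaction graph with a GCN-style layer over metadata features only; the two-layer architecture, the chosen features, and the toy graph are assumptions for illustration, not the paper's model:

    # Minimal sketch of a GCN-style classifier over a user-interaction graph (PyTorch).
    import torch
    import torch.nn as nn

    class SimpleGCN(nn.Module):
        def __init__(self, in_dim, hidden_dim, n_classes=2):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hidden_dim)
            self.lin2 = nn.Linear(hidden_dim, n_classes)

        def forward(self, x, adj):
            # adj: dense adjacency with self-loops, symmetrically normalized below
            deg = adj.sum(dim=1)
            d_inv_sqrt = deg.pow(-0.5)
            a_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
            h = torch.relu(self.lin1(a_norm @ x))  # aggregate neighbors' metadata features
            return self.lin2(a_norm @ h)           # per-node logits: coordinated vs. authentic

    # Toy usage: 4 accounts, 3 metadata features each (no message content involved).
    x = torch.rand(4, 3)
    adj = torch.tensor([[1., 1., 0., 0.],
                        [1., 1., 1., 0.],
                        [0., 1., 1., 1.],
                        [0., 0., 1., 1.]])
    logits = SimpleGCN(3, 16)(x, adj)
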
Ethical and Responsible AI in the Age of Adversarial Diffusion Models: Challenges, Risks, and Mitigation Strategies Tejaskumar Pujari; Anshul Goel; Deepak Kejriwal
International Journal Science and Technology Vol. 1 No. 3 (2022): November
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v1i3.1963

Abstract

The rapid progress of diffusion models in generative AI has reshaped many fields, particularly image synthesis, video generation, and creative data augmentation. These promising developments, however, raise ethical questions because diffusion-based models are dual-use: misused, they can produce deepfake videos, novel forms of misinformation, and Internet-borne cyber warfare attacks, aggravating societal vulnerabilities. This paper explores and analyzes the ethical risks and adversarial threats of diffusion-based AI technologies. We lay out the basis for responsible AI, the notion of fair, accountable, transparent, and robust (FATR) systems, and discuss efforts underway to mitigate these risks through watermarking, model alignment, and regulatory mechanisms. Drawing on perspectives from ethics, cybersecurity, military policy, and governance, we present a conceptual model that captures the main ethical considerations in the development and deployment of diffusion models. Advancing human-centered values as generative AI grows more powerful will require proactive collaboration among researchers, policymakers, and civil society.
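
Of the mitigations listed above, watermarking is the most directly mechanical; the sketch below shows a deliberately simple least-significant-bit watermark on a generated image, as an illustrative stand-in for the robust, model-level schemes the paper discusses, not their method:

    # Minimal sketch of LSB watermark embedding and verification for images (NumPy).
    import numpy as np

    def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Write watermark bits into the LSB of the first len(bits) pixels."""
        flat = image.astype(np.uint8).flatten()
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
        """Read the watermark bits back from the LSBs."""
        return image.astype(np.uint8).flatten()[:n_bits] & 1

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a generated image
    mark = rng.integers(0, 2, size=128, dtype=np.uint8)          # provenance bit string
    stamped = embed_watermark(img, mark)
    assert np.array_equal(extract_watermark(stamped, 128), mark)
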
Adversarial AI in Social Engineering Attacks: Large-Scale Detection and Automated Countermeasures Anil Kumar Pakina; Deepak Kejriwal; Tejaskumar Dattatray Pujari
International Journal Science and Technology Vol. 4 No. 1 (2025): March
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v4i1.1964

Abstract

Social engineering attacks built on AI-generated deepfake content pose an emerging cybersecurity threat. Conventional phishing detection and fraud prevention systems increasingly fail to catch AI-generated social engineering delivered through email, voice, and video. To mitigate this growing risk, a new multi-modal AI defense framework is presented, combining transfer learning with pre-trained language models, deepfake audio analysis, and behavioral analysis to pinpoint AI-generated social engineering attacks. Using state-of-the-art deepfake voice recognition together with a behavioral anomaly detection system (BADS), the authors show that the defense mechanism achieves high detection accuracy with a low rate of false positives. The results underline the need to augment fraud prevention with AI-driven measures and to provide automated protection against adversarial social engineering within enterprise security and financial transaction systems.
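
As a hedged illustration of the transfer-learning component described above, the sketch below fine-tunes a generic pre-trained language model as a binary social-engineering text classifier; the model name, labels, and toy examples are assumptions, not the framework's actual configuration:

    # Minimal sketch: fine-tuning a pre-trained language model for phishing/social-
    # engineering text classification (Hugging Face Transformers + PyTorch).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)  # 0 = benign, 1 = social engineering

    texts = ["Urgent: verify your account now via this link",
             "Lunch meeting moved to 1pm, see you there"]
    labels = torch.tensor([1, 0])

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
    outputs.loss.backward()
    optimizer.step()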