Claim Missing Document
Check
Articles

Found 5 Documents
Search

Balancing Innovation and Privacy: A Red Teaming Approach to Evaluating Phone-Based Large Language Models under AI Privacy Regulations Mangesh Pujari; Anil Kumar Pakina; Anshul Goel
International Journal Science and Technology Vol. 2 No. 3 (2023): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v2i3.1956

Abstract

The rapid deployment of large language models (LLMs) on mobile devices has introduced significant privacy concerns, particularly regarding data collection, user profiling, and compliance with evolving AI regulations such as the GDPR and the AI Act. While these on-device LLMs promise improved latency and user experience, their potential to inadvertently leak sensitive information remains understudied. This paper proposes a red teaming framework to systematically assess the privacy risks of phone-based LLMs, simulating adversarial attacks to identify vulnerabilities in model behavior, data storage, and inference processes. We evaluate popular mobile LLMs under scenarios such as prompt injection, side-channel exploitation, and unintended memorization, measuring their compliance with strict privacy-by-design principles. Our findings reveal critical gaps in current safeguards, including susceptibility to context-aware deanonymization and insufficient data minimization. We further discuss regulatory implications, advocating for adaptive red teaming as a mandatory evaluation step in AI governance. By integrating adversarial testing into the development lifecycle, stakeholders can preemptively align phone-based AI systems with legal and ethical privacy standards while maintaining functional utility.
Efficient TinyML Architectures for On-Device Small Language Models: Privacy-Preserving Inference at the Edge Mangesh Pujari; Anshul Goel; Anil Kumar Pakina
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i3.1958

Abstract

Deploying small language models (SLMs) on ultra-low-power edge devices requires careful optimization to meet strict memory, latency, and energy constraints while preserving privacy. This paper presents a systematic approach to adapting SLMs for Tiny ML, focusing on model compression, hardware-aware quantization, and lightweight privacy mechanisms. We introduce a sparse ternary quantization technique that reduces model size by 5.8× with minimal accuracy loss and an efficient federated fine-tuning method for edge deployment. To address privacy concerns, we implement on-device differential noise injection during text preprocessing, adding negligible computational overhead. Evaluations on constrained devices (Cortex-M7 and ESP32) show our optimized models achieve 92% of the accuracy of full-precision baselines while operating within 256KB RAM and reducing inference latency by 4.3×. The proposed techniques enable new applications for SLMs in always-on edge scenarios where both efficiency and data protection are critical.
AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks Anil Kumar Pakina; Ashwin Sharma; Deepak Kejriwal
International Journal Science and Technology Vol. 4 No. 1 (2025): March: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1960

Abstract

The rapid rise of generative AI has fueled more sophisticated disinformation campaigns, particularly on encrypted messaging platforms like WhatsApp, Signal, and Telegram. While these platforms protect user privacy through end-to-end encryption, they pose significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate undetectable synthetic propaganda, influencing public opinion and destabilizing democratic processes without leaving a trace. This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy. Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors. Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.
Neuro- Symbolic Compliance Architectures: Real-Time Detection of Evolving Financial Crimes Using Hybrid AI Anil Kumar Pakina; Mangesh Pujari
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1961

Abstract

This paper proposes NeuroSym-AML, a new neuro-symbolic AI framework explicitly designed for the real-time detection of evolving financial crimes with a special focus on cross-border transactions. By combining Graph Neural Networks (GNNs) with interpretable rule-based reasoning, our system dynamically adapts to emerging money laundering patterns while ensuring strict compliance with FATF/OFAC regulations. In contrast to static rule-based systems, NeuroSym-AML shows better performance-an 83.6% detection accuracy to identify financial criminals, which demonstrated a 31% higher uplift compared with conventional systems-produced by utilizing datasets from 14 million SWIFT transactions. Furthermore, it is continuously learning new criminal typologies, providing decision trails that are available to regulatory audit in real-time. Key innovations include: (1) the continuous self-updating of detection heuristics, (2) automatic natural language processing of the latest regulatory updates, and (3) adversarial robustness against evasion techniques. This hybrid architecture bridges the scalability of machine learning with interpretability of symbolic AI, which can address crucial gaps for financial crime prevention, therefore delivering a solution for satisfying both adaptive fraud detection and transparency in decision-making in high-stakes financial environments.
Adversarial AI in Social Engineering Attacks: Large- Scale Detection and Automated Counter measures Anil Kumar Pakina; Deepak Kejriwal; Tejaskumar Dattatray Pujari
International Journal Science and Technology Vol. 4 No. 1 (2025): March: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1964

Abstract

Social engineering attacks using AI-generated deepfake information leverage rare cybersecurity threat hunting. Conventional phishing detection and fraud prevention systems are failing to catch detection errors due to AI-generated social engineering in email, voice, and video content. To mitigate the increased risk of AI-driven social engineering attacks, a new multi-modal AI defense framework, incorporating Transfer Learning through pre-trained language models, deep fake sound analysis, and behavior-analysis systems capable of pinpointing AI generated social engineering attack, is presented. Benefiting from the utilization of state-of-the-art deepfake voice recognition systems and behavior anomaly detector system (BADS) base for cash withdrawals, the discoverers show that the defense mechanism achieves unprecedented detection accuracy with the least incidence of false positives. This brings about the necessity for fraud prevention augmenting AI measures and provision of automated protection mitigating adversarial social engineering within the enterprise security and financial transaction systems.