Claim Missing Document
Check
Articles

Found 4 Documents
Search

Balancing Innovation and Privacy: A Red Teaming Approach to Evaluating Phone-Based Large Language Models under AI Privacy Regulations Mangesh Pujari; Anil Kumar Pakina; Anshul Goel
International Journal Science and Technology Vol. 2 No. 3 (2023): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v2i3.1956

Abstract

The rapid deployment of large language models (LLMs) on mobile devices has introduced significant privacy concerns, particularly regarding data collection, user profiling, and compliance with evolving AI regulations such as the GDPR and the AI Act. While these on-device LLMs promise improved latency and user experience, their potential to inadvertently leak sensitive information remains understudied. This paper proposes a red teaming framework to systematically assess the privacy risks of phone-based LLMs, simulating adversarial attacks to identify vulnerabilities in model behavior, data storage, and inference processes. We evaluate popular mobile LLMs under scenarios such as prompt injection, side-channel exploitation, and unintended memorization, measuring their compliance with strict privacy-by-design principles. Our findings reveal critical gaps in current safeguards, including susceptibility to context-aware deanonymization and insufficient data minimization. We further discuss regulatory implications, advocating for adaptive red teaming as a mandatory evaluation step in AI governance. By integrating adversarial testing into the development lifecycle, stakeholders can preemptively align phone-based AI systems with legal and ethical privacy standards while maintaining functional utility.
Enhancing Cybersecurity in Edge AI through Model Distillation and Quantization: A Robust and Efficient Approach Mangesh Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology Vol. 1 No. 3 (2022): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v1i3.1957

Abstract

The rapid proliferation of Edge AI has introduced significant cybersecurity challenges, including adversarial attacks, model theft, and data privacy concerns. Traditional deep learning models deployed on edge devices often suffer from high computational complexity and memory requirements, making them vulnerable to exploitation. This paper explores the integration of model distillation and quantization techniques to enhance the security and efficiency of Edge AI systems. Model distillation reduces model complexity by transferring knowledge from a large, cumbersome model (teacher) to a compact, efficient one (student), thereby improving resilience against adversarial manipulations. Quantization further optimizes the student model by reducing bit precision, minimizing attack surfaces while maintaining performance. We present a comprehensive analysis of how these techniques mitigate cybersecurity threats such as model inversion, membership inference, and evasion attacks. Additionally, we evaluate trade-offs between model accuracy, latency, and robustness in resource-constrained edge environments. Experimental results on benchmark datasets demonstrate that distilled and quantized models achieve comparable accuracy to their full-precision counterparts while significantly reducing vulnerability to cyber threats. Our findings highlight the potential of distillation and quantization as key enablers for secure, lightweight, and high-performance Edge AI deployments.
Efficient TinyML Architectures for On-Device Small Language Models: Privacy-Preserving Inference at the Edge Mangesh Pujari; Anshul Goel; Anil Kumar Pakina
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v3i3.1958

Abstract

Deploying small language models (SLMs) on ultra-low-power edge devices requires careful optimization to meet strict memory, latency, and energy constraints while preserving privacy. This paper presents a systematic approach to adapting SLMs for Tiny ML, focusing on model compression, hardware-aware quantization, and lightweight privacy mechanisms. We introduce a sparse ternary quantization technique that reduces model size by 5.8× with minimal accuracy loss and an efficient federated fine-tuning method for edge deployment. To address privacy concerns, we implement on-device differential noise injection during text preprocessing, adding negligible computational overhead. Evaluations on constrained devices (Cortex-M7 and ESP32) show our optimized models achieve 92% of the accuracy of full-precision baselines while operating within 256KB RAM and reducing inference latency by 4.3×. The proposed techniques enable new applications for SLMs in always-on edge scenarios where both efficiency and data protection are critical.
Neuro- Symbolic Compliance Architectures: Real-Time Detection of Evolving Financial Crimes Using Hybrid AI Anil Kumar Pakina; Mangesh Pujari
International Journal Science and Technology Vol. 3 No. 3 (2024): November: International Journal Science and Technology
Publisher : Asosiasi Dosen Muda Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.56127/ijst.v4i1.1961

Abstract

This paper proposes NeuroSym-AML, a new neuro-symbolic AI framework explicitly designed for the real-time detection of evolving financial crimes with a special focus on cross-border transactions. By combining Graph Neural Networks (GNNs) with interpretable rule-based reasoning, our system dynamically adapts to emerging money laundering patterns while ensuring strict compliance with FATF/OFAC regulations. In contrast to static rule-based systems, NeuroSym-AML shows better performance-an 83.6% detection accuracy to identify financial criminals, which demonstrated a 31% higher uplift compared with conventional systems-produced by utilizing datasets from 14 million SWIFT transactions. Furthermore, it is continuously learning new criminal typologies, providing decision trails that are available to regulatory audit in real-time. Key innovations include: (1) the continuous self-updating of detection heuristics, (2) automatic natural language processing of the latest regulatory updates, and (3) adversarial robustness against evasion techniques. This hybrid architecture bridges the scalability of machine learning with interpretability of symbolic AI, which can address crucial gaps for financial crime prevention, therefore delivering a solution for satisfying both adaptive fraud detection and transparency in decision-making in high-stakes financial environments.