The rapid deployment of large language models (LLMs) on mobile devices has introduced significant privacy concerns, particularly regarding data collection, user profiling, and compliance with evolving data protection and AI regulations such as the GDPR and the EU AI Act. While on-device LLMs promise improved latency and user experience, their potential to inadvertently leak sensitive information remains understudied. This paper proposes a red-teaming framework for systematically assessing the privacy risks of phone-based LLMs, simulating adversarial attacks to identify vulnerabilities in model behavior, data storage, and inference processes. We evaluate popular mobile LLMs under scenarios including prompt injection, side-channel exploitation, and unintended memorization, and measure their adherence to privacy-by-design principles. Our findings reveal critical gaps in current safeguards, including susceptibility to context-aware deanonymization and insufficient data minimization. We further discuss regulatory implications, advocating for adaptive red-teaming as a mandatory evaluation step in AI governance. By integrating adversarial testing into the development lifecycle, stakeholders can proactively align phone-based AI systems with legal and ethical privacy standards while preserving functional utility.