This study integrates a large language model (LLM) consultation service into Indonesia’s national Cyber Security Awareness Survey (“Survei Kesadaran Keamanan Siber”/SKKS) to convert survey benchmarking into immediate, personalized cybersecurity remediation, and evaluates its safety, usability, and short-term shifts in security behavior intentions among Generation Z respondents. Using a two-phase, multi-method design, Phase I conducted a model-centric expert evaluation of LLM-generated recommendations across 20 standardized synthetic SKKS profiles, assessing relevance, accuracy, completeness, clarity, and safety. Phase II implemented a single-session within-subject study (N = 104) that measured post-interaction user experience and pre–post changes in security behavior intentions using an adapted Security Behavior Intentions Scale (SeBIS). Expert results showed consistently high ratings across dimensions (all means > 4.0/5), no safety veto triggers, and strong inter-rater reliability (ICC[2,k] = 0.82–1.00). Users reported a positive experience (means ≈ 3.84–3.96/5), sustained engagement, and a significant increase in SeBIS total score (dz = 0.42), with the largest gains in password-management intentions. The novelty lies in embedding LLM-based, profile-driven consultation within a national-scale awareness survey and validating it through both expert human review and behavioral-intention measurement. Beyond cybersecurity, this work contributes to the broader literature on AI-mediated educational systems in safety-critical domains by demonstrating how adaptive dialogue systems can operationalize assessment-to-action loops and support scalable, human-centered personalization.