The rapid expansion of Generative AI adoption in higher education has not been matched by a sufficient understanding of how security, privacy, and trust shape its use, leaving a research gap concerning how risk perceptions and trust are formed in academic settings. This study examines the effects of security, privacy, and trust on students’ behavioral intention and actual use of Generative AI by extending the UTAUT framework with these constructs. A quantitative survey was administered to 450 students at Bina Nusantara University using purposive convenience sampling, and the data were analyzed with PLS-SEM (SmartPLS 3.0). The results show that Performance Expectancy (β = 0.247; t = 4.355; p < 0.001), Effort Expectancy (β = 0.213; t = 3.597; p < 0.001), and Social Influence (β = 0.186; t = 3.564; p < 0.001) significantly shape Behavioral Intention, while Behavioral Intention strongly predicts Use Behavior (β = 0.368; t = 6.700; p < 0.001). Facilitating Conditions also exert a direct effect on Use Behavior (β = 0.228; t = 5.511; p < 0.001). Among the risk-related variables, Security affects Behavioral Intention (β = 0.150; t = 2.981; p = 0.003) but not actual use, and Privacy is significant for neither dependent variable (p > 0.05). Trust consistently predicts both intention (β = 0.108; p = 0.010) and behavior (β = 0.148; p = 0.002). These findings extend UTAUT by underscoring the central role of trust in Generative AI adoption and offer policy implications for improving data-security transparency and institutional trust-building strategies.