The rapid development of artificial intelligence technology has raised concerns about ethical risks, governance, and the need for adequate regulation. This study analyzes the dynamics of public opinion through media coverage of AI risks and regulation. Data were collected from five major international media outlets (Reuters, Bloomberg, The Guardian, CNBC, and The New York Times) between 2022 and 2025. The analysis proceeded in several stages: news article extraction, text cleaning, sentiment classification, and visualization of trends and distributions. Two approaches were used for sentiment analysis: a rule-based lexical model (VADER) and a contextual transformer model (the multilingual BERT model from nlptown). The classification results show that VADER tends to assign neutral labels, whereas BERT is more sensitive to positive and negative nuances. The two models' outputs are broadly correlated over time, but they diverge during specific periods, particularly during intense coverage of AI policy formulation or ethical incidents. Temporal visualizations show spikes in negative sentiment coinciding with the enactment of AI regulations in several countries. The study concludes that a multi-model approach captures a broader spectrum of sentiment. Limitations include the narrow set of media outlets, potential data bias, and the models' limited grasp of domain-specific context. Recommendations for further research include expanding the data sources, using models trained specifically on the AI policy domain, and integrating entity analysis to identify the dominant actors in public discourse.
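The cross-model comparison described above can be sketched as follows. This is a minimal, self-contained illustration, not the study's code: the article labels are hypothetical placeholders, and the collapse of BERT's 1-5 star output to three classes (1-2 negative, 3 neutral, 4-5 positive) is one plausible way to align the two label spaces, not necessarily the mapping the authors used.

```python
# Sketch of comparing a rule-based model's labels (VADER-style 3-class)
# against a contextual model's output (nlptown-style 1-5 star ratings).
# All data below is hypothetical, for illustration only.
from collections import Counter

vader_labels = ["neutral", "neutral", "negative", "positive", "neutral"]
bert_stars = [3, 2, 1, 5, 2]

def stars_to_label(stars: int) -> str:
    """Collapse a 1-5 star rating to a 3-class sentiment label
    (assumed mapping: 1-2 negative, 3 neutral, 4-5 positive)."""
    if stars <= 2:
        return "negative"
    if stars == 3:
        return "neutral"
    return "positive"

bert_labels = [stars_to_label(s) for s in bert_stars]

# Simple per-article agreement rate between the two models.
agreement = sum(a == b for a, b in zip(vader_labels, bert_labels)) / len(vader_labels)

# Label distributions illustrate the abstract's observation that VADER
# skews neutral while BERT spreads mass toward positive/negative.
print(Counter(vader_labels))
print(Counter(bert_labels))
print(f"agreement: {agreement:.2f}")
```

In practice the labels would come from `vaderSentiment`'s `SentimentIntensityAnalyzer` and a Hugging Face `pipeline` loading the nlptown model; this sketch isolates only the label-alignment and agreement step, which is where the divergence the abstract reports becomes measurable.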
Copyright © 2025