This research proposes a semantic-enhanced sentiment analysis framework that integrates dependency parsing, graph attention networks, and prior sentiment knowledge to improve classification accuracy on Chinese online health community texts. Comprehensive experiments on 31,718 online health community comments demonstrate the effectiveness of the proposed approach. The BERT-TBGH model achieves 90.77% accuracy, a substantial improvement of 10.57% and 7.79% over the TextCNN and BiLSTM baselines, respectively. Ablation studies show that incorporating sentiment knowledge contributes a 1.85% accuracy gain, while character-level dependency syntactic information adds a further 1.00%. The dual-channel architecture outperforms single-channel approaches: the combined TextCNN-BiLSTM model improves F1-score by 0.64% and 3.57% over the individual BiLSTM and TextCNN models, respectively. Graph Attention Networks outperform Graph Convolutional Networks for dependency parsing integration, with GAT-based models achieving 0.86% higher accuracy than their GCN counterparts.
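To make the GAT-over-GCN finding concrete, the sketch below shows how a single graph attention head aggregates over a dependency adjacency matrix: unlike a GCN's fixed degree-based normalization, attention coefficients are learned per edge, so informative dependency arcs can be weighted more heavily. This is a minimal NumPy illustration with hypothetical shapes and randomly initialized parameters, not the paper's BERT-TBGH implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(h, adj, W, a):
    """Single-head graph attention (sketch).
    h:   (n, d)  node (character) features
    adj: (n, n)  dependency adjacency, 1 where an arc links two
                 characters (self-loops included)
    W:   (d, d') linear projection
    a:   (2*d',) attention vector"""
    z = h @ W
    n = z.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # e_ij = LeakyReLU(a . [z_i || z_j])
            e = a @ np.concatenate([z[i], z[j]])
            logits[i, j] = e if e > 0 else 0.2 * e
    logits = np.where(adj > 0, logits, -1e9)   # mask non-neighbors
    alpha = softmax(logits, axis=1)            # per-node edge weights
    return alpha @ z                           # attention-weighted sum

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))                # 4 characters, dim 8
# toy dependency graph: self-loops plus arcs between adjacent nodes
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
out = gat_layer(h, adj, rng.standard_normal((8, 8)),
                rng.standard_normal(16))
print(out.shape)  # (4, 8)
```

A GCN layer would instead use fixed coefficients from the normalized adjacency (e.g. D^-1/2 A D^-1/2), which is the structural difference the 0.86% accuracy gap is attributed to.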
Copyright © 2025