The proliferation of misinformation in political domains, especially across multilingual platforms, presents a major challenge to maintaining public information integrity. Existing models often fail to verify claims effectively when the evidence spans multiple languages and lacks a structured format. To address this issue, this study proposes a novel architecture, Dual-integrated Graph for Multilingual Fact Verification (DiG-MFV), which combines semantic representations from multilingual language models (mBERT, XLM-R, and LaBSE) with two graph-based components: an evidence graph and a semantic fusion graph. These components are processed through a dual-path architecture that integrates the outputs of a text encoder and a graph encoder, enabling deeper semantic alignment and cross-evidence reasoning. The PolitiFact dataset served as the source of claims and evidence, split into 70% for training, 20% for validation, and 10% for testing. Training employed the AdamW optimizer, cross-entropy loss, and regularization techniques, including dropout and early stopping based on the F1-score. The evaluation results show that DiG-MFV with LaBSE achieved an accuracy of 85.80% and an F1-score of 85.70%, outperforming the mBERT and XLM-R variants and proving more effective than the DGMFP baseline model (76.1% accuracy). The model also demonstrated stable convergence during training, indicating its robustness in cross-lingual political fact verification tasks. These findings encourage further exploration of graph-based multilingual fact verification systems.
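To make the dual-path idea concrete, the following is a minimal PyTorch sketch of fusing a text-path embedding with a graph-path embedding before classification. It is not the paper's implementation: the layer types, hidden size (768), mean-pooling readout, concatenation-based fusion, number of classes, and the names SimpleGraphEncoder and DualPathClassifier are all assumptions made for illustration, since the abstract does not specify them.

```python
# Hypothetical sketch only; details not given in the abstract are assumed.
import torch
import torch.nn as nn


class SimpleGraphEncoder(nn.Module):
    """One round of mean-aggregation message passing over an evidence graph
    (a stand-in for the paper's unspecified graph encoder)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalize the adjacency matrix so each node averages its neighbours.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.proj((adj / deg) @ node_feats))
        return h.mean(dim=0)  # graph-level readout by mean pooling


class DualPathClassifier(nn.Module):
    """Fuses the text path (claim embedding) with the graph path by concatenation."""

    def __init__(self, dim: int = 768, num_classes: int = 6):
        super().__init__()
        self.graph_encoder = SimpleGraphEncoder(dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Dropout(0.1),  # dropout regularization, as mentioned in the abstract
            nn.Linear(dim, num_classes),
        )

    def forward(self, claim_emb, node_feats, adj):
        graph_emb = self.graph_encoder(node_feats, adj)
        fused = torch.cat([claim_emb, graph_emb], dim=-1)
        return self.classifier(fused)


# Toy usage: random tensors stand in for multilingual sentence embeddings
# (e.g., from LaBSE) of one claim and five evidence nodes.
model = DualPathClassifier()
claim = torch.randn(768)
evidence = torch.randn(5, 768)
adj = torch.ones(5, 5)  # fully connected evidence graph (illustrative choice)
logits = model(claim, evidence, adj)
loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), torch.tensor([0]))
loss.backward()
torch.optim.AdamW(model.parameters(), lr=2e-5).step()  # AdamW, per the abstract
```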