Accurate weather prediction is crucial for sectors such as agriculture, transportation, and disaster mitigation. Artificial neural networks (ANNs) have been shown to improve the accuracy of weather forecasts through their ability to capture complex nonlinear patterns in atmospheric data. However, the complexity of these architectures often results in decisions that are non-transparent and difficult for end users to understand. To address this issue, this study examines the effectiveness of the Local Interpretable Model-agnostic Explanations (LIME) method in providing local explanations for weather predictions generated by an ANN. The study uses historical meteorological data and evaluates the interpretability of the predictions for several key weather variables. Experimental results show that LIME can identify the features that most strongly influence the model's decisions and provide human-understandable insights into its prediction logic. These findings reinforce the importance of integrating explainability methods into ANN-based weather prediction systems to enhance user trust and support more informed decision-making.
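As an illustration of the workflow described above, the sketch below applies LIME to a tabular regression model trained on weather-like features. The model, feature names, and data are placeholders (a scikit-learn MLPRegressor on synthetic values), not the study's actual architecture or dataset; only the lime library calls (LimeTabularExplainer, explain_instance) reflect the real API.

```python
# Minimal sketch: explaining a single weather prediction with LIME.
# Assumptions: a scikit-learn MLPRegressor stands in for the paper's ANN, and the
# feature names / synthetic data are placeholders, not the study's dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Placeholder meteorological features (hypothetical names and values).
feature_names = ["temperature", "humidity", "pressure", "wind_speed"]
X_train = rng.normal(size=(500, len(feature_names)))
y_train = X_train @ np.array([0.6, -0.3, 0.1, 0.4]) + rng.normal(scale=0.1, size=500)

# Stand-in neural network regressor (the study's architecture is not specified here).
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# LIME explainer for tabular regression data.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    mode="regression",
)

# Explain one prediction: which features pushed this forecast up or down.
instance = X_train[0]
explanation = explainer.explain_instance(instance, model.predict, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed feature/weight pairs are the kind of local, per-prediction attribution the study evaluates: each weight indicates how much a feature contributed to this particular forecast relative to the perturbed neighborhood LIME samples around it.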