This paper investigates the use of Explainable AI (XAI) techniques, specifically SHAP, LIME, and Integrated Gradients, to improve the transparency and interpretability of NLP-based fake news detection models. The study implements classification models and applies these XAI methods to understand feature importance and model decision-making. Results indicate that XAI enhances model transparency without sacrificing detection accuracy, with each method offering unique explanatory benefits and limitations.
XAI can boost trust in fake news detection by revealing which words sway the model, but choosing the right XAI method (SHAP, LIME, or Integrated Gradients) matters for performance and interpretability.
This article examines the application of Explainable Artificial Intelligence (XAI) in NLP-based fake news detection and compares selected interpretability methods. The work outlines key aspects of disinformation, neural network architectures, and XAI techniques, with a focus on SHAP, LIME, and Integrated Gradients. In the experimental study, classification models were implemented and interpreted using these methods. The results show that XAI enhances model transparency and interpretability while maintaining high detection accuracy. Each method provides distinct explanatory value: SHAP offers detailed local attributions, LIME provides simple and intuitive explanations, and Integrated Gradients performs efficiently with convolutional models. The study also highlights limitations such as computational cost and sensitivity to parameterization. Overall, the findings demonstrate that integrating XAI with NLP is an effective approach to improving the reliability and trustworthiness of fake news detection systems.
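To make the attribution idea concrete, the sketch below approximates Integrated Gradients for a toy fake-news scorer: a logistic model over bag-of-words counts. The model, its weights, and the example words are hypothetical illustrations, not taken from the paper; the sketch only demonstrates the general IG recipe (integrate gradients along a path from a baseline to the input, scale by the input difference).

```python
import math

# Hypothetical "detector": logistic score over bag-of-words counts.
# These weights are invented for illustration; a real system would learn them.
WEIGHTS = {"shocking": 1.5, "sources": -1.2, "claim": 0.4, "official": -0.8}

def score(counts):
    """Probability the text is 'fake' under the toy logistic model."""
    z = sum(WEIGHTS.get(word, 0.0) * c for word, c in counts.items())
    return 1.0 / (1.0 + math.exp(-z))

def integrated_gradients(counts, steps=100):
    """Riemann-sum approximation of Integrated Gradients.

    Baseline is the all-zeros count vector; gradients along the straight-line
    path are estimated with central differences, so this works for any
    black-box score function, not just the toy one above.
    """
    attributions = {}
    for word, count in counts.items():
        grad_sum = 0.0
        for k in range(1, steps + 1):
            alpha = k / steps
            point = {w: alpha * c for w, c in counts.items()}
            eps = 1e-5  # central-difference step for d(score)/d(count[word])
            hi = dict(point); hi[word] += eps
            lo = dict(point); lo[word] -= eps
            grad_sum += (score(hi) - score(lo)) / (2 * eps)
        # (x_i - baseline_i) * average gradient along the path
        attributions[word] = count * grad_sum / steps
    return attributions

counts = {"shocking": 2, "claim": 1, "sources": 1}
attrs = integrated_gradients(counts)
# Completeness axiom: attributions should sum (approximately) to
# score(input) - score(baseline).
```

Positively weighted words receive positive attributions and vice versa, which is the kind of word-level explanation the paper uses to inspect model decisions.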