Network Intrusion Detection Systems: An Evaluation of Black-Box Explainable AI Frameworks (E-XAI)
Abstract
In response to the growing complexity and frequency of network attacks, artificial intelligence (AI)-based intrusion detection systems (IDS) have become essential for identifying and mitigating cyber threats. However, the opaque nature of many high-performance AI models, often referred to as "black-box" models, poses a major challenge to interpretability and trust. This study presents a comprehensive framework for evaluating explainable AI (XAI) techniques for black-box models in the context of network intrusion detection. The proposed E-XAI framework combines global and local interpretability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to elucidate the decision-making processes of different black-box IDS models. The evaluation is carried out using three widely recognized benchmark datasets: SIMARGL 2021, NSL-KDD, and CIC IDS 2017. The framework not only measures model performance but also examines the transparency and reliability of the decisions made by the models, providing actionable insights for security analysts. Among the tested models, a Voting Classifier combining boosted decision trees and bagged random forests emerged as the best performer, achieving 97.9% accuracy on the CIC IDS 2017 dataset, 99.4% on NSL-KDD, and a perfect 100% on SIMARGL 2021. These results highlight the framework's ability to maintain high detection accuracy while offering interpretable and trustworthy outputs through integrated XAI techniques. Overall, this work emphasizes the importance of incorporating explainability into high-performing IDS models, ensuring that security analysts can understand and trust the decisions made by AI systems. The E-XAI framework effectively bridges the gap between performance and interpretability, making it a valuable tool for advancing the reliability and usability of modern network security solutions.
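To make the ensemble described above concrete, the following is a minimal sketch, not the paper's actual pipeline: a soft-voting classifier combining a boosted-tree model with a random forest (itself a bagging ensemble of decision trees), built with scikit-learn on a synthetic stand-in dataset. The dataset, hyperparameters, and model names here are illustrative assumptions; the paper's experiments use SIMARGL 2021, NSL-KDD, and CIC IDS 2017 with their own feature pipelines.

```python
# Illustrative sketch of a voting ensemble like the one the abstract describes.
# Synthetic data stands in for the IDS benchmark datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

# Hypothetical binary traffic-classification data (benign vs. attack).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities of both members.
ids_model = VotingClassifier(
    estimators=[
        ("boosted_trees", GradientBoostingClassifier(random_state=0)),
        ("bagged_forest", RandomForestClassifier(n_estimators=100,
                                                 random_state=0)),
    ],
    voting="soft",
)
ids_model.fit(X_tr, y_tr)
print(f"held-out accuracy: {ids_model.score(X_te, y_te):.3f}")

# Explanations would be attached post hoc to the fitted ensemble, e.g. with
# shap.KernelExplainer(ids_model.predict_proba, ...) for SHAP values or a
# lime.lime_tabular.LimeTabularExplainer for per-prediction LIME explanations.
```

Soft voting is shown because it lets model-agnostic explainers such as SHAP's KernelExplainer and LIME operate on the ensemble's probability output, which is the kind of black-box access the E-XAI framework assumes.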