Global Science: Journal of Information Technology and Computer Science
Vol. 2 No. 1 (2026): March

Toward Explainable AI for Cybersecurity: A NIST-Based Knowledge Graph for Transparent Semantic Reasoning

Pratama, Firman
Dahil, Irlon
Dien, Marion Erwin
Lase, Dewantoro



Article Info

Publish Date
08 Mar 2026

Abstract

Explainable artificial intelligence (XAI) has become a critical requirement in cybersecurity due to the high-stakes nature of security decision-making and the limitations of black-box learning models. This study investigates the construction of an explainable cybersecurity knowledge representation by leveraging standardized terminology from the NIST cybersecurity glossary. The primary problem addressed is the lack of transparent and semantically grounded reasoning mechanisms in existing AI-driven cybersecurity systems, which limits trust, accountability, and analyst adoption. To address this challenge, we propose a NIST-based semantic knowledge graph that embeds explainability directly into its ontology structure and reasoning process. The proposed framework systematically extracts definitional entities and relations from NIST glossary entries to construct a domain ontology and a multi-relational knowledge graph. A rule-based semantic relation extraction method is employed to ensure faithful, interpretable, and reproducible reasoning paths. The resulting knowledge graph contains over 3,000 cybersecurity concepts and approximately 27,000 semantic relations, covering hierarchical, associative, dependency, and mitigation semantics. Experimental evaluation demonstrates that the proposed approach achieves a high level of explainability, with 92.4% of reasoning outcomes being fully traceable and only 1.4% classified as non-traceable. Most explainable reasoning paths are limited to two or three hops, indicating an effective balance between inferential depth and human interpretability. Structural analysis further confirms the presence of meaningful hub concepts that support multi-hop semantic inference. These results confirm that ontology-driven, standard-based knowledge graphs provide a robust foundation for explainable cybersecurity intelligence. 
The study concludes that explainability-by-design, grounded in authoritative standards, offers a viable and trustworthy alternative to opaque AI models for cybersecurity applications.
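The traceable multi-hop reasoning described above can be sketched as a path search over a multi-relational knowledge graph, where every inference step is an explicit triple. The concepts, relation names, and graph contents below are illustrative assumptions for demonstration only, not the paper's actual NIST-derived data:

```python
from collections import deque

# Toy multi-relational knowledge graph: concept -> [(relation, concept), ...].
# All entries here are illustrative placeholders, not NIST glossary content.
KG = {
    "phishing": [("is_a", "social engineering"),
                 ("mitigated_by", "security awareness training")],
    "social engineering": [("is_a", "attack technique")],
    "security awareness training": [("is_a", "security control")],
}

def reasoning_paths(graph, start, goal, max_hops=3):
    """Breadth-first search for relation paths of at most `max_hops` edges.

    Each returned path is a list of (subject, relation, object) triples,
    so every reasoning outcome is fully traceable step by step."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            paths.append(path)
            continue
        if len(path) >= max_hops:
            continue
        for rel, obj in graph.get(node, []):
            queue.append((obj, path + [(node, rel, obj)]))
    return paths

# Explain why "phishing" is classified as an attack technique (2 hops).
paths = reasoning_paths(KG, "phishing", "attack technique")
for p in paths:
    print(" -> ".join(f"{s} [{r}] {o}" for s, r, o in p))
```

Capping path length at two or three hops, as the evaluation suggests, keeps each explanation short enough for an analyst to verify by hand while still allowing non-trivial inference.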

Copyrights © 2026






Journal Info

Abbrev

GlobalScience

Publisher

International Forum of Researchers and Lecturers

Subject

Computer Science & IT

Description

Global Science: Journal of Information Technology and Computer Science is a journal intended for the publication of scientific articles, published by the International Forum of Researchers and Lecturers. This journal contains studies in the fields of Information Technology and Computer Science, both ...