Dahil, Irlon
Unknown Affiliation

Published: 1 document

Articles (1 found)
Journal : Global Science: Journal of Information Technology and Computer Science

Toward Explainable AI for Cybersecurity: A NIST-Based Knowledge Graph for Transparent Semantic Reasoning
Pratama, Firman; Dahil, Irlon; Dien, Marion Erwin; Lase, Dewantoro
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (March 2026)
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.186

Abstract

Explainable artificial intelligence (XAI) has become a critical requirement in cybersecurity due to the high-stakes nature of security decision-making and the limitations of black-box learning models. This study investigates the construction of an explainable cybersecurity knowledge representation by leveraging standardized terminology from the NIST cybersecurity glossary. The primary problem addressed is the lack of transparent and semantically grounded reasoning mechanisms in existing AI-driven cybersecurity systems, which limits trust, accountability, and analyst adoption. To address this challenge, we propose a NIST-based semantic knowledge graph that embeds explainability directly into its ontology structure and reasoning process. The proposed framework systematically extracts definitional entities and relations from NIST glossary entries to construct a domain ontology and a multi-relational knowledge graph. A rule-based semantic relation extraction method is employed to ensure faithful, interpretable, and reproducible reasoning paths. The resulting knowledge graph contains over 3,000 cybersecurity concepts and approximately 27,000 semantic relations, covering hierarchical, associative, dependency, and mitigation semantics. Experimental evaluation demonstrates that the proposed approach achieves a high level of explainability, with 92.4% of reasoning outcomes being fully traceable and only 1.4% classified as non-traceable. Most explainable reasoning paths are limited to two or three hops, indicating an effective balance between inferential depth and human interpretability. Structural analysis further confirms the presence of meaningful hub concepts that support multi-hop semantic inference. These results confirm that ontology-driven, standard-based knowledge graphs provide a robust foundation for explainable cybersecurity intelligence. The study concludes that explainability-by-design, grounded in authoritative standards, offers a viable and trustworthy alternative to opaque AI models for cybersecurity applications.
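To make the abstract's pipeline concrete, the following is a minimal Python sketch of the two ideas it describes: rule-based extraction of (concept, relation, concept) triples from glossary-style definitions, and traceable multi-hop reasoning over the resulting graph. The mini-glossary, regex rules, and relation labels below are purely illustrative assumptions, not the paper's actual data, rule set, or implementation.

```python
import re
from collections import defaultdict, deque

# Hypothetical mini-glossary in NIST style; entries are invented for illustration.
GLOSSARY = {
    "phishing": "A type of social engineering that uses email to deceive users.",
    "social engineering": "An attack that relies on human interaction.",
    "security awareness training": "Training that mitigates social engineering.",
}

# Rule-based relation patterns: (regex over a definition, relation label).
# Real systems would use far richer lexical patterns than these three.
RULES = [
    (re.compile(r"a type of ([a-z ]+?)(?: that|\.)"), "is_a"),
    (re.compile(r"mitigates ([a-z ]+?)\."), "mitigates"),
    (re.compile(r"relies on ([a-z ]+?)\."), "depends_on"),
]

def extract_relations(glossary):
    """Apply every rule to every definition, yielding (head, relation, tail) triples."""
    triples = []
    for term, definition in glossary.items():
        for pattern, relation in RULES:
            for match in pattern.finditer(definition.lower()):
                triples.append((term, relation, match.group(1).strip()))
    return triples

def reasoning_paths(triples, source, target, max_hops=3):
    """Breadth-first search returning every labeled path of at most max_hops edges.
    Each edge in a path is retained, so every answer is a traceable explanation."""
    adjacency = defaultdict(list)
    for head, rel, tail in triples:
        adjacency[head].append((rel, tail))
    paths, queue = [], deque([(source, [])])
    while queue:
        node, path = queue.popleft()
        if node == target and path:
            paths.append(path)
            continue
        if len(path) < max_hops:
            for rel, nxt in adjacency[node]:
                queue.append((nxt, path + [(node, rel, nxt)]))
    return paths
```

With the toy glossary above, `reasoning_paths(extract_relations(GLOSSARY), "phishing", "human interaction")` returns a single two-hop path (phishing → is_a → social engineering → depends_on → human interaction), mirroring the abstract's observation that most explainable paths need only two or three hops.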