Green, Christopher W
Unknown Affiliation

Published: 2 Documents
Articles

Journal: Cybersecurity and Innovative Technology Journal

Considerations for the Safety Analysis of AI-Enabled Systems
Green, Christopher W
Cybersecurity and Innovative Technology Journal Vol 3, No 2 (2025)
Publisher : Gemilang Maju Publikasi Ilmiah (GMPI)

DOI: 10.53889/citj.v3i2.670

Abstract

This study explored the applicability of hazard analysis techniques to Artificial Intelligence/Machine Learning (AI/ML)-enabled systems, a growing area of concern in safety-critical domains. It evaluated 127 hazard analysis techniques described in the System Safety Society’s System Safety Analysis Handbook (1997) for their relevance to the unique challenges posed by AI-enabled systems. A qualitative, criteria-based assessment framework was employed to systematically analyze each technique against key AI-specific considerations, including complexity management, human-AI interaction, dynamic and adaptive behavior, software-centric focus, probabilistic and uncertainty handling, and iterative development compatibility. The evaluation process involved defining criteria that address the distinctive characteristics of AI/ML systems, assessing each method's applicability, and ranking techniques by their alignment with AI-related challenges. Findings indicate that Fault Tree Analysis (FTA) and Human Reliability Analysis (HRA) are highly relevant for performing safety analysis on AI-enabled systems, while other techniques, such as What-If Analysis, require adaptation to address emergent behaviors. The study provides a framework for selecting and tailoring hazard analysis methods for AI-enabled systems, contributing to the development of robust safety assurance practices in an increasingly intelligent and autonomous era.
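
As a rough illustration of the criteria-based ranking the abstract describes, the sketch below scores a few techniques against the six AI-specific criteria and sorts them by total. The criteria names come from the abstract, but the scoring scale and numeric values are invented placeholders, not results from the study.

# Illustrative sketch only: the 0-2 scores below are hypothetical placeholders,
# not values reported in the paper.
CRITERIA = [
    "complexity_management",
    "human_ai_interaction",
    "dynamic_adaptive_behavior",
    "software_centric_focus",
    "probabilistic_uncertainty_handling",
    "iterative_development_compatibility",
]

# One 0-2 relevance score per criterion: 0 = not applicable,
# 1 = applicable with adaptation, 2 = directly applicable.
assessments = {
    "Fault Tree Analysis (FTA)":        [2, 1, 1, 2, 2, 1],
    "Human Reliability Analysis (HRA)": [1, 2, 1, 1, 1, 1],
    "What-If Analysis":                 [1, 1, 0, 1, 0, 2],
}

def rank(assessments):
    # Sum the per-criterion scores and sort techniques from most to least aligned.
    totals = {name: sum(scores) for name, scores in assessments.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for technique, total in rank(assessments):
    print(f"{technique}: {total}/{2 * len(CRITERIA)}")

In the paper's qualitative framework, each assessment would carry a documented rationale rather than a bare number; the snippet only illustrates the mechanics of aggregating criterion-level judgments into a ranking.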
System Safety Preliminary Hazard Analysis (PHA) Using Generative Artificial Intelligence
Green, Christopher W
Cybersecurity and Innovative Technology Journal Vol 3, No 2 (2025)
Publisher : Gemilang Maju Publikasi Ilmiah (GMPI)

DOI: 10.53889/citj.v3i2.671

Abstract

This study investigated the capability of ChatGPT, an AI-powered generative language model, to perform hazard analysis for complex systems, using the ACME Missile System as a case study. Hazard analyses generated by ChatGPT were compared to those detailed in Clifton Ericson's 2005 publication, Hazard Analysis Techniques for System Safety, focusing on adherence to MIL-STD-882E methodologies. The research addressed general questions regarding the strengths and limitations of ChatGPT in identifying hazards, assessing risks, and proposing mitigation strategies. Through a structured evaluation, the study examined the completeness, accuracy, and alignment of ChatGPT-generated analyses with traditional techniques, identifying areas of strength, such as efficiency and innovative mitigation suggestions, alongside gaps in contextual understanding and methodological consistency. Findings highlight the potential of ChatGPT as a supplementary tool for initial hazard identification while emphasizing the importance of expert validation to ensure reliability in safety-critical applications. This research contributes to understanding AI’s role in system safety engineering and its integration into existing hazard analysis frameworks.
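
As a loose sketch of the workflow the abstract describes, the code below assembles a MIL-STD-882E-flavored PHA prompt and runs a crude completeness check against a reference hazard list. The prompt wording, field names, and the get_completion helper are hypothetical placeholders (the study's actual prompts and evaluation procedure are not reproduced here); get_completion stands in for whichever generative-model client an analyst would supply.

from typing import Callable, List

# Illustrative PHA prompt skeleton; not the prompt used in the study.
PHA_PROMPT_TEMPLATE = (
    "Perform a Preliminary Hazard Analysis (PHA) of the system described below, "
    "following MIL-STD-882E conventions. For each hazard, provide: hazard "
    "description, causal factors, mishap effects, severity category "
    "(Catastrophic/Critical/Marginal/Negligible), probability level (A-E), and "
    "recommended mitigations.\n\nSystem description:\n{system_description}"
)

def run_pha(system_description: str, get_completion: Callable[[str], str]) -> str:
    # get_completion is a placeholder for a call to a generative model.
    return get_completion(PHA_PROMPT_TEMPLATE.format(system_description=system_description))

def coverage(generated_analysis: str, reference_hazards: List[str]) -> float:
    # Crude completeness check: the fraction of reference hazards mentioned
    # (by keyword) in the generated analysis.
    text = generated_analysis.lower()
    hits = sum(1 for hazard in reference_hazards if hazard.lower() in text)
    return hits / len(reference_hazards) if reference_hazards else 0.0

A keyword-overlap check like this only flags possible omissions; judging whether the generated severity and probability assignments actually follow MIL-STD-882E still requires the expert validation the abstract emphasizes.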