Decision Tree Insights Analytics (DTIA): An Explainable AI Framework for Severe Accident Analysis
Time: Fri 2026-01-16, 10:00
Location: FB51, Roslagstullsbacken 21, Stockholm
Language: English
Subject area: Energy Technology, Nuclear Engineering
Doctoral student: Karim Hossny, Kärnvetenskap och kärnteknik
Opponent: Afaque Shams, King Fahd University of Petroleum and Minerals (KFUPM)
Supervisor: Walter Villanueva, Kärnvetenskap och kärnteknik, Nuclear Futures Institute, Bangor University, Bangor, UK; Chong Qi, Kärnvetenskap och kärnteknik
QC 2025-12-12
Abstract
In nuclear reactor safety analysis, an accident is classified as severe once partial core meltdown and material relocation begin. Researchers study these events safely with simulation tools such as ANSYS and MELCOR, which produce vast and complex datasets. In this work, we applied machine learning explainability and interpretability techniques to extract insights from severe accident simulations of a Nordic boiling water reactor (BWR) through five iterative studies. First, we examined the explainability of the decision tree classification algorithm in distinguishing between accident types from the time evolution of the pressure vessel's external temperature. Second, we generalised this model into a more statistically robust and generic form, the open-source Decision Tree Insights Analytics (DTIA) framework (https://github.com/KHossny/DTIA), which combines explainability, interpretability, and statistical robustness. Third, we applied DTIA to high-dimensional MELCOR COR package data for a station blackout combined with a loss-of-coolant accident (SBO + LOCA) in a Nordic BWR, revealing new findings. Fourth, we used DTIA to compare structural variables of the reactor pressure vessel lower head under SBO and SBO + LOCA conditions. Finally, we coupled DTIA with K-Means clustering to remove its dependence on labelled data, uncovering previously overlooked events such as canister melting. We conclude that the patterns machine learning identifies when mapping inputs to outputs can uncover insights that would otherwise be overlooked, particularly in high-dimensional and complex datasets.
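
The general workflow can be illustrated with a minimal sketch in Python using scikit-learn on synthetic data standing in for MELCOR/ANSYS output. The data, feature names, toy labels, and the K-Means coupling below are illustrative assumptions only, not the DTIA implementation; see the linked repository for the actual framework.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-in for time-wise vessel external temperature traces:
# 200 simulated transients x 50 time points (placeholder, not MELCOR output).
X = rng.normal(loc=600.0, scale=20.0, size=(200, 50))
y = (X[:, 30] > 600.0).astype(int)  # toy "accident type" label, for illustration only

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# A shallow tree keeps the learned rules human-readable (the explainability step).
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# The split rules are the candidate insights: which time point and which
# temperature threshold separate the accident types.
feature_names = [f"T_ext(t={i})" for i in range(X.shape[1])]
print(export_text(clf, feature_names=feature_names))

# When labels are unavailable, K-Means can propose cluster labels first
# (the coupling described in the final study); a tree trained on those
# pseudo-labels then explains what distinguishes the discovered clusters.
pseudo_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
clf_unsup = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, pseudo_labels)

Keeping the tree shallow is the design choice that turns a classifier into an analysis tool: the exported split thresholds can be read directly as hypotheses about which variables, and at which values, separate the accident scenarios.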