Explainably Safe Reinforcement Learning
RIEDER, Sabine; Stefan PRANGER; Debraj CHAKRABORTY; Jan KŘETÍNSKÝ; Bettina KÖNIGHOFER et al.
Basic information
Original name
Explainably Safe Reinforcement Learning
Authors
RIEDER, Sabine; Stefan PRANGER; Debraj CHAKRABORTY; Jan KŘETÍNSKÝ; Bettina KÖNIGHOFER et al.
Published
United States of America, The Thirty-ninth Annual Conference on Neural Information Processing Systems, pp. 1-30, 30 pp., 2025
Publisher
Neural Information Processing Systems Foundation, Inc.
Further information
Language
English
Type of outcome
Proceedings paper
Field
10201 Computer sciences, information science, bioinformatics
Confidentiality
is not subject to a state or trade secret
Form of publication
electronic version "online"
Links
Marked for transfer to RIV
Yes
Organizational unit
Faculty of Informatics
ISSN
Keywords in English
Reinforcement Learning; Shielding; Decision Tree; Safe AI; Explainability
Changed: 25 March 2026, 14:58, prof. Dr. rer. nat. RNDr. Mgr. Bc. Jan Křetínský, Ph.D.
Abstract
In the original
Trust in a decision-making system requires both safety guarantees and the ability to interpret and understand its behavior. This is particularly important for learned systems, whose decision-making processes are often highly opaque. Shielding is a prominent model-based technique for enforcing safety in reinforcement learning. However, because shields are automatically synthesized using rigorous formal methods, their decisions are often similarly difficult for humans to interpret. Decision trees have recently become a customary representation for controllers and policies. However, since shields are inherently non-deterministic, their decision tree representations become too large to be explainable in practice. To address this challenge, we propose a novel approach for explainable safe RL that enhances trust by providing human-interpretable explanations of the shield's decisions. Our method represents the shielding policy as a hierarchy of decision trees, offering top-down, case-based explanations. At design time, we use a world model to analyze the safety risks of executing actions in given states. Based on this risk analysis, we construct both the shield and a high-level decision tree that classifies states into risk categories (safe, critical, dangerous, unsafe), providing an initial explanation of why a given situation may be safety-critical. At runtime, we generate localized decision trees that explain which actions are allowed and why others are deemed unsafe. Altogether, our method makes the safety aspect of safe-by-shielding reinforcement learning explainable. Our framework requires no additional information beyond what is already used for shielding, incurs minimal overhead, and can be readily integrated into existing shielded RL pipelines. In our experiments, we compute explanations using decision trees that are several orders of magnitude smaller than the original shield.
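The abstract's pipeline (world-model risk analysis, risk categorization, action filtering by a shield, and per-action explanations) can be illustrated with a minimal sketch. This is not the paper's implementation: the toy 1-D world model, the `risk`/`shield`/`explain` functions, and the way categories are assigned are all illustrative assumptions; the paper's actual method builds these components from a formal world model and represents them as decision trees.

```python
# Illustrative sketch only: a toy 1-D world where position 0 is an unsafe
# "cliff". All names and thresholds are assumptions, not the paper's method.
RISK_LEVELS = ["safe", "critical", "dangerous", "unsafe"]

def successor(state, action):
    # Toy deterministic world model: move left (-1) or right (+1).
    return state + action

def risk(state, horizon=3):
    # Design-time risk analysis: category by distance to the unsafe state,
    # capped at a lookahead horizon (a stand-in for model-based analysis).
    if state <= 0:
        return "unsafe"
    if state == 1:
        return "dangerous"   # one wrong step from the cliff
    if state <= horizon:
        return "critical"    # reachable within the horizon
    return "safe"

def shield(state, actions=(-1, +1)):
    # The shield allows only actions whose successor is not unsafe.
    return [a for a in actions if risk(successor(state, a)) != "unsafe"]

def explain(state, actions=(-1, +1)):
    # Runtime, localized explanation: per-action verdict with a reason,
    # analogous to the paper's localized decision trees.
    allowed = shield(state, actions)
    return {a: ("allowed" if a in allowed
                else f"blocked: leads to {risk(successor(state, a))} state")
            for a in actions}
```

For example, in state 1 the shield blocks the left move because its successor is unsafe, while the right move remains allowed; the explanation dictionary states both verdicts with their reasons.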
Funding
MUNI/I/1757/2021, internal MU code
101171844, internal MU code
101212818, internal MU code