D 2025

Explainably Safe Reinforcement Learning

RIEDER, Sabine; Stefan PRANGER; Debraj CHAKRABORTY; Jan KŘETÍNSKÝ and Bettina KÖNIGHOFER

Basic information

Original name

Explainably Safe Reinforcement Learning

Authors

RIEDER, Sabine; Stefan PRANGER; Debraj CHAKRABORTY; Jan KŘETÍNSKÝ and Bettina KÖNIGHOFER

Edition

United States of America, The Thirty-Ninth Annual Conference on Neural Information Processing Systems, pp. 1-30, 30 pp., 2025

Publisher

Neural Information Processing Systems Foundation, Inc.

Other information

Language

English

Type of result

Proceedings paper

Field

10201 Computer sciences, information science, bioinformatics

Confidentiality

is not subject to a state or trade secret

Form of publication

electronic version "online"

Marked for transfer to RIV

Yes

Organizational unit

Faculty of Informatics

Keywords in English

Reinforcement Learning; Shielding; Decision Tree; Safe AI; Explainability


Abstract

In the original

Trust in a decision-making system requires both safety guarantees and the ability to interpret and understand its behavior. This is particularly important for learned systems, whose decision-making processes are often highly opaque. Shielding is a prominent model-based technique for enforcing safety in reinforcement learning. However, because shields are automatically synthesized using rigorous formal methods, their decisions are often similarly difficult for humans to interpret. Decision trees have recently become a customary way to represent controllers and policies; however, since shields are inherently non-deterministic, their decision-tree representations grow too large to be explainable in practice. To address this challenge, we propose a novel approach to explainable safe RL that enhances trust by providing human-interpretable explanations of the shield's decisions. Our method represents the shielding policy as a hierarchy of decision trees, offering top-down, case-based explanations. At design time, we use a world model to analyze the safety risks of executing actions in given states. Based on this risk analysis, we construct both the shield and a high-level decision tree that classifies states into risk categories (safe, critical, dangerous, unsafe), providing an initial explanation of why a given situation may be safety-critical. At runtime, we generate localized decision trees that explain which actions are allowed and why others are deemed unsafe. Altogether, our method makes the safety aspect of safe-by-shielding reinforcement learning explainable. Our framework requires no additional information beyond what is already used for shielding, incurs minimal overhead, and can be readily integrated into existing shielded RL pipelines. In our experiments, the decision trees used to compute explanations are several orders of magnitude smaller than the original shield.
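The two-level scheme described in the abstract — a design-time classifier that buckets states into risk categories, plus a runtime, per-state explanation of which actions the shield allows — can be illustrated with a minimal sketch. Everything below is a hypothetical toy (a 1-D corridor with a cliff, an invented `action_risk` world model, and an assumed risk threshold), not the authors' implementation; it only shows the shape of the idea.

```python
# Toy sketch of two-level shield explanations. The world model, the
# 0.34 risk threshold, and the corridor domain are illustrative assumptions.

RISK_CATEGORIES = ("safe", "critical", "dangerous", "unsafe")

def action_risk(state, action):
    """Toy 'world model': probability that taking `action` in `state`
    leads to an unsafe successor. Domain: a 1-D corridor with a cliff at 0."""
    nxt = state + (1 if action == "right" else -1)
    if nxt <= 0:
        return 1.0                       # stepping off the cliff is certainly unsafe
    return max(0.0, (3 - nxt) / 3)       # risk grows as we approach the cliff

def classify_state(state, actions, threshold=0.34):
    """Design-time, high-level view: bucket a state into a risk category
    based on the risks of its best and worst actions."""
    risks = [action_risk(state, a) for a in actions]
    if min(risks) >= 1.0:
        return "unsafe"                  # no safe action remains
    if max(risks) < threshold:
        return "safe"                    # every action is low-risk
    if min(risks) < threshold:
        return "critical"                # safe actions exist, but some are risky
    return "dangerous"                   # all actions carry substantial risk

def explain(state, actions, threshold=0.34):
    """Runtime, localized view: which actions the shield allows, and why
    the others are blocked."""
    return {a: ("allowed" if action_risk(state, a) < threshold
                else f"blocked (risk {action_risk(state, a):.2f})")
            for a in actions}

acts = ("left", "right")
print(classify_state(5, acts))           # far from the cliff
print(classify_state(1, acts))           # right next to the cliff
print(explain(1, acts))
```

The point of the split mirrors the paper's pipeline: the coarse category answers "why is this situation safety-critical at all?", while the localized explanation answers "why is this particular action blocked here?" — both derived from the same risk analysis the shield itself uses.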

Related projects

MUNI/I/1757/2021, internal MU code
Name: MUNI Award in Science and Humanities (Acronym: Křetínský)
Investor: Masaryk University, MUNI Award in Science and Humanities, MASH - MUNI Award in Science and Humanities
101171844, internal MU code
Name: Intelligence-Oriented Verification&Controller Synthesis
Investor: European Union, Intelligence-Oriented Verification&Controller Synthesis, European Research Council (ERC)
101212818, internal MU code
Name: ROBUSTIFYING GENERATIVE AI THROUGH HUMAN-CENTRIC INTEGRATION OF NEURAL AND SYMBOLIC METHODS
Investor: European Union, ROBUSTIFYING GENERATIVE AI THROUGH HUMAN-CENTRIC INTEGRATION OF NEURAL AND SYMBOLIC METHODS, Cluster 4 - Digital, Industry and Space