CHATTERJEE, Krishnendu, Petr NOVOTNÝ, Guillermo A. PÉREZ, Jean-François RASKIN and Djordje ŽIKELIĆ. Optimizing Expectation with Guarantees in POMDPs. Online. In Satinder P. Singh and Shaul Markovitch (eds.). Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI). AAAI Press, 2017, pp. 3725–3732.
Basic information
Original name Optimizing Expectation with Guarantees in POMDPs
Authors CHATTERJEE, Krishnendu, Petr NOVOTNÝ, Guillermo A. PÉREZ, Jean-François RASKIN and Djordje ŽIKELIĆ.
Edition Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pp. 3725–3732, 2017.
Publisher AAAI Press
Other information
Result type Article in conference proceedings
Confidentiality not subject to state or trade secret
Form of publication electronic version "online"
WWW URL
Keywords in English Partially-observable Markov decision processes; Discounted payoff; Probabilistic planning; Verification
Tags International impact, Reviewed
Changed by doc. RNDr. Petr Novotný, Ph.D., učo 172743. Changed: 26. 9. 2019 09:44.
Abstract
A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is problematic especially in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the “expectation” and “threshold” approaches and consider a “guaranteed payoff optimization (GPO)” problem for POMDPs, where we are given a threshold t and the objective is to find a policy σ such that a) each possible outcome of σ yields a discounted-sum payoff of at least t, and b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
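As a rough sketch (the notation below is assumed for illustration and is not taken from the record itself), the GPO problem described in the abstract can be stated as a constrained optimization over policies, where γ ∈ (0, 1) is the discount factor and r_i the reward at step i:

```latex
% Hedged sketch of the GPO objective; Sigma_t denotes the set of
% policies whose every possible outcome meets the threshold t.
\[
  \sigma^{*} \in \arg\max_{\sigma \in \Sigma_t}
    \mathbb{E}^{\sigma}\!\left[\sum_{i=0}^{\infty} \gamma^{i} r_i\right],
  \qquad
  \Sigma_t = \left\{ \sigma \;\middle|\;
    \sum_{i=0}^{\infty} \gamma^{i} r_i(\omega) \ge t
    \text{ for every outcome } \omega \text{ of } \sigma \right\}.
\]
```

Condition a) of the abstract corresponds to membership in Σ_t (a worst-case, almost-sure guarantee), while condition b) is the expectation maximization restricted to that set.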