BRÁZDIL, Tomáš, Václav BROŽEK, Krishnendu CHATTERJEE, Vojtěch FOREJT and Antonín KUČERA. Markov Decision Processes with Multiple Long-Run Average Objectives. Logical Methods in Computer Science. Technical University of Braunschweig, 2014, vol. 10, No 1, p. 1-29. ISSN 1860-5974. Available from: https://dx.doi.org/10.2168/LMCS-10(1:13)2014.
Basic information
Original name Markov Decision Processes with Multiple Long-Run Average Objectives
Authors BRÁZDIL, Tomáš (203 Czech Republic, belonging to the institution), Václav BROŽEK (203 Czech Republic, belonging to the institution), Krishnendu CHATTERJEE (356 India), Vojtěch FOREJT (203 Czech Republic, belonging to the institution) and Antonín KUČERA (203 Czech Republic, guarantor, belonging to the institution).
Edition Logical Methods in Computer Science, Technical University of Braunschweig, 2014, ISSN 1860-5974.
Other information
Original language English
Type of outcome Article in a journal
Field of Study 10201 Computer sciences, information science, bioinformatics
Country of publisher Germany
Confidentiality degree is not subject to a state or trade secret
Impact factor 0.357
RIV identification code RIV/00216224:14330/14:00074494
Organization unit Faculty of Informatics
DOI http://dx.doi.org/10.2168/LMCS-10(1:13)2014
UT WoS 000333744700001
Keywords in English Markov decision processes; mean-payoff reward; multi-objective optimisation; formal verification
Tags formela-journal
Tags International impact, Reviewed
Changed by prof. RNDr. Antonín Kučera, Ph.D., učo 2508. Changed: 17/6/2016 10:25.
Abstract
We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e. randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for epsilon-approximation, for all epsilon>0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of limit-average functions, for all epsilon>0. Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, corrects the flaws, and allows us to obtain improved results.
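To make the expectation objective concrete, the following is a minimal toy sketch (not taken from the paper): a hypothetical 2-state MDP with k=2 reward functions and a fixed memoryless randomized strategy. The strategy induces a Markov chain, whose stationary distribution yields the expected limit-average (mean-payoff) vector. All state, action, and probability values here are invented for illustration.

```python
# Hypothetical toy MDP: 2 states, actions "a"/"b", k=2 reward functions.
# transitions[s][act] = distribution over successor states;
# rewards[s][act] = 2-dimensional reward vector.
transitions = {
    0: {"a": [0.5, 0.5], "b": [1.0, 0.0]},
    1: {"a": [0.0, 1.0], "b": [0.9, 0.1]},
}
rewards = {
    0: {"a": (1.0, 0.0), "b": (0.0, 1.0)},
    1: {"a": (0.0, 1.0), "b": (1.0, 0.0)},
}
# Memoryless randomized strategy: probability of each action per state.
strategy = {0: {"a": 0.5, "b": 0.5}, 1: {"a": 0.5, "b": 0.5}}

def induced_chain(transitions, rewards, strategy):
    """Average out the strategy to get the induced Markov chain and rewards."""
    n = len(transitions)
    k = len(next(iter(rewards[0].values())))
    P = [[0.0] * n for _ in range(n)]   # induced transition matrix
    r = [[0.0] * k for _ in range(n)]   # induced per-state reward vectors
    for s in transitions:
        for act, p in strategy[s].items():
            for t in range(n):
                P[s][t] += p * transitions[s][act][t]
            for i in range(k):
                r[s][i] += p * rewards[s][act][i]
    return P, r

def mean_payoff(P, r, iters=10_000):
    """Expected limit-average vector via power iteration to the
    stationary distribution (assumes an ergodic induced chain)."""
    n, k = len(P), len(r[0])
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[s] * P[s][t] for s in range(n)) for t in range(n)]
    return [sum(mu[s] * r[s][i] for s in range(n)) for i in range(k)]

P, r = induced_chain(transitions, rewards, strategy)
print(mean_payoff(P, r))  # ≈ [0.5, 0.5]
```

This only evaluates one fixed strategy; the paper's contribution concerns optimizing over all strategies (where, as the abstract notes, randomization and memory are both necessary in general) and approximating the resulting Pareto curve.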
Links
GPP202/12/P612, research and development project. Name: Formální verifikace stochastických systémů s reálným časem (Formal Verification of Stochastic Real-Time Systems).
Investor: Czech Science Foundation