J 2014

Markov Decision Processes with Multiple Long-Run Average Objectives

BRÁZDIL, Tomáš, Václav BROŽEK, Krishnendu CHATTERJEE, Vojtěch FOREJT and Antonín KUČERA

Basic information

Original name

Markov Decision Processes with Multiple Long-Run Average Objectives

Authors

BRÁZDIL, Tomáš (203 Czech Republic, belonging to the institution), Václav BROŽEK (203 Czech Republic, belonging to the institution), Krishnendu CHATTERJEE (356 India), Vojtěch FOREJT (203 Czech Republic, belonging to the institution) and Antonín KUČERA (203 Czech Republic, guarantor, belonging to the institution)

Edition

Logical Methods in Computer Science, Technical University of Braunschweig, 2014, ISSN 1860-5974

Other information

Language

English

Type of outcome

Article in a professional journal

Field of Study

10201 Computer sciences, information science, bioinformatics

Country of publisher

Germany

Confidentiality degree

is not subject to a state or trade secret

Impact factor

0.357

RIV identification code

RIV/00216224:14330/14:00074494

Organization unit

Faculty of Informatics

UT WoS

000333744700001

Keywords in English

Markov decision processes; mean-payoff reward; multi-objective optimisation; formal verification

Tags

International impact, Reviewed
Changed: 17/6/2016 10:25, prof. RNDr. Antonín Kučera, Ph.D.

Abstract

In the original

We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e. randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for epsilon-approximation, for all epsilon>0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of limit-average functions, for all epsilon>0. Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, corrects the flaws, and allows us to obtain improved results.
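For orientation, a minimal sketch of the two objectives in standard mean-payoff notation; the symbols r_i, A_t, v, nu and the use of lim inf below are illustrative assumptions, not quoted from the paper:

\[
\mathrm{lr}_i(\omega) \;=\; \liminf_{n\to\infty} \frac{1}{n}\sum_{t=0}^{n-1} r_i(A_t)
\qquad\text{(limit-average of the $i$-th reward along a run $\omega$)}
\]
\[
\text{Expectation objective: find a strategy } \sigma \text{ such that } \mathbb{E}^{\sigma}\big[\mathrm{lr}_i\big] \ge v_i \text{ for all } i \in \{1,\dots,k\}.
\]
\[
\text{Satisfaction objective: find } \sigma \text{ such that } \mathbb{P}^{\sigma}\big[\{\omega \mid \mathrm{lr}_i(\omega)\ge v_i \text{ for all } i\}\big] \ge \nu.
\]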

Links

GPP202/12/P612, research and development project
Name: Formální verifikace stochastických systémů s reálným časem [Formal verification of stochastic real-time systems] (Acronym: Formální verifikace stochastických systémů s reáln)
Investor: Czech Science Foundation