BRÁZDIL, Tomáš, Ezio BARTOCCI, Dimitrios MILIOS, Guido SANGUINETTI and Luca BORTOLUSSI. Policy learning in continuous-time Markov decision processes using Gaussian Processes. Performance Evaluation. 2017, vol. 116, No 1, p. 84-100. ISSN 0166-5316. Available from: https://dx.doi.org/10.1016/j.peva.2017.08.007.
Basic information
Original name Policy learning in continuous-time Markov decision processes using Gaussian Processes
Authors BRÁZDIL, Tomáš (203 Czech Republic, guarantor, belonging to the institution), Ezio BARTOCCI (380 Italy), Dimitrios MILIOS (300 Greece), Guido SANGUINETTI (380 Italy) and Luca BORTOLUSSI (380 Italy).
Edition Performance Evaluation, 2017, ISSN 0166-5316.
Other information
Original language English
Type of outcome Article in a journal
Field of Study 10201 Computer sciences, information science, bioinformatics
Country of publisher Netherlands
Confidentiality degree is not subject to a state or trade secret
Impact factor 1.786
RIV identification code RIV/00216224:14330/17:00107689
Organization unit Faculty of Informatics
DOI http://dx.doi.org/10.1016/j.peva.2017.08.007
UT WoS 000413797400005
Keywords in English continuous-time Markov decision processes; reachability; gradient descent
Tags International impact, Reviewed
Changed by: RNDr. Pavel Šmerk, Ph.D., učo 3880. Changed: 28/4/2020 07:51.
Abstract
Continuous-time Markov decision processes provide a very powerful mathematical framework to solve policy-making problems in a wide range of applications, ranging from the control of populations to cyber-physical systems. The key problem to solve for these models is to efficiently compute an optimal policy to control the system in order to maximise the probability of satisfying a set of temporal logic specifications. Here we introduce a novel method based on statistical model checking and an unbiased estimation of a functional gradient in the space of possible policies. Our approach presents several advantages over classical discretisation-based methods: it does not require a priori knowledge of the model, which can be replaced by a black box, and it does not suffer from state-space explosion. The use of a stochastic momentum-based gradient ascent algorithm to guide our search considerably improves the efficiency of learning policies, with the momentum term accelerating convergence. We demonstrate the strong performance of our approach on two examples of non-linear population models: an epidemiology model with no permanent recovery and a queuing system with non-deterministic choice.
Links
GA15-17564S, research and development project. Name: Game Theory as a Tool for Formal Analysis and Verification of Computer Systems
Investor: Czech Science Foundation