D 2015

Counterexample Explanation by Learning Small Strategies in Markov Decision Processes

BRÁZDIL, Tomáš, Krishnendu CHATTERJEE, Martin CHMELÍK, Andreas FELLNER, Jan KŘETÍNSKÝ et al.

Basic information

Original name

Counterexample Explanation by Learning Small Strategies in Markov Decision Processes

Authors

BRÁZDIL, Tomáš (203 Czech Republic, belonging to the institution), Krishnendu CHATTERJEE (356 India), Martin CHMELÍK (203 Czech Republic), Andreas FELLNER (40 Austria) and Jan KŘETÍNSKÝ (203 Czech Republic, guarantor, belonging to the institution)

Edition

Cham, Computer Aided Verification: 27th International Conference, CAV 2015, pp. 158-177, 20 pp., 2015

Publisher

Springer

Other information

Language

English

Type of outcome

Article in conference proceedings

Field of Study

10201 Computer sciences, information science, bioinformatics

Country of publisher

Germany

Confidentiality degree

Is not subject to state or trade secrecy

Publication form

Printed version ("print")

Impact factor

Impact factor: 0.402 in 2005

RIV identification code

RIV/00216224:14330/15:00080918

Organization unit

Faculty of Informatics

ISBN

978-3-319-21689-8

ISSN

UT WoS

000364182900010

Keywords in English

stochastic systems; verification; machine learning; decision tree

Tags

International impact, Reviewed
Changed: 28/4/2016 14:21, RNDr. Pavel Šmerk, Ph.D.

Abstract

In the original

For deterministic systems, a counterexample to a property can simply be an error trace, whereas counterexamples in probabilistic systems are necessarily more complex. For instance, a set of erroneous traces with a sufficient cumulative probability mass can be used. Since such sets are too large to understand and manipulate, compact representations such as subchains have been considered. In the case of probabilistic systems with non-determinism, the situation is even more complex. While a subchain for a given strategy (or scheduler, resolving the non-determinism) is a straightforward choice, we take a different approach: we focus on the strategy itself, extract the most important decisions it makes, and present a succinct representation of it. The key tools we employ to achieve this are (1) a concept of importance of a state with respect to the strategy, and (2) learning using decision trees. Our approach has three main advantages. Firstly, it exploits the quantitative information on states, stressing the more important decisions. Secondly, it allows greater variability and freedom in representing the strategies. Thirdly, the representation uses a self-explanatory data structure. In summary, our approach produces more succinct and more explainable strategies than, e.g., binary decision diagrams. Finally, our experimental results show that we can extract several rules describing the strategy even for very large systems that do not fit in memory, and based on these rules explain the erroneous behaviour.
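The importance-based pruning described in the abstract can be illustrated with a toy sketch. The MDP, the strategy, and the threshold below are all hypothetical, and importance is approximated here as the probability of visiting a state under the strategy, computed by fixed-point iteration on the induced Markov chain; the paper's full method then feeds the surviving, importance-weighted decisions into decision-tree learning, which is omitted here.

```python
# Hypothetical toy MDP: states 0-2 have decisions, state 3 is absorbing.
# transitions[s][a] = list of (next_state, probability) pairs.
transitions = {
    0: {"a": [(1, 0.9), (2, 0.1)], "b": [(2, 1.0)]},
    1: {"a": [(3, 1.0)], "b": [(0, 1.0)]},
    2: {"a": [(3, 0.1), (0, 0.9)], "b": [(3, 1.0)]},
}
strategy = {0: "a", 1: "a", 2: "b"}  # the strategy to be explained

def importance(target, init=0, iters=100):
    """Probability of ever visiting `target` from `init` under `strategy`,
    via value iteration on the Markov chain induced by the strategy."""
    p = {s: (1.0 if s == target else 0.0) for s in transitions}
    for _ in range(iters):
        for s in transitions:
            if s == target:
                continue  # visiting the target is certain from the target
            # states outside `transitions` (absorbing) contribute 0
            p[s] = sum(pr * p.get(t, 0.0)
                       for t, pr in transitions[s][strategy[s]])
    return p[init]

# Keep only decisions made at sufficiently important states (threshold 0.5
# is illustrative); rarely visited states are dropped from the explanation.
important = {s: a for s, a in strategy.items() if importance(s) >= 0.5}
```

Here state 2 is reached with probability only 0.1, so its decision is pruned and the explanation reduces to the choices at states 0 and 1.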

Links

GBP202/12/G061, research and development project
Name: Centrum excelence - Institut teoretické informatiky (CE-ITI) (Acronym: CE-ITI)
Investor: Czech Science Foundation