Other formats:
BibTeX
RIS
LaTeX
@inproceedings{1306807,
  author    = {Brázdil, Tomáš and Chatterjee, Krishnendu and Chmelík, Martin and Fellner, Andreas and Křetínský, Jan},
  editor    = {Kroening, Daniel and Pasareanu, Corina},
  title     = {Counterexample Explanation by Learning Small Strategies in Markov Decision Processes},
  booktitle = {Computer Aided Verification: 27th International Conference, CAV 2015},
  publisher = {Springer},
  address   = {Cham},
  year      = {2015},
  pages     = {158--177},
  isbn      = {978-3-319-21689-8},
  doi       = {10.1007/978-3-319-21690-4_10},
  keywords  = {stochastic systems; verification; machine learning; decision tree},
  language  = {eng},
  howpublished = {print version}
}
TY  - CPAPER
ID  - 1306807
AU  - Brázdil, Tomáš
AU  - Chatterjee, Krishnendu
AU  - Chmelík, Martin
AU  - Fellner, Andreas
AU  - Křetínský, Jan
A2  - Kroening, Daniel
A2  - Pasareanu, Corina
PY  - 2015
TI  - Counterexample Explanation by Learning Small Strategies in Markov Decision Processes
T2  - Computer Aided Verification: 27th International Conference, CAV 2015
PB  - Springer
CY  - Cham
SN  - 9783319216898
SP  - 158
EP  - 177
DO  - 10.1007/978-3-319-21690-4_10
KW  - stochastic systems
KW  - verification
KW  - machine learning
KW  - decision tree
N2  - For deterministic systems, a counterexample to a property can simply be an error trace, whereas counterexamples in probabilistic systems are necessarily more complex. For instance, a set of erroneous traces with a sufficient cumulative probability mass can be used. Since these are too large objects to understand and manipulate, compact representations such as subchains have been considered. In the case of probabilistic systems with non-determinism, the situation is even more complex. While a subchain for a given strategy (or scheduler, resolving non-determinism) is a straightforward choice, we take a different approach. Instead, we focus on the strategy itself, and extract the most important decisions it makes, and present its succinct representation. The key tools we employ to achieve this are (1) introducing a concept of importance of a state w.r.t. the strategy, and (2) learning using decision trees. There are three main consequent advantages of our approach. Firstly, it exploits the quantitative information on states, stressing the more important decisions. Secondly, it leads to a greater variability and degree of freedom in representing the strategies. Thirdly, the representation uses a self-explanatory data structure. In summary, our approach produces more succinct and more explainable strategies, as opposed to e.g. binary decision diagrams. Finally, our experimental results show that we can extract several rules describing the strategy even for very large systems that do not fit in memory, and based on the rules explain the erroneous behaviour.
ER  - 
BRÁZDIL, Tomáš, Krishnendu CHATTERJEE, Martin CHMELÍK, Andreas FELLNER and Jan KŘETÍNSKÝ. Counterexample Explanation by Learning Small Strategies in Markov Decision Processes. In Daniel Kroening, Corina Pasareanu. \textit{Computer Aided Verification: 27th International Conference, CAV 2015}. Cham: Springer, 2015, pp.~158--177. ISBN~978-3-319-21689-8. Available from: https://dx.doi.org/10.1007/978-3-319-21690-4\_{}10.
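The abstract's first key ingredient, the importance of a state with respect to a strategy, can be sketched in a few lines. Everything below is a hypothetical illustration: the toy MDP, state and action names, the Monte Carlo estimator, and the threshold are all assumptions, and the paper's second ingredient (decision-tree learning) is replaced here by a simple importance threshold over the strategy's decisions.

```python
import random

# Toy MDP for illustration only: every state, action, probability,
# and name below is invented, not taken from the paper.
MDP = {
    ("init", "go"):   [("mid", 0.9), ("detour", 0.1)],
    ("mid", "go"):    [("err", 0.8), ("safe", 0.2)],
    ("detour", "go"): [("safe", 1.0)],
}
STRATEGY = {"init": "go", "mid": "go", "detour": "go"}  # strategy to explain
TARGET = "err"  # the erroneous behaviour we want explained

def simulate(start, rng, max_steps=20):
    """One run under STRATEGY; returns the set of states visited."""
    s, visited = start, {start}
    for _ in range(max_steps):
        if s not in STRATEGY:  # absorbing state (safe / err)
            break
        r, acc = rng.random(), 0.0
        for succ, p in MDP[(s, STRATEGY[s])]:
            acc += p
            if r < acc:
                s = succ
                break
        visited.add(s)
    return visited

def importance(state, start="init", runs=10000, seed=0):
    """Monte Carlo estimate of Pr[visit `state` and reach TARGET]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        visited = simulate(start, rng)
        if state in visited and TARGET in visited:
            hits += 1
    return hits / runs

# Keep only decisions made in sufficiently important states; this small
# rule set stands in for the paper's learned decision tree.
important = {s: a for s, a in STRATEGY.items() if importance(s) > 0.1}
```

In this toy model the `detour` state never lies on a path to `err`, so its decision is pruned from the explanation, while the decisions in `init` and `mid` are kept.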