Policies Grow on Trees: Model Checking Families of MDPs
The result's identifiers
Result code in IS VaVaI
RIV/00216305:26230/25:PU154852 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26230%2F25%3APU154852)
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Policies Grow on Trees: Model Checking Families of MDPs
Original language description
Markov decision processes (MDPs) provide a fundamental model for sequential decision making under process uncertainty. A classical synthesis task is to compute for a given MDP a winning policy that achieves a desired specification. However, at design time, one typically needs to consider a family of MDPs modelling various system variations. For a given family, we study synthesising (1) the subset of MDPs where a winning policy exists and (2) a preferably small number of winning policies that together cover this subset. We introduce policy trees that concisely capture the synthesis result. The key ingredient for synthesising policy trees is a recursive application of a game-based abstraction. We combine this abstraction with an efficient refinement procedure and a post-processing step. An extensive empirical evaluation demonstrates the superior scalability of our approach compared to naive baselines. For one of the benchmarks, we find 246 winning policies covering 94 million MDPs.
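The description outlines a recursive scheme: apply a game-based abstraction to a (sub)family, keep it as a leaf when a single policy wins for every member or when no member admits a winning policy, and otherwise refine by splitting the family and recursing. The following Python sketch illustrates only that recursive structure; the names check_abstraction and split, the verdict strings, and the PolicyTree layout are hypothetical placeholders, not the paper's actual interface.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class PolicyTree:
        # Leaf: `policy` wins for every member of the subfamily
        # (or None if no member has a winning policy).
        # Inner node: the family was split; one subtree per subfamily.
        policy: Optional[object] = None
        children: Optional[list["PolicyTree"]] = None

    def synthesize_policy_tree(family,
                               check_abstraction: Callable,
                               split: Callable) -> PolicyTree:
        """Recursively build a policy tree for a family of MDPs (sketch)."""
        # Game-based abstraction of the whole subfamily; assumed to return
        # a candidate policy plus a verdict: 'all_sat', 'all_unsat', or
        # 'unknown' (inconclusive).
        policy, verdict = check_abstraction(family)
        if verdict == "all_sat":
            return PolicyTree(policy=policy)   # one policy covers the subfamily
        if verdict == "all_unsat":
            return PolicyTree(policy=None)     # no winning policy exists here
        # Inconclusive: refine by splitting the family and recursing.
        return PolicyTree(children=[
            synthesize_policy_tree(sub, check_abstraction, split)
            for sub in split(family)
        ])

Consistent with the description's mention of a post-processing step, one would additionally expect a pass over the finished tree (for instance, merging subtrees whose leaves share a policy) to keep the number of distinct winning policies small.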
Czech name
—
Czech description
—
Classification
Type
O - Miscellaneous
CEP classification
—
OECD FORD branch
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result continuities
Project
—
Continuities
S - Specific research at universities
Others
Publication year
2025
Confidentiality
S - Complete and accurate data on the project are not subject to protection under special legal regulations