Markov decision chains in discrete- and continuous-time; a unified approach
In this note we consider Markov decision chains with finite state space in discrete- and continuous-time settings under discounting and averaging optimality criteria. Connections between discounted and averag... (the standard form of this connection is sketched after this entry)
Field: AH - Economics · Year of application: 2010 · Result type: D - Conference proceedings paper
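As background for the truncated abstract above: in the standard finite-state, unichain setting the link between discounted and average optimality comes from the vanishing-discount (Laurent) expansion of the expected total \beta-discounted reward V_\beta. This is a generic textbook relation, not a statement of the paper's own results:

    V_\beta(i) = \frac{g}{1-\beta} + h(i) + o(1) \quad (\beta \uparrow 1), \qquad g = \lim_{\beta \uparrow 1} (1-\beta)\, V_\beta(i),

where g is the long-run average reward and h(\cdot) is the bias (deviation) term.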
Risk-Sensitive Average Optimality in Markov Decision Chains
We focus attention on the asymptotic behavior of the expected utility and the corresponding certainty equivalents in discrete-time Markov decision chains with finite state and action spaces and the risk... (the standard exponential-utility notions behind these terms are recalled after this entry)
Field: AH - Economics · Year of application: 2008 · Result type: D - Conference proceedings paper
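For orientation only (standard definitions, not quoted from the paper): with risk-sensitivity coefficient \gamma \neq 0 and cumulative reward \sum_{t=0}^{n-1} r_{X_t}, the exponential utility, its certainty equivalent and the risk-sensitive average reward are usually taken as

    U_\gamma(\xi) = \operatorname{sign}(\gamma)\, e^{\gamma \xi}, \qquad Z_\gamma(\xi) = \frac{1}{\gamma} \ln \mathbb{E}\, e^{\gamma \xi}, \qquad J(\gamma) = \lim_{n \to \infty} \frac{1}{n\gamma} \ln \mathbb{E} \exp\Big(\gamma \sum_{t=0}^{n-1} r_{X_t}\Big),

so that risk-sensitive average optimality means maximizing the growth rate J(\gamma) of the certainty equivalent.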
Second Order Optimality in Markov and Semi-Markov Decision Processes
Semi-Markov decision processes can be considered as an extension of discrete- and continuous-time Markov reward models. Unfortunately, traditional optimality criteria ... special classes of semi-Markov ...
Field: Statistics and probability · Year of application: 2019 · Result type: D - Conference proceedings paper
Risk-Sensitive and Risk-Neutral Optimality in Markov Decision Chains; a Unified Approach
In this note we consider Markov decision chains with finite state space and compact action spaces where the stream of rewards generated by the Markov process is evaluated by an exponential utility function (so-c...
Field: BB - Applied statistics, operations research · Year of application: 2012 · Result type: D - Conference proceedings paper
Monotonicity and comparison results for nonnegative dynamic systems.
Monotonicity and comparison results are known to be most useful for the qualitative analysis of Markov chain applications. In this paper, the extension of such results will be studied for the more general class of dynamic systems go...
Field: BB - Applied statistics, operations research · Year of application: 2001 · Result type: D - Conference proceedings paper
The Variance of Discounted Rewards in Markov Decision Processes: Laurent Expansion and Sensitive Optimality
In this paper we consider discounted Markov decision processes with finite state space and compact action spaces. We present formulas for the variance of the total discounted reward along with its partial Laurent expansion. T... (a standard moment recursion behind such formulas is sketched after this entry)
Field: BB - Applied statistics, operations research · Year of application: 2014 · Result type: D - Conference proceedings paper
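As a generic reminder (not the paper's Laurent-expansion result): for a fixed stationary policy with one-stage rewards r(i), transition probabilities p_{ij} and discount factor \beta \in (0,1), the first and second moments of the total discounted reward \xi_i starting in state i satisfy

    v(i) = \mathbb{E}[\xi_i] = r(i) + \beta \sum_j p_{ij}\, v(j), \qquad s(i) = \mathbb{E}[\xi_i^2] = r(i)^2 + 2\beta\, r(i) \sum_j p_{ij}\, v(j) + \beta^2 \sum_j p_{ij}\, s(j),

and the variance is then obtained as \operatorname{var}(\xi_i) = s(i) - v(i)^2.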
Mean variance optimality in Markov decision chains
In this note, we consider discrete-time Markov decision processes with finite state space. Recalling explicit formulas for the growth rates of the expected value and the variance of the cumulative (random) reward, algorithm... (the typical growth behaviour is recalled after this entry)
Field: BB - Applied statistics, operations research · Year of application: 2005 · Result type: D - Conference proceedings paper
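For context (a standard fact about finite ergodic chains, not quoted from the paper): under a fixed stationary policy the cumulative reward S_n = \sum_{t=0}^{n-1} r_{X_t} typically grows as

    \mathbb{E}[S_n] = n\, g + O(1), \qquad \operatorname{var}(S_n) = n\, \sigma^2 + O(1),

so mean-variance criteria effectively compare policies through the pair (g, \sigma^2) of per-step growth rates.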
On the Total Reward Variance for Continuous-Time Markov Reward Chains
As an extension of the discrete-time case, this note investigates the variance of the total cumulative reward for continuous-time Markov reward chains with finite state spaces. The results correspond to...
Field: BB - Applied statistics, operations research · Year of application: 2006 · Result type: Jx - Unclassified - Journal article (Jimp, Jsc and Jost)
Risk Sensitive and Mean Variance Optimality in Markov Decision Processes
In this note, we compare two approaches to handling risk-variability features arising in discrete-time Markov decision processes: models with an exponential utility function and mean-variance optimality models. Comput...
Field: AH - Economics · Year of application: 2008 · Result type: D - Conference proceedings paper
An Application of Graph theory in Markov chains reliability analysis
... company. The aim of the paper is to present the use of discrete-time Markov chains and the network-flow approach. Discrete Markov chains are a well-known method of stochastic ... in practic... (a minimal transition-matrix computation of this kind is sketched after this entry)
Field: BB - Applied statistics, operations research · Year of application: 2014 · Result type: Jx - Unclassified - Journal article (Jimp, Jsc and Jost) · Result available on the web
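To make the listing concrete, here is a minimal, self-contained Python sketch of the kind of discrete-time Markov chain computation such a reliability analysis rests on; the three-state model, its transition probabilities and the availability interpretation are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Hypothetical 3-state reliability model: 0 = operational, 1 = degraded, 2 = failed.
    # Transition probabilities per time step (illustrative values only).
    P = np.array([
        [0.95, 0.04, 0.01],   # operational -> operational / degraded / failed
        [0.10, 0.80, 0.10],   # degraded    -> repaired / degraded / failed
        [0.50, 0.00, 0.50],   # failed      -> repaired with probability 0.5
    ])

    # State distribution after n steps, starting in the operational state.
    pi0 = np.array([1.0, 0.0, 0.0])
    n = 100
    pi_n = pi0 @ np.linalg.matrix_power(P, n)

    # Long-run (stationary) distribution: normalized left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    stat = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    stat = stat / stat.sum()

    print("distribution after", n, "steps:", pi_n)
    print("stationary distribution:", stat)
    print("long-run availability (not failed):", stat[0] + stat[1])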