Total reward variance in discrete and continuous time Markov chains
The result's identifiers
Result code in IS VaVaI
RIV/67985556:_____/05:00411326 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985556%3A_____%2F05%3A00411326)
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Original language name
Total reward variance in discrete and continuous time Markov chains
Original language description
This note studies the variance of total cumulative rewards for Markov reward chains in both discrete and continuous time. It is shown that parallel results can be obtained for both cases. First, explicit formulae are presented for the variance within a finite time horizon. Next, the infinite time horizon is considered. Most notably, it is concluded that the variance has a linear growth rate. Explicit expressions, related to the standard average-reward case, are provided for computing this growth rate.
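As a rough illustration of the linear growth rate mentioned in the description (not a reproduction of the paper's explicit formulae), the following sketch simulates a hypothetical two-state discrete-time Markov reward chain and estimates the variance of the total cumulative reward for several horizons; the transition matrix P and reward vector r are invented for the example.

```python
import numpy as np

# Illustrative sketch only: Monte Carlo estimate of the variance of the total
# cumulative reward of a small discrete-time Markov reward chain, to observe
# its roughly linear growth in the horizon. P and r are hypothetical.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # hypothetical transition matrix
r = np.array([1.0, 3.0])     # hypothetical per-state rewards

def total_reward(horizon: int) -> float:
    """Simulate one trajectory and return the cumulative reward over `horizon` steps."""
    state, total = 0, 0.0
    for _ in range(horizon):
        total += r[state]
        state = rng.choice(2, p=P[state])
    return total

for n in (100, 200, 400, 800):
    samples = [total_reward(n) for _ in range(5000)]
    print(f"horizon {n:4d}: sample variance ~ {np.var(samples):.1f}")
```

Doubling the horizon should roughly double the sample variance, which is the linear growth behaviour the paper characterizes exactly via explicit expressions.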
Czech name
—
Czech description
—
Classification
Type
D - Article in proceedings
CEP classification
BB - Applied statistics, operational research
OECD FORD branch
—
Result continuities
Project
The result was created during the realization of more than one project (see the Projects tab for details).
Continuities
Z - Research plan (with a link to CEZ)
Others
Publication year
2005
Confidentiality
S - Complete and true project data are not subject to protection under special legal regulations
Data specific for result type
Article name in the collection
Operations Research Proceedings 2004
ISBN
978-3-540-27679-1
ISSN
—
e-ISSN
—
Number of pages
8
Pages from-to
—
Publisher name
Springer
Place of publication
Berlin
Event location
Tilburg
Event date
Sep 1, 2004
Type of event by nationality
WRD - Worldwide event
UT code for WoS article
—