Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes
Year: 2013 Volume: 7 Issue: 3 Pages: 146-161
Abstract: In this paper we consider unichain Markov decision processes with finite state space and compact action spaces where the stream of rewards generated by the Markov process is evaluated by an exponential utility function with a given risk sensitivity coefficient (so-called risk-sensitive models). If the risk sensitivity coefficient equals zero (the risk-neutral case) we arrive at a standard Markov decision process. We can then easily obtain necessary and sufficient mean reward optimality conditions, and variability can be evaluated by the mean variance of total expected rewards. For the risk-sensitive case we establish necessary and sufficient optimality conditions for the maximal (or minimal) growth rate of the expectation of the exponential utility function, along with the mean value of the corresponding certainty equivalent, that take into account not only the expected value of the total reward but also its higher moments.
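As a minimal illustration of the quantities the abstract refers to (not code from the paper): for the exponential utility with risk sensitivity coefficient γ, the certainty equivalent of a random reward X is CE(γ) = (1/γ) ln E[exp(γX)], which reduces to the ordinary mean E[X] as γ → 0 and, for small γ, is approximated by the mean-variance expression E[X] + (γ/2) Var[X]. The sample-based sketch below assumes an empirical list of realized total rewards; the function name is hypothetical.

```python
import math

def certainty_equivalent(rewards, gamma):
    """Empirical certainty equivalent under exponential utility.

    CE(gamma) = (1/gamma) * ln( mean of exp(gamma * x) over the sample );
    gamma == 0 (risk-neutral case) returns the plain sample mean.
    """
    n = len(rewards)
    if gamma == 0.0:
        return sum(rewards) / n
    return math.log(sum(math.exp(gamma * x) for x in rewards) / n) / gamma

# For small gamma, CE is close to mean + (gamma/2) * variance,
# which is the mean-variance connection the paper exploits.
```

For example, with rewards {0, 2} (mean 1, variance 1) and γ = 0.1, the certainty equivalent is about 1.05, matching the second-order approximation 1 + 0.1/2 · 1.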
JEL classification: C44, C61
Keywords: Discrete-time Markov decision chains, exponential utility functions, certainty equivalent, mean-variance optimality, connections between risk-sensitive and risk-neutral models