We use cookies to distinguish you from other users and to provide you with a better experience on our websites. Close this message to accept cookies or find out how to manage your cookie settings.
An abstract is not available for this content so a preview has been provided. Please use the Get access link above for information on how to access this content.
Get access to the full version of this content by using one of the access options below. (Log in options will check for institutional or personal access. Content may require purchase if you do not have access.)
Article purchase
Temporarily unavailable
References
[1]
[1]Boel, R. and Varaiya, P. (1977) Optimal control of jump processes. SIAM J. Control Optimization15, 92–119.Google Scholar
[2]
[2]Dubins, L. E. and Savage, L. J. (1965) How to Gamble if You Must: Inequalities for Stochastic Processes.McGraw-Hill, New York.Google Scholar
[3]
[3]Furuwaka, N. and Iwamoto, S. (1973) Markovian decision processes with recursive reward function. Bull. Math. Statist.15, 79–91.CrossRefGoogle Scholar
[4]
[4]Gavish, B. and Schweitzer, P. J. (1976) An optimality principle for Markovian decision processes. J. Math. Anal. Appl.54, 173–184.Google Scholar
[5]
[5]Hordijk, A. (1974) Dynamic Programming and Markov Potential Theory.Mathematical Centre Tracts No. 51, Amsterdam.Google Scholar
[6]
[6]Kreps, D. M. (1977) Decision problems with expected utility criteria, I: Upper and lower convergent utility. Maths Opns Res.2, 45–53.CrossRefGoogle Scholar
[7]
[7]Selten, R. (1975) Reexamination of the perfectness concept for equilibrium points in extensive games. Internat. J. Game Theory4, 25–55.CrossRefGoogle Scholar
[8]
[8]Striebel, C. (1975) Optimal Control Of Discrete Time Stochastic Systems.Lecture Notes in Economics and Mathematical Systems 110, Springer Verlag, Berlin.CrossRefGoogle Scholar