

Markov Decision Processes: Discrete Stochastic Dynamic Programming. Edition No. 1. Wiley Series in Probability and Statistics

  • ID: 2175818
  • Book
  • February 2005
  • 684 Pages
  • John Wiley and Sons Ltd
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential."
—Zentralblatt für Mathematik

". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes."
—Journal of the American Statistical Association

Note: Product cover images may vary from those shown

1. Introduction.

2. Model Formulation.

3. Examples.

4. Finite-Horizon Markov Decision Processes.

5. Infinite-Horizon Models: Foundations.

6. Discounted Markov Decision Problems.

7. The Expected Total-Reward Criterion.

8. Average Reward and Related Criteria.

9. The Average Reward Criterion: Multichain and Communicating Models.

10. Sensitive Discount Optimality.

11. Continuous-Time Models.



Appendix A. Markov Chains.

Appendix B. Semicontinuous Functions.

Appendix C. Normed Linear Spaces.

Appendix D. Linear Programming.
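As a flavor of the material covered in the chapters on discounted Markov decision problems, the sketch below runs value iteration on a toy two-state, two-action MDP. The states, actions, transition probabilities, and rewards are invented for illustration and do not come from the book; the algorithm shown is the standard Bellman-update iteration for the discounted criterion.

```python
# Value iteration on a toy discounted MDP (illustrative numbers only).
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {0: [(1.0, 0, 0.0)],                   # "stay": no reward
        1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},   # "move": usually reach state 1
    1: {0: [(1.0, 1, 1.0)],                   # "stay": small steady reward
        1: [(1.0, 0, 10.0)]},                 # "cash out": big reward, reset
}
gamma = 0.9                      # discount factor
V = {s: 0.0 for s in P}          # initial value function

for _ in range(2000):
    # Bellman update: V(s) = max_a E[r + gamma * V(s')]
    V_new = {
        s: max(
            sum(p * (r + gamma * V[sp]) for p, sp, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }
    delta = max(abs(V_new[s] - V[s]) for s in P)
    V = V_new
    if delta < 1e-9:             # stop once the sup-norm change is tiny
        break

# Greedy policy with respect to the converged value function.
policy = {
    s: max(P[s], key=lambda a, s=s: sum(p * (r + gamma * V[sp])
                                        for p, sp, r in P[s][a]))
    for s in P
}
```

For discount factors below one, this iteration is a contraction in the sup-norm, so it converges to the unique optimal value function; here the greedy policy chooses "move" in state 0 and "cash out" in state 1.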



Martin L. Puterman, University of British Columbia.