Deterministic Dynamic Programming in Discrete Time: A Monotone Convergence Principle


We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function is always a fixed point of a modified version of the Bellman operator. We also show that value iteration converges monotonically to the value function whenever the initial function (i) is dominated by the value function, (ii) is mapped upward by the modified Bellman operator, and (iii) satisfies a transversality-like condition. These results require no assumptions beyond the general framework of infinite-horizon deterministic dynamic programming.
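To illustrate the kind of monotone convergence described above, the following is a minimal sketch of value iteration on a toy deterministic dynamic program with the standard (not the paper's modified) Bellman operator. The state space, transitions, rewards, and discount factor are all hypothetical; with nonnegative rewards, the zero function is dominated by the value function and is mapped upward, so the iterates increase toward the fixed point.

```python
# Toy deterministic DP: finite states, each action deterministically
# selects a successor state. Illustrative only; the numbers below are
# made up and the operator is the standard Bellman operator.

BETA = 0.9  # discount factor

# transitions[s][a] = next state reached from state s under action a
# rewards[s][a]     = one-period reward (nonnegative, hypothetical)
transitions = {0: [0, 1], 1: [0, 2], 2: [2, 1]}
rewards = {0: [0.0, 1.0], 1: [0.5, 2.0], 2: [1.0, 0.0]}


def bellman(v):
    """Standard Bellman operator: (Tv)(s) = max_a [ r(s,a) + beta * v(f(s,a)) ]."""
    return [
        max(rewards[s][a] + BETA * v[transitions[s][a]]
            for a in range(len(transitions[s])))
        for s in range(len(v))
    ]


def value_iteration(v0, tol=1e-10, max_iter=10_000):
    """Iterate the Bellman operator from v0 until successive iterates agree."""
    v = list(v0)
    for _ in range(max_iter):
        w = bellman(v)
        if max(abs(a - b) for a, b in zip(v, w)) < tol:
            return w
        v = w
    return v


# Starting from v0 = 0 (dominated by the value function when rewards are
# nonnegative), each application of the operator moves the iterate upward.
v0 = [0.0, 0.0, 0.0]
v1 = bellman(v0)
assert all(a <= b for a, b in zip(v0, v1))  # one monotone step upward
v_star = value_iteration(v0)
```

The monotonicity check above mirrors condition (ii) in the abstract for this toy case; the paper's contribution is that such convergence holds under the modified operator without further assumptions.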


Keywords: Dynamic programming, Bellman operator, Fixed point, Value iteration


Research Institute for Economics and Business Administration,
Kobe University
Rokkodai-cho, Nada-ku, Kobe
657-8501 Japan
Phone: +81-78-803-7036
FAX: +81-78-803-7059

Masayuki YAO
Graduate School of Economics, Keio University