Abstract

This report presents the optimal control approach to dynamic optimization. The presentation begins with a simple two-period problem at a level of analysis that should be familiar to anyone who has taken an intermediate-level course in price theory. The form of this problem is then changed slightly to lead smoothly into the development of the Maximum Principle of optimal control for many discrete time periods. This discrete-time example is presented in a way that clearly conveys the meaning of the Maximum Principle in continuous time. An example closely related to those given for optimization in discrete time is used to introduce the fundamentals of optimal control in continuous time and the use of phase diagrams to describe important characteristics of the solution. Appendixes provide a unified treatment of constrained optimization and nonlinear programming, and generalize the statement of the optimal control problem beyond the particular examples presented in the body of the report.
