Static and dynamic optimization; constrained and unconstrained optimal control. Method of Lagrange Multipliers: minimization of a function subject to algebraic constraints. Calculus of Variations: minimization of a functional subject to differential, integral, and terminal constraints. Pontryagin’s Maximum Principle: optimization under control constraints, locally optimal feed-forward controllers, bang-bang control, time-optimal control, and singular optimal control. Bellman’s Dynamic Programming: the Hamilton-Jacobi-Bellman equation and globally optimal feedback controllers. Linear Quadratic Regulators: the Riccati equation for time-varying and time-invariant systems. Numerical methods for solving nonlinear optimal control problems.
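As a minimal sketch of the Method of Lagrange Multipliers (the problem, cost, and constraint here are illustrative choices, not taken from the course): minimizing f(x, y) = x² + y² subject to x + y = 1 reduces, via stationarity of the Lagrangian, to a small linear system.

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# Stationarity of the Lagrangian L = f - lam * g gives:
#   2x - lam = 0,   2y - lam = 0,   x + y = 1,
# a linear KKT system that can be solved directly.
KKT = np.array([[2.0, 0.0, -1.0],
                [0.0, 2.0, -1.0],
                [1.0, 1.0,  0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(KKT, rhs)
print(x, y, lam)  # x = y = 0.5, lam = 1.0
```

For nonlinear costs or constraints the same stationarity conditions become a nonlinear root-finding problem, which is where the numerical methods covered later in the course come in.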
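A small illustration of the Linear Quadratic Regulator and the Riccati equation (the double-integrator plant, time step, and weights below are assumptions for the example, not course material): iterating the discrete-time Riccati recursion to a fixed point yields the optimal feedback gain, and the resulting closed loop drives the state to the origin.

```python
import numpy as np

# Infinite-horizon discrete-time LQR for a double integrator,
#   x_{k+1} = A x_k + B u_k,  cost = sum_k (x_k' Q x_k + u_k' R u_k).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[1.0]])

# Iterate the discrete Riccati equation to its fixed point P,
# forming the gain K = (R + B'PB)^{-1} B'PA at each step.
P = np.copy(Q)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop simulation under u = -K x: the state decays toward zero.
x = np.array([[1.0], [0.0]])
for _ in range(200):
    x = (A - B @ K) @ x
print(np.linalg.norm(x))  # small residual near the origin
```

For a time-varying horizon the same recursion is run backward from the terminal cost instead of to a fixed point, which is the finite-horizon form of the Riccati equation mentioned in the syllabus.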

12 Credits

Instructor

David Braun

Components

Final exam, Mid-term, Projects, Assignments, Participation