Static and dynamic optimization. Constrained and unconstrained optimal control. Method of Lagrange Multipliers; minimization of a function subject to algebraic constraints. Calculus of Variations; minimization of a functional subject to differential, integral and terminal constraints. Pontryagin’s Maximum Principle; optimization under control constraints, locally optimal feed-forward controllers, bang-bang control, time-optimal control and singular optimal control. Bellman’s Dynamic Programming; Hamilton-Jacobi-Bellman equation and globally optimal feedback controllers. Linear Quadratic Regulators; Riccati equation for time-varying and time-invariant systems. Numerical methods for solving non-linear optimal control problems.
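As a small illustration of the Linear Quadratic Regulator topic above, the following sketch solves a finite-horizon, discrete-time LQR problem by the backward Riccati recursion. This is not course material; the double-integrator system, horizon, and weight matrices are made-up examples.

```python
# Sketch: finite-horizon discrete-time LQR via the backward Riccati recursion.
# The system (A, B), weights (Q, R, QN), and horizon N are illustrative only.
import numpy as np

def lqr_riccati(A, B, Q, R, QN, N):
    """Backward Riccati recursion; returns feedback gains K_0..K_{N-1}
    and the cost-to-go matrix P at time 0."""
    P = QN
    gains = []
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_k = Q + A' P A - A' P B K_k
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1], P

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); QN = 10 * np.eye(2)
gains, P0 = lqr_riccati(A, B, Q, R, QN, N=50)

x = np.array([[1.0], [0.0]])
for K in gains:                           # closed loop u_k = -K_k x_k
    x = (A - B @ K) @ x                   # state is driven toward the origin
```

The time-varying gains converge toward the constant gain of the infinite-horizon (algebraic) Riccati equation as the horizon grows, which is the connection between the time-varying and time-invariant cases mentioned above.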
By the end of the course, students will be able to:
- Recognize the analogy between differentiation and variation.
- Recognize the analogy between the minimization of functions and the minimization of functionals.
- Formulate unconstrained and constrained optimization problems.
- Derive first-order optimality conditions for unconstrained and constrained, static and dynamic optimization problems.
- Demonstrate understanding of Pontryagin’s Maximum Principle and Bellman’s Dynamic Programming; find locally optimal feed-forward and globally optimal feedback controllers.
- Understand the working principles of the numerical methods introduced for solving non-linear optimal control problems and apply them to practical engineering problems.
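One standard numerical method of the kind referred to in the last outcome is direct single shooting: parametrize the control sequence, simulate the dynamics forward, and hand the resulting cost to a generic optimizer. The sketch below applies it to a toy scalar nonlinear system; the dynamics, horizon, and weights are made-up examples, not the course's problems.

```python
# Sketch: direct single shooting for a small non-linear optimal control
# problem, solved with a generic optimizer (scipy.optimize.minimize).
# Dynamics x_{k+1} = x_k + dt*(-x_k^3 + u_k) and all weights are illustrative.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 20

def rollout(u, x0=1.0):
    """Simulate the dynamics under controls u and accumulate the cost."""
    x, cost = x0, 0.0
    for uk in u:
        cost += dt * (x**2 + uk**2)       # running cost on state and control
        x = x + dt * (-x**3 + uk)         # forward Euler step
    return cost + 10.0 * x**2, x          # terminal penalty on the final state

# Optimize the N control inputs, starting from zero control.
res = minimize(lambda u: rollout(u)[0], np.zeros(N))
cost_opt, x_final = rollout(res.x)
```

Single shooting is the simplest of these transcription methods; multiple shooting and collocation distribute the simulation across the horizon for better conditioning, at the price of extra decision variables and constraints.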
Assessment: final exam, mid-term exam, projects, assignments, and participation.