Optimal control
Imagine we’re trying to minimize $J = \phi(y(t_F), t_F)$
subject to both the state equations, $\dot{y} = f(y(t), u(t))$ (note these are continuous, applying at all times $t$),
and the boundary conditions, $\psi(y(t_F), u(t_F), t_F) = 0$ (these are discrete, applying only at the fixed time $t_F$). We have to put the continuous constraint in as an integral:
$$\hat{J} = \big[\phi + \nu^T \psi\big]_{t_F} - \int_{t_I}^{t_F} \lambda^T \big(\dot{y} - f(y(t), u(t))\big)\,dt$$
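Stationarity of $\hat{J}$ is what generates the familiar necessary conditions. A sketch (assuming $t_F$ fixed and $y(t_I)$ given): integrate the $\lambda^T \dot{y}$ term by parts and define the Hamiltonian $H \equiv \lambda^T f$, so that

$$\delta \hat{J} = \Big[\partial_y(\phi + \nu^T \psi) - \lambda^T\Big]_{t_F}\,\delta y(t_F) + \int_{t_I}^{t_F} \Big[\big(\dot{\lambda}^T + \partial_y H\big)\,\delta y + \partial_u H\,\delta u\Big]\,dt.$$

Requiring $\delta \hat{J} = 0$ for arbitrary variations gives the costate equation $\dot{\lambda}^T = -\partial_y H$, the optimality condition $\partial_u H = 0$, and the transversality condition $\lambda^T(t_F) = \partial_y(\phi + \nu^T \psi)\big|_{t_F}$.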
Compare this to the Lagrangian for minimizing $F(x)$ under the constraint $c(x) = 0$:
$$F(x) - \lambda^T c(x) = F(x) - \sum_{i=1}^{m} \lambda_i c_i(x)$$
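As a sanity check on the finite-dimensional analogy, here is a minimal numerical sketch (the quadratic objective, linear constraint, and SciPy solver choice are illustrative assumptions, not from the notes) that solves a constrained minimization and recovers the multiplier $\lambda$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy finite-dimensional analogue (illustrative): minimize F(x) = x1^2 + x2^2
# subject to the single equality constraint c(x) = x1 + x2 - 1 = 0.
F = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: x[0] + x[1] - 1.0

res = minimize(F, x0=[0.0, 0.0], method="SLSQP",
               constraints={"type": "eq", "fun": c})
x = res.x  # analytic optimum: (1/2, 1/2)

# At the optimum grad F = lambda * grad c; here grad F = 2x and grad c = (1, 1),
# so projecting grad F onto grad c recovers the multiplier (analytically, lambda = 1).
grad_F, grad_c = 2 * x, np.array([1.0, 1.0])
lam = grad_F @ grad_c / (grad_c @ grad_c)
print(x, lam)
```

The recovered $\lambda$ measures how much the optimal $F$ would change per unit relaxation of the constraint, which is exactly the role the costate $\lambda(t)$ plays in the continuous problem.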
We can add the discrete boundary conditions in with a classic Lagrange multiplier, the $\nu^T \psi$ term above.
There are differences between Pontryagin’s Hamiltonian and the Hamiltonian of mechanics. Why didn’t anyone tell me about this paper!
Sample problem
From page 353 (10.6.10) of Judd.
Maximize the objective function:
$$\max_c \int_0^T e^{-\rho t} u(c)\,dt$$
with $w(t)$ the wage rate at time $t$, $A(t)$ assets at time $t$, and $f(A)$ the return on invested assets, so that assets accumulate according to $\dot{A} = f(A) + w(t) - c$.
From the statement of the problem we have the Hamiltonian $H = u(c) + \lambda\big(f(A) + w(t) - c\big)$, the costate equation $\dot{\lambda} = \lambda\big(\rho - f'(A)\big)$, and boundary conditions.
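One way to see where that costate equation comes from, assuming the standard current-value (discounted) formulation: the current-value multiplier satisfies $\dot{\lambda} = \rho\lambda - \partial_A H$, and here $\partial_A H = \lambda f'(A)$, so

$$\dot{\lambda} = \rho\lambda - \lambda f'(A) = \lambda\big(\rho - f'(A)\big).$$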
Assuming $u$ is concave in $c$, the maximum principle gives us the first-order condition $0 = \partial_c H = u'(c) - \lambda$. Concavity makes $u'$ invertible, so $c = C(\lambda) \equiv (u')^{-1}(\lambda)$, and we’re left with a nice ODE system:
$$\dot{A} = f(A) + w(t) - C(\lambda), \qquad \dot{\lambda} = \lambda\big(\rho - f'(A)\big)$$
Let’s assume functional forms: for the utility, $u(c) = c^{1+\gamma}/(1+\gamma)$ with $\gamma < 0$ (so $u$ is concave), and a linear return on assets, $f(A) = rA$. Then our system becomes
$$\dot{A} = rA + w(t) - c(t)$$
From the maximum principle, the first-order condition $0 = \partial_c H = c^\gamma - \lambda$ gives $c = \lambda^{1/\gamma}$; the costate equation becomes $\dot{\lambda} = \lambda(\rho - r)$, and our second equation is still $\dot{A} = rA + w(t) - c(t)$.
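Because the costate equation decouples from $A$ once $f(A) = rA$, the consumption path comes out in closed form; a quick derivation under the assumed functional forms:

$$\dot{\lambda} = \lambda(\rho - r) \;\Rightarrow\; \lambda(t) = \lambda(0)\,e^{(\rho - r)t}, \qquad c(t) = \lambda(t)^{1/\gamma} = c(0)\,e^{(\rho - r)t/\gamma},$$

so with $\gamma < 0$, consumption grows over time whenever $r > \rho$. The one remaining unknown, $\lambda(0)$, is pinned down by the boundary conditions on $A$.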
So we have a nice ODE BVP that we’re ready to solve.
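A minimal numerical sketch of that BVP using `scipy.integrate.solve_bvp`. The parameter values, the wage path, and the zero start/end asset conditions are all illustrative assumptions, not taken from the notes:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Assumed parameters (illustrative): discount rate, asset return, CRRA exponent, horizon.
rho, r, gamma, T = 0.05, 0.10, -2.0, 10.0

def w(t):
    """Hypothetical hump-shaped wage path over the lifetime [0, T]."""
    return 0.5 + 0.5 * np.sin(np.pi * t / T)

def rhs(t, y):
    A, lam = y
    c = lam ** (1.0 / gamma)   # consumption from the FOC: u'(c) = c^gamma = lambda
    dA = r * A + w(t) - c      # state:   A' = r A + w(t) - c
    dlam = lam * (rho - r)     # costate: lambda' = lambda (rho - r)
    return np.vstack([dA, dlam])

def bc(ya, yb):
    # Assumed life-cycle boundary conditions: start and end with zero assets.
    return np.array([ya[0], yb[0]])

t = np.linspace(0.0, T, 50)
guess = np.vstack([np.zeros_like(t), np.ones_like(t)])  # A ≡ 0, lambda ≡ 1
sol = solve_bvp(rhs, bc, t, guess)

c_path = sol.y[1] ** (1.0 / gamma)  # recover consumption along the solution
print(sol.success, c_path[0], c_path[-1])
```

`solve_bvp` iterates a collocation scheme, so the constant initial guess for $\lambda$ only needs to be in the right ballpark; with $r > \rho$ and $\gamma < 0$ the solved consumption path should rise over the lifetime, matching the closed-form costate solution.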