OPTIMAL CONTROL OF LINEAR SYSTEMS WITH TIME LAG.
ILLINOIS UNIV URBANA COORDINATED SCIENCE LAB
New results are given which permit a numerical solution of the optimal regulator problem for systems governed by linear differential-difference equations in which the optimization interval is finite. An iterative algorithm which assures convergence to the optimum is derived from the necessary and sufficient conditions for optimality. This algorithm requires neither the choice of an initial control nor the choice of a convergence parameter. The conditions for optimality are derived in two forms: an integral equation and a coupled set of differential-difference equations. Numerical examples are presented. (Author)
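To illustrate the problem class the abstract describes, the sketch below sets up a finite-horizon quadratic regulator for a scalar linear differential-difference equation x'(t) = a x(t) + a1 x(t - tau) + b u(t) and improves the control iteratively. All parameter values are assumed for illustration, and the plain finite-difference gradient descent used here is a generic stand-in, not the report's algorithm (which, per the abstract, needs neither an initial control guess nor a convergence parameter).

```python
import numpy as np

# Illustrative scalar time-lag regulator problem (all values assumed):
#   x'(t) = a*x(t) + a1*x(t - tau) + b*u(t),  x(t) = 1 for t <= 0,
# with quadratic cost J = integral of (q*x^2 + r*u^2) over [0, T].
a, a1, b = -1.0, 0.5, 1.0       # system and delay coefficients
q, r = 1.0, 0.1                 # state and control cost weights
tau, T, dt = 0.5, 3.0, 0.05     # delay, horizon, Euler step size
n = int(T / dt)                 # number of time steps
d = int(tau / dt)               # delay expressed in steps

def simulate_cost(u):
    """Forward-Euler simulation of the delay system; returns the cost J(u)."""
    x = np.ones(n + 1)          # x[k] = x(k*dt); constant history x(t) = 1
    J = 0.0
    for k in range(n):
        x_delayed = 1.0 if k < d else x[k - d]
        x[k + 1] = x[k] + dt * (a * x[k] + a1 * x_delayed + b * u[k])
        J += dt * (q * x[k] ** 2 + r * u[k] ** 2)
    return J

# Generic gradient descent with finite-difference gradients; unlike the
# report's method, this requires an initial control and a step size.
u = np.zeros(n)
step, eps = 0.5, 1e-6
costs = [simulate_cost(u)]
for _ in range(20):
    base = simulate_cost(u)
    grad = np.zeros(n)
    for k in range(n):
        up = u.copy()
        up[k] += eps
        grad[k] = (simulate_cost(up) - base) / eps
    u -= step * grad
    costs.append(simulate_cost(u))

print(f"initial cost {costs[0]:.4f}, cost after 20 iterations {costs[-1]:.4f}")
```

Each iteration strictly decreases the cost for a sufficiently small step, since J is a convex quadratic in the control sequence; the report's contribution is an iterative scheme that reaches the optimum without having to tune such a step.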