Determination of Fire Control Policies via Approximate Dynamic Programming
Technical Report, 01 Sep 2014 - 24 Mar 2016
AIR FORCE INSTITUTE OF TECHNOLOGY, WRIGHT-PATTERSON AFB, OH, United States
Given the ubiquitous nature of offensive and defensive missile systems, the catastrophe-causing potential they represent, and the limited resources available to countries for missile defense, optimizing the response to a missile attack is a necessary endeavor. For a single salvo of offensive missiles launched at a set of targets, a missile defense system must decide how many interceptors to fire at each missile. Since such missile engagements often involve the firing of more than one attack salvo, we develop a Markov decision process (MDP) model to examine the optimal fire control policy for the defender. Because exact methods are computationally intractable for all but the smallest instances, we utilize an approximate dynamic programming (ADP) approach to explore the efficacy of applying approximate methods. We obtain policy insights by analyzing subsets of the state space that reflect a range of defender interceptor inventories. Testing of four scenarios demonstrates that the ADP policy provides high-quality decisions for a majority of the state space, achieving a 7.74% mean optimality gap. Moreover, the ADP algorithm requires only a few minutes of computational effort versus 12 hours for the exact DP algorithm, providing a method to address more complex and realistically sized instances.
- Operations Research
- Antimissile Defense Systems
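To illustrate the kind of model the abstract describes, the sketch below solves a deliberately simplified interceptor-allocation MDP by exact backward induction: one incoming missile per salvo, a state consisting of the current salvo and remaining interceptor inventory, and an action choosing how many interceptors to fire. All parameters (kill probability, leak cost, salvo count, inventory) are illustrative assumptions, not values from the report, and this is the small-instance exact DP that the report's ADP approach is meant to approximate at realistic scale.

```python
from functools import lru_cache

# Illustrative parameters -- not taken from the report.
P_KILL = 0.7      # assumed probability one interceptor kills a missile
LEAK_COST = 1.0   # assumed cost incurred when a missile leaks through
SALVOS = 3        # assumed number of sequential attack salvos
INVENTORY = 6     # assumed defender interceptor inventory

@lru_cache(maxsize=None)
def value(salvo: int, inventory: int) -> float:
    """Minimum expected leak cost from this salvo onward (exact DP)."""
    if salvo == SALVOS:          # engagement over: no further cost
        return 0.0
    best = float("inf")
    for shots in range(inventory + 1):
        # Missile survives only if every interceptor fired at it misses.
        p_leak = (1.0 - P_KILL) ** shots
        cost = p_leak * LEAK_COST + value(salvo + 1, inventory - shots)
        best = min(best, cost)
    return best

def policy(salvo: int, inventory: int) -> int:
    """Number of interceptors an optimal defender fires this salvo."""
    return min(
        range(inventory + 1),
        key=lambda s: (1.0 - P_KILL) ** s * LEAK_COST
        + value(salvo + 1, inventory - s),
    )
```

The defender's dilemma the abstract highlights is visible even here: firing heavily at early salvos lowers immediate leak risk but depletes inventory for later ones, which is why the exact state space (salvos x inventories x targets) grows too quickly for full backward induction in realistic instances.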