Download English ISI Article No. 111787
Translated Article Title

Approximate dynamic programming for missile defense interceptor fire control

English Title
Approximate dynamic programming for missile defense interceptor fire control
Article Code: 111787
Publication Year: 2017
Length: 14 pages (English PDF)
Source

Publisher : Elsevier - Science Direct

Journal : European Journal of Operational Research, Volume 259, Issue 3, 16 June 2017, Pages 873-886

Translated Keywords
Approximate dynamic programming; least squares temporal differences; Markov decision processes; military applications; weapon target assignment problem;
English Keywords
Approximate dynamic programming; Least squares temporal differences; Markov decision processes; Military applications; Weapon target assignment problem;
Article Preview
Article preview: Approximate dynamic programming for missile defense interceptor fire control

English Abstract

Given the ubiquitous nature of both offensive and defensive missile systems, the catastrophe-causing potential they represent, and the limited resources available to countries for missile defense, optimizing the defensive response to a missile attack is a necessary national security endeavor. For a single salvo of offensive missiles launched at a set of targets, a missile defense system protecting those targets must determine how many interceptors to fire at each incoming missile. Since such missile engagements often involve the firing of more than one attack salvo, we develop a Markov decision process (MDP) model to examine the optimal fire control policy for the defender. Due to the computational intractability of using exact methods for all but the smallest problem instances, we utilize an approximate dynamic programming (ADP) approach to explore the efficacy of applying approximate methods to the problem. We obtain policy insights by analyzing subsets of the state space that reflect a range of possible defender interceptor inventories. Testing of four instances derived from a representative planning scenario demonstrates that the ADP policy provides high-quality decisions for a majority of the state space, achieving a 7.74% mean optimality gap over all states for the most realistic instance, modeling a longer-term engagement by an attacker who assesses the success of each salvo before launching a subsequent one. Moreover, the ADP algorithm requires only a few minutes of computational effort versus hours for the exact dynamic programming algorithm, providing a method to address more complex and realistically-sized instances.
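To make the abstract's setup concrete, the sketch below solves a deliberately simplified version of the defender's problem by exact backward induction. This is not the paper's model: the kill probability P_KILL, the fixed salvo size MISSILES, the assumption that every missile in a salvo gets the same number of interceptors, and the objective (minimize expected leakers) are all illustrative assumptions introduced here.

```python
from functools import lru_cache

P_KILL = 0.7   # assumed single-interceptor kill probability (illustrative)
MISSILES = 3   # assumed number of missiles per attack salvo (illustrative)

@lru_cache(maxsize=None)
def value(salvos_left, inventory):
    """Return (min expected leakers over remaining salvos, best action),
    where the action is the number of interceptors fired at each missile
    of the current salvo. Solved by backward induction on salvos_left."""
    if salvos_left == 0:
        return 0.0, 0
    best_v, best_a = float("inf"), 0
    for a in range(inventory // MISSILES + 1):
        # A missile survives a independent interceptors with prob (1-p)^a.
        leak = MISSILES * (1 - P_KILL) ** a
        v = leak + value(salvos_left - 1, inventory - a * MISSILES)[0]
        if v < best_v:
            best_v, best_a = v, a
    return best_v, best_a

# With 6 interceptors against 2 salvos of 3 missiles, the best policy
# spreads fire: one interceptor per missile in each salvo.
expected_leakers, first_action = value(2, 6)
```

Even this toy state space (salvos remaining × inventory) shows why exact dynamic programming becomes intractable: a realistic model would track per-missile engagement outcomes and attacker adaptation, which is what motivates the ADP approximation in the paper.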