Sensor networks (SN) and, more generally, unmanned system networks (UMSN) are currently among the strategic research areas in disciplines such as communications, control, and mechanics. These networks can potentially consist of a large number of agents, such as unmanned aerial vehicles (UAV), unmanned ground vehicles (UGV), unmanned underwater vehicles (UUV), and satellites. Wireless UMSN provide significant capabilities, and numerous applications in various fields of research are being considered and developed. Some of these applications are in home and building automation, intelligent transportation systems, health monitoring and assistance, space exploration, and commercial applications (Sinopoli, Sharp, Schenato, Schaffert, & Sastry, 2003). There are also military applications in intelligence, surveillance, and reconnaissance (ISR) missions conducted in the presence of environmental disturbances and vehicle failures, and on battlefields subject to unanticipated uncertainties and adversarial actions (Bošković, Li, & Mehra, 2002).
One of the prerequisites for networked agents deployed in such challenging missions is team cooperation and coordination to accomplish predefined goals and requirements. Cooperation in a network of unmanned systems, known in different contexts as formation, network agreement, flocking, consensus, or swarming, has received extensive attention over the past several years. Several approaches to this problem have been investigated within different frameworks and under different architectures (Arcak, 2006, Gazi, 2002, Lee and Spong, 2006, Olfati-Saber and Murray, 2002, Olfati-Saber and Murray, 2003b, Olfati-Saber and Murray, 2003c, Olfati-Saber and Murray, 2004, Paley et al., 2004, Ren, 2007, Ren and Beard, 2004, Semsar and Khorasani, 2006, Stipanović et al., 2004 and Xiao and Wang, 2007).
An optimal approach to the team cooperation problem is considered in Raffard, Tomlin, and Boyd (2004) and Inalhan, Stipanović, and Tomlin (2002) for formation keeping and in Bauso, Giarre, and Pesenti (2006) and Semsar-Kazerooni and Khorasani, 2007a, Semsar-Kazerooni and Khorasani, 2007b, Semsar-Kazerooni and Khorasani, 2007c, Semsar-Kazerooni and Khorasani, 2008 and Semsar-Kazerooni and Khorasani, 2009 for consensus seeking. The approach in Inalhan et al. (2002) is based on optimizing an individual cost for each agent in order to achieve the team goals, under the assumption that the states of the other team members are constant. The concept of Nash equilibrium is used for the design of optimal controllers. To solve an optimal consensus problem, the authors in Bauso et al. (2006) have assumed an individual cost for each team member; in evaluating the minimum value of each individual cost, the states of the other agents are again assumed to be constant. The work in Semsar-Kazerooni and Khorasani, 2007a, Semsar-Kazerooni and Khorasani, 2007b, Semsar-Kazerooni and Khorasani, 2007c, Semsar-Kazerooni and Khorasani, 2008 and Semsar-Kazerooni and Khorasani, 2009 removes these restrictive assumptions by decomposing the control input of each team member into local and global components, where the global component (referred to as the interaction term) is designed such that each individual agent's cost function is minimized in a distributed manner. In all of the above-referenced work, the optimal control problem is formulated in terms of individual costs for the team members. However, to the best of the authors' knowledge, a single team cost function formulation has been proposed in only a few works (Fax, 2002, Raffard et al., 2004 and Semsar and Khorasani, 2007). In Fax (2002), an optimal control strategy is applied to formation keeping and a single team cost function is utilized. The authors in Raffard et al. (2004) adopt a distributed optimization technique for formation control in a leader–follower structure; the design is based on a dual decomposition of the local and global constraints. However, in that approach, the velocity and position commands are assumed to be available to the entire team. In Semsar and Khorasani (2007), a centralized solution is obtained by using a game theoretic approach.
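To make this individual-cost setting concrete, a minimal sketch is given below; the notation (the local and global input components, a quadratic disagreement cost J_i with weights Q_i and R_i, and the neighbor set N_i) is assumed here purely for illustration and is not reproduced verbatim from the cited works:

\[
  u_i \;=\; u_i^{\ell} + u_i^{g}, \qquad
  J_i \;=\; \int_0^{\infty} \Big[ \sum_{j \in \mathcal{N}_i} (x_i - x_j)^{\mathsf T} Q_i \, (x_i - x_j) \;+\; u_i^{\mathsf T} R_i \, u_i \Big] \, \mathrm{d}t,
  \qquad Q_i \succeq 0, \; R_i \succ 0 .
\]

Here the first term of the control is the local component and the second is the interaction term designed from the information received from the neighbors in the set N_i; each agent minimizes only its own cost J_i rather than a cost describing the performance of the entire team.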
It is worth noting that very few works in this domain use a design-based approach. In fact, much of the earlier work in the literature has focused on analysis only (Jadbabaie et al., 2003, Olfati-Saber, 2006, Olfati-Saber and Murray, 2004, Ren and Beard, 2005 and Tanner et al., 2007). The main contribution of this paper is a novel design-based approach for formally designing a controller (the consensus algorithm) that achieves output consensus on a common value by using a single team cost function within a game theoretic framework. The advantage of minimizing a cost function that describes the total performance of the team is that it provides better insight into the performance of the entire team than individual agent performance indices do. However, the main potential disadvantage of this formulation is clearly the requirement that a full information set be available for control design. In the present work this problem is alleviated, and the imposed information structure of the team is taken into account, by using a linear matrix inequality (LMI) formulation. For this purpose, a decentralized optimal control strategy that was initially introduced in Semsar-Kazerooni and Khorasani, 2007b and Semsar-Kazerooni and Khorasani, 2008 is used to design controllers based on the minimization of individual costs. Since in this approach the solution is obtained through the minimization of local cost functions, at most person-by-person optimality is achieved. However, if a cost function describing the total team performance is minimized, a lower team cost as well as lower individual costs may be achieved. Subsequently, the idea of cooperative game theory is used to minimize a team cost function that is a linear combination of the cost functions used in the optimal approach. This guarantees that the individual cost functions have the minimum possible values for the given team mission. To obtain a solution that respects the given information structure, as well as to guarantee that consensus is achieved, a set of LMIs is used to constrain the controller that is designed for the entire team.
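As a brief illustration of this construction, with notation assumed here only for exposition (the weights and the particular form below are not taken from the paper's later sections), the cooperative game theoretic step amounts to minimizing a single team cost of the form

\[
  J_{\mathrm{team}}(u_1,\dots,u_N) \;=\; \sum_{i=1}^{N} \alpha_i \, J_i(u_1,\dots,u_N),
  \qquad \alpha_i > 0,
\]

subject to a set of LMI constraints that encode the information structure of the team, i.e., which agents' measurements each controller is allowed to use. Minimizing such a positive linear combination of the individual costs, rather than each J_i separately, yields Pareto-type solutions in which no individual cost can be decreased without increasing another, which is the sense in which the individual costs attain their minimum possible values for the given team mission.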
The organization of the paper is as follows. In Section 2, background information is presented. In Section 3, cooperative game theory is introduced. Application of the game theory to the multi-agent team problem, the design of a semi-decentralized optimal control, and solutions to the corresponding min–max problem are presented in Section 4. Finally, simulation results are presented and conclusions are stated in Sections 5 and 6, respectively.