TY - JOUR
T1 - A Novel Optimal Bipartite Consensus Control Scheme for Unknown Multi-Agent Systems via Model-Free Reinforcement Learning
AU - Ghosh, Bijoy
AU - Peng, Zhinan
N1 - Funding Information:
This work is partially supported by National Science Foundation of China under Grants Nos. 61703060, 61473061, 61104104, the Opening Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2018zd02 and scsxdz2018zd04), the Fundamental Research Funds for the Central Universities, Southwest Minzu University (2019NQN07) and the Program for New Century Excellent Talents in University under Grant No. NCET-13-0091.
Publisher Copyright:
© 2019 Elsevier Inc.
PY - 2020
Y1 - 2020
N2 - In this paper, the optimal bipartite consensus control (OBCC) problem is investigated for unknown multi-agent systems (MASs) with coopetition networks. A novel distributed OBCC scheme is proposed based on a model-free reinforcement learning method, in which the agents' dynamics are not required. First, coopetition networks are applied to establish the cooperative and competitive interactions among agents, and the OBCC problem is then formulated by introducing local neighbor bipartite consensus errors and performance index functions (PIFs) for each agent. Second, to obtain the OBCC laws, a policy iteration algorithm (PIA) is employed to learn the solutions to the discrete-time (DT) Hamilton-Jacobi-Bellman (HJB) equations. Third, to implement the proposed methods, we adopt a data-driven actor-critic neural network (NN) framework to approximate the control laws and the PIFs, respectively, in an online learning manner. Finally, simulation results are given to demonstrate the effectiveness of the developed approaches.
AB - In this paper, the optimal bipartite consensus control (OBCC) problem is investigated for unknown multi-agent systems (MASs) with coopetition networks. A novel distributed OBCC scheme is proposed based on a model-free reinforcement learning method, in which the agents' dynamics are not required. First, coopetition networks are applied to establish the cooperative and competitive interactions among agents, and the OBCC problem is then formulated by introducing local neighbor bipartite consensus errors and performance index functions (PIFs) for each agent. Second, to obtain the OBCC laws, a policy iteration algorithm (PIA) is employed to learn the solutions to the discrete-time (DT) Hamilton-Jacobi-Bellman (HJB) equations. Third, to implement the proposed methods, we adopt a data-driven actor-critic neural network (NN) framework to approximate the control laws and the PIFs, respectively, in an online learning manner. Finally, simulation results are given to demonstrate the effectiveness of the developed approaches.
M3 - Article
JO - Applied Mathematics and Computation
JF - Applied Mathematics and Computation
ER -