A Novel Optimal Bipartite Consensus Control Scheme for Unknown Multi-Agent Systems via Model-Free Reinforcement Learning

Bijoy Ghosh, Zhinan Peng

Research output: Contribution to journal › Article › peer-review

72 Scopus citations

Abstract

In this paper, the optimal bipartite consensus control (OBCC) problem is investigated for unknown multi-agent systems (MASs) with coopetition networks. A novel distributed OBCC scheme based on a model-free reinforcement learning method is proposed, in which knowledge of the agents' dynamics is not required. First, coopetition networks are used to model the cooperative and competitive interactions among agents, and the OBCC problem is formulated by introducing a local neighbor bipartite consensus error and a performance index function (PIF) for each agent. Second, to obtain the OBCC laws, a policy iteration algorithm (PIA) is employed to learn the solutions of the discrete-time (DT) Hamilton-Jacobi-Bellman (HJB) equations. Third, to implement the proposed method, a data-driven actor-critic neural network (NN) framework is adopted to approximate the control laws and the PIFs, respectively, in an online learning manner. Finally, simulation results demonstrate the effectiveness of the developed approach.
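As a minimal illustration of the bipartite consensus setting described above, the sketch below simulates agents on a signed ("coopetition") graph, where positive edge weights model cooperation and negative weights model competition. Each agent updates using only its local neighbor bipartite consensus error, the same error signal on which the paper's optimal controller is built. The simple error-feedback law and the step size `eps` are assumptions for illustration only; the paper instead learns an optimal control law via policy iteration with actor-critic NNs.

```python
# Illustrative sketch (not the paper's algorithm): distributed bipartite
# consensus on a signed graph. The local neighbor bipartite consensus error is
#     delta_i = sum_j |a_ij| * (x_i - sign(a_ij) * x_j).

def bipartite_error(x, A, i):
    """Local neighbor bipartite consensus error of agent i."""
    err = 0.0
    for j, a in enumerate(A[i]):
        if a != 0:
            sgn = 1.0 if a > 0 else -1.0
            err += abs(a) * (x[i] - sgn * x[j])
    return err

def run(x, A, eps=0.1, steps=500):
    """Iterate a simple (non-optimal) error-feedback law; eps is assumed small."""
    x = list(x)
    for _ in range(steps):
        errs = [bipartite_error(x, A, i) for i in range(len(x))]
        x = [xi - eps * e for xi, e in zip(x, errs)]
    return x

# Structurally balanced signed graph: agents {0, 1} cooperate with each other,
# agents {2, 3} cooperate with each other, and the two groups compete.
A = [[0,  1, -1,  0],
     [1,  0,  0, -1],
     [-1, 0,  0,  1],
     [0, -1,  1,  0]]

final = run([1.0, 2.0, -0.5, 3.0], A)
# States in the two groups converge to values of equal magnitude and
# opposite sign, i.e. bipartite consensus.
```

For a structurally balanced graph such as this one, a gauge transformation turns the signed dynamics into ordinary consensus, which is why the two groups settle at opposite-signed values.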

Original language: English
Journal: Applied Mathematics and Computation
State: Published - 2020
