Input-Output Data-Based Output Antisynchronization Control of Multiagent Systems Using Reinforcement Learning Approach

Zhinan Peng, Yiyi Zhao, Jiangping Hu, Rui Luo, Bijoy Kumar Ghosh, Sing Kiong Nguang

Research output: Contribution to journal › Article › peer-review


Abstract

This article investigates an output antisynchronization problem of multiagent systems by using an input-output data-based reinforcement learning approach. To date, most existing results on antisynchronization problems have required full-state information and exact system dynamics in the controller design, which are often unavailable in practical scenarios. To address this issue, a new system representation is constructed using only the available input/output data from the multiagent system. Then, a novel value iteration algorithm is proposed to compute the optimal control laws for the agents, and a convergence analysis of the proposed algorithm is presented. In the implementation of the data-based controllers, an actor-critic network structure is established to learn the optimal control laws without requiring knowledge of the agent dynamics. An incremental weight updating rule is proposed to improve the learning performance. Finally, simulation results are presented to demonstrate the effectiveness of the proposed antisynchronization control strategy.
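The abstract describes a value iteration scheme realized through an actor-critic structure with incremental weight updates, driven only by measured input/output data. As a rough, generic illustration of that class of method (not the paper's algorithm), the sketch below runs an incremental, data-driven value iteration with a quadratic critic and a linear actor on a toy scalar plant; the plant parameters, basis functions, learning rates, and variable names are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown plant, used only to generate input/output data; the learner
# never reads (a, b) directly. (Toy assumption, not the paper's setup.)
a, b = 0.9, 0.5
Q, R = 1.0, 0.1            # stage cost  Q*y^2 + R*u^2
gamma = 0.95               # discount factor

# Critic: action-value approximated as  Qhat(y, u) = w . [y^2, 2*y*u, u^2].
w = np.array([1.0, 0.0, 1.0])
# Actor: linear output feedback  u = k * y.
k = 0.0
alpha_c, alpha_a = 0.02, 0.05   # incremental learning rates

y = 1.0
for step in range(20000):
    # Exploratory input from the current actor plus probing noise.
    u = k * y + 0.2 * rng.standard_normal()
    y_next = a * y + b * u                      # only the output is measured

    # Greedy input and value at y_next implied by the current critic weights.
    k_greedy = -w[1] / max(w[2], 1e-3)
    u_greedy = k_greedy * y_next
    v_next = w @ np.array([y_next**2, 2 * y_next * u_greedy, u_greedy**2])

    # Critic: incremental value-iteration (Bellman target) step.
    phi = np.array([y**2, 2 * y * u, u**2])
    target = Q * y**2 + R * u**2 + gamma * v_next
    w += alpha_c * (target - w @ phi) * phi

    # Actor: incremental move toward the critic's current greedy gain.
    k += alpha_a * (k_greedy - k)

    y = float(np.clip(y_next, -3.0, 3.0))

print("learned output-feedback gain:", k)
```

The sketch keeps the two ingredients the abstract highlights: the critic is updated incrementally toward a Bellman target computed purely from measured data, and the actor gain is nudged toward the critic's greedy policy rather than being recomputed from a model.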

Original language: English
Article number: 9321152
Pages (from-to): 7359-7367
Number of pages: 9
Journal: IEEE Transactions on Industrial Informatics
Volume: 17
Issue number: 11
DOIs
State: Published - Nov 2021

Keywords

  • Incremental actor-critic (AC) network
  • Input-output data
  • Optimal antisynchronization
  • Partially observable multiagent systems
  • Reinforcement learning (RL)
