Input-Output Data-Based Output Antisynchronization Control of Multi-Agent Systems Using Reinforcement Learning Approach

Zhinan Peng, Yiyi Zhao, Jiangping Hu, Rui Luo, Bijoy Kumar Ghosh, Sing Kiong Nguang

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates the output antisynchronization problem of multi-agent systems using an input-output data-based reinforcement learning approach. To date, most existing results on antisynchronization problems have required full state information and exact system dynamics in the controller design, which are often unavailable in practical scenarios. To address this issue, a new system representation is constructed using only the available input/output data from the multi-agent system. A novel value iteration (VI) algorithm is then proposed to compute the optimal control laws for the agents, and a convergence analysis is presented for the proposed algorithm. To implement the data-based controllers, an actor-critic (AC) network structure is established that learns the optimal control laws without requiring knowledge of the agent dynamics. An incremental weight-updating rule is proposed to improve the learning performance. Finally, simulation results demonstrate the effectiveness of the proposed antisynchronization control strategy.
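The paper's VI algorithm operates on a representation built from input/output data and is not reproduced in this record. As a purely illustrative sketch of the value-iteration principle it builds on (fixed-point iteration of a Bellman operator with a convergence check), the following applies classical VI to a toy finite MDP; all dynamics, rewards, and parameters below are hypothetical and not taken from the paper:

```python
import numpy as np

# Toy 3-state, 2-action MDP (hypothetical, for illustration only).
# P[a, s, s'] = transition probability; R[s, a] = immediate reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 1
])
R = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8, max_iter=10_000):
    """Iterate V_{k+1}(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V_k(s')]
    until the sup-norm change drops below tol (contraction guarantees this)."""
    V = np.zeros(R.shape[0])
    for _ in range(max_iter):
        # Q[s, a] from the current value estimate.
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
    return V, Q.argmax(axis=1)

V_star, pi_star = value_iteration(P, R, gamma)
```

At the returned fixed point, `V_star` satisfies the Bellman optimality equation up to the tolerance, which is the kind of property a convergence analysis of a VI scheme establishes; the paper's version differs in working with input-output data and continuous agent dynamics rather than a tabular MDP.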

Original language: English
Journal: IEEE Transactions on Industrial Informatics
State: Accepted/In press - 2021

Keywords

  • Artificial neural networks
  • Heuristic algorithms
  • Informatics
  • Input-output data
  • Multi-agent systems
  • Network topology
  • Optimal control
  • Synchronization
  • incremental actor-critic network
  • optimal antisynchronization
  • partially observable multi-agent systems
  • reinforcement learning
