TY - JOUR
T1 - Input-Output Data-Based Output Antisynchronization Control of Multiagent Systems Using Reinforcement Learning Approach
AU - Peng, Zhinan
AU - Zhao, Yiyi
AU - Hu, Jiangping
AU - Luo, Rui
AU - Ghosh, Bijoy Kumar
AU - Nguang, Sing Kiong
N1 - Funding Information:
Manuscript received August 19, 2020; revised November 22, 2020; accepted January 6, 2021. Date of publication January 12, 2021; date of current version July 26, 2021. This work was supported in part by the National Natural Science Foundation of China under Grant 61473061, Grant 61104104, and Grant 71503206, in part by the Program for New Century Excellent Talents in University under Grant NCET-13-0091, and in part by the Sichuan Science and Technology Program under Grant 2020YFSY0012. Paper no. TII-20-3974. (Corresponding author: Jiangping Hu.) Zhinan Peng, Jiangping Hu, and Rui Luo are with the School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: zhinanpeng@126.com; hjp_lzu@163.com; nicole9922@163.com).
Publisher Copyright:
© 2005-2012 IEEE.
PY - 2021/11
Y1 - 2021/11
N2 - This article investigates an output antisynchronization problem of multiagent systems by using an input-output data-based reinforcement learning approach. To date, most existing results on antisynchronization problems have required full-state information and exact system dynamics in the controller design, both of which are often unavailable in practical scenarios. To address this issue, a new system representation is constructed using only the available input/output data from the multiagent system. Then, a novel value iteration algorithm is proposed to compute the optimal control laws for the agents, and a convergence analysis is presented for the proposed algorithm. To implement the data-based controllers, an actor-critic network structure is established to learn the optimal control laws without requiring knowledge of the agent dynamics. An incremental weight updating rule is proposed to improve the learning performance. Finally, simulation results are presented to demonstrate the effectiveness of the proposed antisynchronization control strategy.
AB - This article investigates an output antisynchronization problem of multiagent systems by using an input-output data-based reinforcement learning approach. To date, most existing results on antisynchronization problems have required full-state information and exact system dynamics in the controller design, both of which are often unavailable in practical scenarios. To address this issue, a new system representation is constructed using only the available input/output data from the multiagent system. Then, a novel value iteration algorithm is proposed to compute the optimal control laws for the agents, and a convergence analysis is presented for the proposed algorithm. To implement the data-based controllers, an actor-critic network structure is established to learn the optimal control laws without requiring knowledge of the agent dynamics. An incremental weight updating rule is proposed to improve the learning performance. Finally, simulation results are presented to demonstrate the effectiveness of the proposed antisynchronization control strategy.
KW - Incremental actor-critic (AC) network
KW - Input-output data
KW - Optimal antisynchronization
KW - Partially observable multiagent systems
KW - Reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85099542606&partnerID=8YFLogxK
U2 - 10.1109/TII.2021.3050768
DO - 10.1109/TII.2021.3050768
M3 - Article
AN - SCOPUS:85099542606
VL - 17
SP - 7359
EP - 7367
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
SN - 1551-3203
IS - 11
M1 - 9321152
ER -