TY - GEN
T1 - Data-Driven Reinforcement Learning for Walking Assistance Control of a Lower Limb Exoskeleton with Hemiplegic Patients
AU - Peng, Zhinan
AU - Luo, Rui
AU - Huang, Rui
AU - Hu, Jiangping
AU - Shi, Kecheng
AU - Cheng, Hong
AU - Ghosh, Bijoy Kumar
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Lower limb exoskeleton (LLE) has received considerable interest in strength augmentation, rehabilitation, and walking assistance scenarios. For walking assistance, the LLE is expected to be capable of controlling the affected leg to track the unaffected leg's motion naturally. An important issue in this scenario is that the exoskeleton system needs to deal with unpredictable disturbances from the patient, which requires the controller of the exoskeleton system to adapt to different wearers. This paper proposes a novel Data-Driven Reinforcement Learning (DDRL) control strategy to adapt to different hemiplegic patients with unpredictable disturbances. In the proposed DDRL strategy, the interaction between the two lower limbs of the LLE and the legs of the hemiplegic patient is modeled in the context of a leader-follower framework. The walking assistance control problem is transformed into an optimal control problem. Then, a policy iteration (PI) algorithm is introduced to learn the optimal controller. To achieve online adaptive control for different patients, an Actor-Critic Neural Network (ACNN) technique from reinforcement learning (RL), based on the PI algorithm, is employed in the proposed DDRL. We conduct experiments both in a simulation environment and on a real LLE system. Experimental results demonstrate that the proposed control strategy has strong robustness against disturbances and adaptability to different pilots.
AB - Lower limb exoskeleton (LLE) has received considerable interest in strength augmentation, rehabilitation, and walking assistance scenarios. For walking assistance, the LLE is expected to be capable of controlling the affected leg to track the unaffected leg's motion naturally. An important issue in this scenario is that the exoskeleton system needs to deal with unpredictable disturbances from the patient, which requires the controller of the exoskeleton system to adapt to different wearers. This paper proposes a novel Data-Driven Reinforcement Learning (DDRL) control strategy to adapt to different hemiplegic patients with unpredictable disturbances. In the proposed DDRL strategy, the interaction between the two lower limbs of the LLE and the legs of the hemiplegic patient is modeled in the context of a leader-follower framework. The walking assistance control problem is transformed into an optimal control problem. Then, a policy iteration (PI) algorithm is introduced to learn the optimal controller. To achieve online adaptive control for different patients, an Actor-Critic Neural Network (ACNN) technique from reinforcement learning (RL), based on the PI algorithm, is employed in the proposed DDRL. We conduct experiments both in a simulation environment and on a real LLE system. Experimental results demonstrate that the proposed control strategy has strong robustness against disturbances and adaptability to different pilots.
KW - Actor-Critic Neural Network
KW - Data-driven Control
KW - Hemiplegic Patients
KW - Leader-Follower Multi-Agent System
KW - Lower Limb Exoskeleton
KW - Reinforcement Learning
UR - http://www.scopus.com/inward/record.url?scp=85092748766&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9197229
DO - 10.1109/ICRA40945.2020.9197229
M3 - Conference contribution
AN - SCOPUS:85092748766
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 9065
EP - 9071
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -