Inverse Reinforcement Learning Based Stochastic Driver Behavior Learning

Mehmet F. Ozkan, Abishek J. Rocque, Yao Ma

Research output: Contribution to journal › Conference article › peer-review

Abstract

Drivers exhibit unique and rich behaviors when operating vehicles in traffic. This paper presents a novel driver behavior learning approach that captures the uniqueness and richness of human driver behavior in realistic driving scenarios. A stochastic inverse reinforcement learning (SIRL) approach is proposed to learn a distribution of cost functions, which represents the richness of human driver behavior, from a given set of driver-specific demonstrations. Evaluations are conducted on realistic driving data collected from a 3D driver-in-the-loop driving simulation. The results show that the learned stochastic driver model is capable of expressing the richness of human driving strategies under different realistic driving scenarios. Compared to a deterministic baseline driver behavior model, the proposed stochastic driver behavior model better replicates the driver's unique and rich driving strategies across a variety of traffic conditions.
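The core idea of the abstract, learning a distribution over cost functions rather than a single deterministic cost, can be illustrated with a minimal sketch. The following is not the paper's SIRL algorithm: the feature names, the per-segment weight fit, and the summary statistics are all hypothetical assumptions chosen for illustration. A real IRL step would instead maximize the likelihood of the demonstrations under each candidate cost.

```python
import statistics

# Hypothetical driving-cost features (illustrative only, not from the paper).
FEATURES = ["speed_deviation", "headway_inverse", "accel_magnitude"]

def fit_weights(segment):
    """Toy per-segment cost-weight fit: normalize the average feature
    magnitudes so the weights sum to 1. A real IRL step would optimize
    the weights against the demonstrated trajectory instead."""
    avgs = [sum(step[f] for step in segment) / len(segment) for f in FEATURES]
    total = sum(avgs) or 1.0
    return [a / total for a in avgs]

def learn_weight_distribution(segments):
    """Learn a *distribution* of cost weights: fit one weight vector per
    demonstration segment, then summarize with per-feature mean and
    standard deviation. Sampling from this distribution yields varied,
    driver-like cost functions rather than a single deterministic one."""
    per_segment = [fit_weights(seg) for seg in segments]
    n = len(FEATURES)
    mean = [statistics.mean(w[i] for w in per_segment) for i in range(n)]
    std = [statistics.pstdev(w[i] for w in per_segment) for i in range(n)]
    return mean, std
```

Under this sketch, a driver whose segments produce widely varying weight vectors yields large per-feature standard deviations, which is one simple way to express the "richness" of a driving style that a single averaged cost function would flatten out.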

Original language: English
Pages (from-to): 882-888
Number of pages: 7
Journal: IFAC-PapersOnLine
Volume: 54
Issue number: 20
State: Published - Nov 1 2021
Event: 2021 Modeling, Estimation and Control Conference (MECC 2021), Austin, United States
Duration: Oct 24 2021 to Oct 27 2021

Keywords

  • Driver behavior modeling
  • Inverse reinforcement learning
