Learning reward frequency over reward probability: A tale of two learning rules

Hilary J. Don, A. Ross Otto, Astin C. Cornwall, T. Davis, Darrell A. Worthy

Research output: Contribution to journal › Article › peer-review

Abstract

Learning about the expected value of choice alternatives associated with reward is critical for adaptive behavior. Although human choice preferences are affected by the presentation frequency of reward-related alternatives, this may not be captured by some dominant models of value learning, such as the delta rule. In this study, we examined whether reward learning is driven more by learning the probability of reward provided by each option or by how frequently each option has been rewarded, and assessed how well models based on average reward (e.g., the Delta model) and models based on cumulative reward (e.g., the Decay model) can account for choice preferences. In a binary-outcome choice task, participants selected between pairs of options that had reward probabilities of 0.65 (A) versus 0.35 (B) or 0.75 (C) versus 0.25 (D). Crucially, training included twice as many AB trials as CD trials, so that option A was associated with higher cumulative reward while option C gave higher average reward. Participants then decided between novel combinations of options (e.g., AC). Most participants preferred option A over C, a result predicted by the Decay model but not the Delta model. We also compared the Delta and Decay models to both simpler and more complex models that assumed additional mechanisms, such as representation of uncertainty. Overall, models that assume learning about cumulative reward provided the best account of the data.
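
The Delta and Decay rules referenced in the abstract differ in what they track: the delta rule nudges an average-reward estimate toward each obtained outcome, whereas the decay rule maintains a cumulative reward tally that decays over trials. The sketch below is a minimal illustration of that contrast under the AB/CD training structure described above; the learning rate, decay parameter, trial counts, and random-exploration choice policy are illustrative assumptions, not the authors' fitted models.

# Minimal sketch (not the authors' implementation) contrasting the delta rule
# (average reward) with the decay rule (decaying cumulative reward) on the
# AB/CD training structure described in the abstract. Parameter values
# (alpha, d), trial counts, and the random choice policy are assumptions.

import random

def simulate(alpha=0.3, d=0.9, n_cd=100, seed=0):
    rng = random.Random(seed)
    p = {"A": 0.65, "B": 0.35, "C": 0.75, "D": 0.25}
    delta = {k: 0.0 for k in p}   # delta rule: expected (average) reward per option
    decay = {k: 0.0 for k in p}   # decay rule: decaying cumulative reward per option

    # Twice as many AB trials as CD trials, per the training design.
    trials = ["AB"] * (2 * n_cd) + ["CD"] * n_cd
    rng.shuffle(trials)

    for pair in trials:
        choice = rng.choice(pair)                       # random exploration, for illustration
        r = 1.0 if rng.random() < p[choice] else 0.0    # binary reward outcome
        # Delta rule: move the chosen option's estimate toward the obtained reward.
        delta[choice] += alpha * (r - delta[choice])
        # Decay rule: all values decay each trial; the chosen option adds the reward.
        for k in decay:
            decay[k] *= d
        decay[choice] += r

    return delta, decay

delta, decay = simulate()
print("Delta:", {k: round(v, 2) for k, v in delta.items()})
print("Decay:", {k: round(v, 2) for k, v in decay.items()})

With these assumptions, the delta values approximate each option's reward probability (favoring C over A), while the decay values track cumulative reward and favor the more frequently presented and rewarded option A, mirroring the A-over-C preference reported above.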

Original language: English
Article number: 104042
Journal: Cognition
Volume: 193
DOIs
State: Published - Dec 2019

Keywords

  • Decay rule
  • Delta rule
  • Prediction error
  • Probability learning
  • Reinforcement learning
  • Reward frequency
