Limited interactions between space- and feature-based attention in visually sparse displays

Guangsheng Liang, Miranda Scolari

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Top-down visual attention selectively filters sensory input so relevant information receives preferential processing. Feature-based attention (FBA) enhances the representation of relevant low-level features, whereas space-based attention (SBA) enhances information at relevant location(s). The present study investigates whether the unique influences of SBA and FBA combine to facilitate behavior in a perceptually demanding discrimination task. We first demonstrated that, independently, both color and location pre-cues could effectively direct attention to facilitate perceptual decision making of a target. We then examined the combined effects of SBA and FBA in the same design by deploying a predictive color arrow pre-cue. Only SBA effects were observed in performance accuracy and reaction time. However, we detected a reaction time cost when a valid spatial cue was paired with a feature cue. A computational perceptual decision-making model largely provided converging evidence that contributions from FBA were restricted to facilitating the speed with which the relevant item was identified. Our results suggest that both selection mechanisms can be used in isolation to resolve a perceptually challenging target in a sparse display, but with little additive perceptual benefit when cued simultaneously. We conclude that there is at least some higher order interdependence between space-based and feature-based selection during decision making under specific conditions.
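The abstract's computational analysis relies on the EZ-diffusion model (listed in the keywords below), which recovers drift rate, boundary separation, and non-decision time from accuracy and response-time statistics in closed form. The following is a minimal sketch of those standard closed-form equations (Wagenmakers, van der Maas, & Grasman, 2007), not the authors' exact implementation; the function name and the default scaling parameter `s = 0.1` are illustrative conventions.

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Recover EZ-diffusion parameters from proportion correct (pc),
    RT variance (vrt, in s^2), and mean RT (mrt, in s).

    Returns (drift rate v, boundary separation a, non-decision time ter).
    Assumes 0 < pc < 1 and pc != 0.5; edge cases require a correction
    (e.g., the standard edge correction for perfect accuracy).
    """
    L = math.log(pc / (1 - pc))                      # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                 # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    ter = mrt - mdt                                  # non-decision time
    return v, a, ter
```

For example, a condition with 80% accuracy, an RT variance of 0.112 s², and a mean RT of 0.723 s yields a drift rate of about 0.10, a boundary separation of about 0.14, and a non-decision time of about 0.30 s.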

Original language: English
Article number: 5
Journal: Journal of Vision
Volume: 20
Issue number: 4
DOIs
State: Published - Apr 1 2020

Keywords

  • EZ-diffusion model
  • Feature-based attention
  • Perceptual decision making
  • Space-based attention
