Development of an in-vivo metric to aid visual inspection of single-case design data: Do we need to run more sessions?

Lucy Barnard-Brak, David M. Richman, Todd D. Little, Zhanxia Yang

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Comparing visual inspection results of graphed data reveals inconsistencies in the interpretation of the same graph among single-case experimental design (SCED) researchers and practitioners. Although several investigators have disseminated structured criteria and visual inspection aids or strategies, inconsistencies in interpreting graphed data persist even among individuals considered experts at interpreting SCED graphs. We propose a fail safe k metric that can be used in conjunction with visual inspection and can be applied in-vivo, after each additional data point is collected within a phase, to determine the optimal point in time to shift between phases (e.g., from baseline to treatment). Preliminary proof-of-concept data are presented to demonstrate the potential utility of the fail safe k metric with a sample of previously published SCED graphs examining the effects of noncontingent reinforcement on occurrences of problem behavior. Results showed that if the value of fail safe k is equal to or less than the number of sessions in the current phase, then the data path may not be stable and more sessions should be run before changing phases. We discuss the results in terms of using fail safe k as an additional aid for visual inspection of SCED data.
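The decision rule stated in the abstract lends itself to a short computational sketch. The Python sketch below is a minimal illustration only: the authors' exact within-phase computation is not given here, so it assumes a Rosenthal-style fail-safe calculation over per-session z scores. The function names, the z-score inputs, and the one-tailed .05 threshold (z = 1.645) are all illustrative assumptions, not the published method.

def fail_safe_k(z_scores, z_alpha=1.645):
    """Rosenthal-style fail-safe count (an assumption, not
    necessarily the authors' formula): the number of null
    (z = 0) sessions needed to pull the Stouffer combined z
    below the one-tailed .05 threshold."""
    z_sum = sum(z_scores)
    n = len(z_scores)
    # Stouffer: z_sum / sqrt(n + k) = z_alpha  ->  solve for k
    k = (z_sum / z_alpha) ** 2 - n
    return max(0.0, k)

def stable_enough_to_change_phase(z_scores):
    """Decision rule from the abstract: if fail safe k is equal
    to or less than the number of sessions in the current phase,
    the data path may not be stable, so run more sessions."""
    return fail_safe_k(z_scores) > len(z_scores)

# Hypothetical per-session z scores for a current baseline phase
baseline = [1.2, 1.4, 1.1, 1.5, 1.3]
print(fail_safe_k(baseline))                    # about 10.6
print(stable_enough_to_change_phase(baseline))  # True: k exceeds the 5 sessions

In vivo use would amount to re-running this check after each new session and shifting phases only once fail safe k exceeds the current session count.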

Original language: English
Pages (from-to): 8-15
Number of pages: 8
Journal: Behaviour Research and Therapy
Volume: 102
State: Published - Mar 2018

Keywords

  • Fail safe k
  • Fail safe n
  • Single-case experimental designs
  • Visual inspection aid
