TY - GEN
T1 - Deciding on an adjustment for multiplicity in IR experiments
AU - Boytsov, Leonid
AU - Belova, Anna
AU - Westfall, Peter
PY - 2013
Y1 - 2013
AB - We evaluate statistical inference procedures for small-scale IR experiments that involve multiple comparisons against a baseline. These procedures adjust for multiplicity by ensuring that the probability of observing at least one false positive in the experiment stays below a given threshold. We use only publicly available test collections and make our software available for download. In particular, we employ TREC runs as well as runs constructed from the Microsoft learning-to-rank (MSLR) data set. Our focus is on non-parametric statistical procedures, including the Holm-Bonferroni adjustment of permutation-test p-values, the MaxT permutation test, and permutation-based closed testing. In TREC-based simulations, these procedures retain 66% to 92% of individually significant results (i.e., those obtained without taking other comparisons into account); similar retention rates are observed in the MSLR simulations. For the largest evaluated query set size (6400 queries), procedures that adjust for multiplicity find at most 5% fewer true differences than unadjusted tests, while unadjusted tests produce many more false positives.
KW - Holm-Bonferroni
KW - MaxT
KW - Multiple comparisons
KW - Permutation test
KW - Randomization test
KW - Statistical significance
KW - T-test
UR - http://www.scopus.com/inward/record.url?scp=84883059175&partnerID=8YFLogxK
U2 - 10.1145/2484028.2484034
DO - 10.1145/2484028.2484034
M3 - Conference contribution
AN - SCOPUS:84883059175
SN - 9781450320344
T3 - SIGIR 2013 - Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval
SP - 403
EP - 412
BT - SIGIR 2013 - Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval
T2 - 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2013
Y2 - 28 July 2013 through 1 August 2013
ER -