Abstract
Five inferential methods employed in single-case studies to compare a case to controls are examined; all of these make use of a t-distribution. It is shown that three of these ostensibly different methods are in fact strictly equivalent and are not fit for purpose; they are associated with grossly inflated Type I errors (these exceed even the error rate obtained when a case’s score is converted to a z score and the latter is used as a test statistic). When used as significance tests, the two remaining methods (Crawford and Howell’s method and a prediction interval method first used by Barton and colleagues) are also equivalent and achieve control of the Type I error rate (the two methods do, however, differ in other important respects). A number of broader issues also arise from the present findings, namely: (a) they underline the value of accompanying significance test results with the effect size for the difference between a case and controls, (b) they suggest that less care is often taken over statistical methods than over other aspects of single-case studies, and (c) they indicate that some neuropsychologists have a distorted conception of the nature of hypothesis testing in single-case research (it is argued that this may stem from a failure to distinguish between group studies and single-case studies).
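The abstract contrasts treating a case's z score as a test statistic with Crawford and Howell's (1998) modified t-test, which refers the statistic to a t-distribution on n − 1 degrees of freedom and so controls the Type I error rate with small control samples. A minimal sketch of that published formula follows; the function name and example data are illustrative, not taken from the paper:

```python
from statistics import mean, stdev
from math import sqrt

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) modified t-test comparing a single case
    to a small control sample.

    Returns (t, df), where t = (x* - M) / (S * sqrt((n + 1) / n))
    and df = n - 1. Unlike the naive z approach, which divides the
    case-control difference by S alone, the extra sqrt((n + 1) / n)
    factor treats the control statistics as sample estimates.
    """
    n = len(control_scores)
    m = mean(control_scores)
    s = stdev(control_scores)  # sample SD (n - 1 denominator)
    t = (case_score - m) / (s * sqrt((n + 1) / n))
    return t, n - 1

# Illustrative data: a case scoring 20 against five controls.
t, df = crawford_howell_t(20, [10, 12, 11, 13, 14])
```

The resulting t can be referred to a t-distribution with df degrees of freedom (e.g. via `scipy.stats.t.sf`) to obtain a p-value; the two-sided p doubles as an estimate of the abnormality of the case's score.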
Original language | English |
---|---|
Pages (from-to) | 1009-1016 |
Number of pages | 8 |
Journal | Cortex |
Volume | 48 |
Issue number | 8 |
Early online date | 23 Jul 2011 |
DOIs | |
Publication status | Published - Sept 2012 |
Keywords
- single-case methods
- case-controls design
- t-tests
- neuropsychological methods