BACKGROUND No method of standard setting for objective structured clinical examinations (OSCEs) is perfect. Using scores aggregated across stations risks allowing students who are incompetent in some core skills to pass an examination, which may not be acceptable for high-stakes assessments.
AIM To assess the feasibility of using a factor analysis of station scores in a high-stakes OSCE to derive measures of underlying competencies.
METHODS A 12-station OSCE was administered to all 192 students in the penultimate undergraduate year at the University of Aberdeen Medical School. Analysis of the correlation table of station scores was used to exclude stations performing unreliably. Factor analysis of the remaining station scores was carried out to characterise the underlying competencies being assessed. Factor scores were used to derive pass/fail cut-off scores for the examination.
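The screening-and-factoring pipeline described above can be sketched as follows. This is a hypothetical illustration, not the authors' analysis: the simulated scores, the mean-correlation screening threshold, and the use of scikit-learn's `FactorAnalysis` are all assumptions for demonstration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated data standing in for the real exam: 192 students x 12 stations,
# generated from 3 latent skills plus noise (all values are made up).
n_students, n_stations = 192, 12
latent = rng.normal(size=(n_students, 3))
loadings = rng.uniform(0.3, 0.9, size=(3, n_stations))
scores = latent @ loadings + rng.normal(scale=0.8, size=(n_students, n_stations))

# Step 1: inspect the correlation table and exclude stations that behave
# unpredictably -- here flagged as a low mean correlation with the other
# stations (the 0.2 threshold is an illustrative assumption).
corr = np.corrcoef(scores, rowvar=False)
mean_corr = (corr.sum(axis=0) - 1) / (n_stations - 1)
keep = mean_corr >= 0.2
retained = scores[:, keep]

# Step 2: factor-analyse the retained stations to characterise the
# underlying competencies; each student gets one score per factor.
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(retained)

print(f"stations retained: {keep.sum()}")
print(f"factor score matrix: {factor_scores.shape}")
```

In practice the exclusion step would rest on inspecting the full correlation table and the station content, as the paper describes, rather than a single numeric threshold.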
RESULTS Four stations were identified as having unpredicted variations in station scores. Analysis of the content of these stations allowed the underlying problems with the station designs to be isolated. Factor analysis of the remaining 8 stations revealed 3 main underlying factors, accounting for 53% of the total variance in scores. These were labelled 'examination skills', 'communication skills' and 'history taking skills'.
CONCLUSION Factor analysis is a useful tool for characterising and quantifying the skills that are assessed in an OSCE. Standard setting procedures can be used to calculate cut-off scores for each underlying factor.
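A minimal sketch of applying per-factor cut-offs, assuming factor scores have already been computed. The "one standard deviation below the cohort mean" rule is an invented placeholder, not a standard-setting procedure from the study; the point is only that each factor is judged separately rather than as an aggregate.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in factor scores: 192 students x 3 factors (simulated).
factor_scores = rng.normal(size=(192, 3))

# Hypothetical cut-off on each factor; a student must clear every
# factor's cut-off, so weakness in one skill cannot be masked by
# strength in another.
cutoffs = factor_scores.mean(axis=0) - factor_scores.std(axis=0)
passed = (factor_scores >= cutoffs).all(axis=1)

print(f"pass rate: {passed.mean():.0%}")
```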
- education, medical, undergraduate, standards
- educational measurement
- factor analysis, statistical
- clinical competence, standards
- structured clinical examination