OBJECTIVE: The sequential objective structured clinical examination (OSCE) is a variation of the traditional OSCE in which all students sit a screening test. Those who pass this initial assessment undergo no further testing, while weakly performing students sit an additional (sequential) test to determine their overall pass/fail status. Our aim was to determine the outcomes of adopting a sequential OSCE approach using different numbers of screening stations and different pass marks.
METHOD: We carried out a retrospective, observational study of anonymised databases of student outcomes from two cohorts of the final OSCE examination at the University of Aberdeen Medical School. Data were accessed for students (n = 388) who sat the exam in 2013 and 2014. We used Stata's simulate program to compare outcomes - in terms of sensitivity and specificity - across 5000 random selections of 6-14 OSCE stations in randomly selected groups of 100 students (with different screening-test pass marks) against those obtained across all 15 stations.
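The resampling logic described above can be sketched as follows. This is a hypothetical re-implementation in Python rather than the study's actual Stata simulate code, and the station scores, cut scores, and replication counts below are illustrative assumptions, not the Aberdeen cohort data. The "true" pass/fail status is taken from the full 15-station total, and a screening decision on a random subset of stations is scored against it.

```python
import random

def screen_performance(scores, n_screen, full_cut, screen_cut,
                       n_reps=1000, group=100, seed=1):
    """Estimate mean sensitivity/specificity of an n_screen-station
    screening test against the pass/fail decision on all stations,
    over repeated random selections of stations and students."""
    rng = random.Random(seed)
    n_students, n_stations = len(scores), len(scores[0])
    sens, spec = [], []
    for _ in range(n_reps):
        students = rng.sample(range(n_students), group)
        stations = rng.sample(range(n_stations), n_screen)
        tp = fn = tn = fp = 0
        for s in students:
            fails_full = sum(scores[s]) < full_cut               # "true" fail
            flagged = sum(scores[s][j] for j in stations) < screen_cut
            if fails_full and flagged:
                tp += 1          # failing student correctly flagged
            elif fails_full:
                fn += 1          # failing student missed by screen
            elif flagged:
                fp += 1          # passing student flagged unnecessarily
            else:
                tn += 1          # passing student correctly cleared
        if tp + fn:
            sens.append(tp / (tp + fn))
        if tn + fp:
            spec.append(tn / (tn + fp))
    return sum(sens) / len(sens), sum(spec) / len(spec)

# Synthetic demo: 388 students x 15 stations, scores ~ N(12, 3) per station.
rng = random.Random(42)
scores = [[rng.gauss(12, 3) for _ in range(15)] for _ in range(388)]
sens, spec = screen_performance(scores, n_screen=8,
                                full_cut=165, screen_cut=88, n_reps=200)
```

In a real analysis the screening cut score would be derived from standard-setting data (e.g. a borderline-regression mark plus a chosen number of SEMs) rather than the simple proportional scaling used in this demo.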
RESULTS: Across 6-14 stations, sensitivity was ≥87% in 2013 and ≥84% in 2014, while specificity ranged from 60% to 100% in both years. Specificity generally increased as the number of screening stations increased (with concomitant narrowing of the 95% confidence interval), while sensitivity varied between 84% and 98%. Similar sensitivities and specificities were found with screening pass marks set at +1, +2 and +3 standard errors of measurement (SEM). Eight screening stations appeared to be a reasonable compromise, offering both high sensitivity (88-89%) and specificity (83-86%).
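To make the +1/+2/+3 SEM pass marks concrete, the sketch below shows the standard computation of the SEM from a test's score standard deviation and reliability coefficient, and the resulting family of screening pass marks. All numeric values (sd, reliability, cut score) are hypothetical placeholders, not the study's exam statistics.

```python
import math

# Illustrative values only - not taken from the Aberdeen OSCE data.
sd = 8.0            # standard deviation of total screening-test scores
reliability = 0.75  # reliability coefficient (e.g. Cronbach's alpha)
cut_score = 80.0    # base pass mark from standard setting

# Standard formula: SEM = SD * sqrt(1 - reliability)
sem = sd * math.sqrt(1 - reliability)                    # 4.0 here

# Screening pass marks raised by k SEMs above the base cut score,
# so that only confidently passing students skip the sequential test.
pass_marks = {k: cut_score + k * sem for k in (1, 2, 3)}
# -> {1: 84.0, 2: 88.0, 3: 92.0}
```

Raising the screening cut by whole SEMs trades specificity for sensitivity: a higher mark sends more borderline students to the sequential test, reducing the chance that a truly failing student escapes further assessment.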
CONCLUSION: This research extends the current sequential OSCE literature, using a novel and robust approach to identify the "ideal" number of screening stations and pass mark. We discuss the educational and resource implications of our findings and make recommendations for the use of the sequential OSCE in medical education.