Tablet computers have emerged as increasingly useful tools in medical education, particularly for assessment. However, it is not fully established whether tablet computers influence the quality and/or quantity of feedback provided in high-stakes assessments. It is also unclear how electronically recorded feedback relates to student performance. Our primary aim was to determine whether differences existed in feedback depending on the tool used to record it.

Methods: We compared quantitative and qualitative feedback between paper scoring sheets and iPads™ across two consecutive years of a final-year MBChB (UK medical degree) Objective Structured Clinical Examination. Quality of comments (using a validated five-point rating scale), number of examiner comments and number of words were compared across both methods of recording assessment performance using chi-squared analysis and independent t-tests. We also explored relationships between student performance (checklist and global scoring) and feedback.

Results: Data from 190 students (2850 paper-scored interactions) in 2015 and 193 students (2895 iPad™-scored interactions) in 2016 were analysed. Overall, more comments were given with the iPad™ than on paper (42% versus 20%; p < 0.001), but the quality of feedback did not differ significantly. For both written and electronic feedback, students with low global scores were more likely to receive comments (p < 0.001).

Conclusion: The use of iPads™ in high-stakes assessment increases the quantity of feedback compared with traditional paper scoring sheets. The quantity and quality of feedback for poorer-performing candidates (by global score) were also better with iPad™ feedback.
|Number of pages|5|
|Journal|Journal of the Royal College of Physicians of Edinburgh|
|Publication status|Published - 31 Dec 2018|
- Objective structured clinical examination
- Tablet computers