Consultation Checklists: Standardising the Human Evaluation of Medical Note Generation

Aleksandar Savkov, Francesco Moramarco, Alex Papadopoulos Korfiatis, Mark Perera, Anya Belz, Ehud Reiter

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Abstract

Evaluating automatically generated text is generally hard due to the inherently subjective nature of many aspects of the output quality. This difficulty is compounded in automatic consultation note generation by differing opinions between medical experts both about which patient statements should be included in generated notes and about their respective importance in arriving at a diagnosis. Previous real-world evaluations of note-generation systems saw substantial disagreement between expert evaluators. In this paper we propose a protocol that aims to increase objectivity by grounding evaluations in Consultation Checklists, which are created in a preliminary step and then used as a common point of reference during quality assessment. We observed good levels of inter-annotator agreement in a first evaluation study using the protocol; further, using Consultation Checklists produced in the study as reference for automatic metrics such as ROUGE or BERTScore improves their correlation with human judgements compared to using the original human note.
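The abstract describes using Consultation Checklists, rather than the original human-written note, as the reference text for automatic metrics and then measuring correlation with human judgements. The sketch below is a minimal illustration of that idea, not the authors' implementation: the notes, checklist strings, and ratings are placeholder values, and the metric libraries (rouge-score, bert-score, scipy) are assumed choices.

```python
# Minimal sketch: score generated notes against checklist-based references
# with ROUGE-L and BERTScore, then correlate the metric scores with human
# quality judgements. All data below are placeholders for illustration only.
from rouge_score import rouge_scorer
from bert_score import score as bert_score
from scipy.stats import spearmanr

generated_notes = [
    "Patient reports a dry cough for two weeks, no fever.",
    "Follow-up for hypertension; blood pressure stable on current medication.",
    "Complains of lower back pain after lifting heavy boxes.",
]
# Checklist items for each consultation, concatenated into one reference string.
checklist_refs = [
    "dry cough; two weeks duration; no fever",
    "hypertension follow-up; blood pressure stable; medication unchanged",
    "lower back pain; onset after lifting; no radiation to legs",
]
human_ratings = [4.0, 3.5, 2.5]  # hypothetical expert quality scores per note

# ROUGE-L F1 against the checklist reference for each note.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_f1 = [
    scorer.score(ref, note)["rougeL"].fmeasure
    for ref, note in zip(checklist_refs, generated_notes)
]

# BERTScore F1 against the same checklist references.
_, _, bert_f1 = bert_score(generated_notes, checklist_refs, lang="en")

# Spearman correlation of each automatic metric with the human judgements.
print("ROUGE-L vs human:", spearmanr(rouge_f1, human_ratings).correlation)
print("BERTScore vs human:", spearmanr(bert_f1.tolist(), human_ratings).correlation)
```

In the same way, the checklist references could be swapped for the original human note to compare which reference choice correlates better with expert ratings.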
Original language: English
Title of host publication: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Place of publication: Abu Dhabi, UAE
Publisher: Association for Computational Linguistics
Pages: 111-120
Number of pages: 10
Publication status: Published - 1 Dec 2022
