Abstract
Automatic summarisation has the potential to aid physicians in streamlining clerical tasks such as note taking, but it is notoriously difficult to evaluate these systems and demonstrate that they are safe to use in a clinical setting. To circumvent this issue, we propose a semi-automatic approach whereby physicians post-edit generated notes before submitting them. We conduct a preliminary study on the time savings from post-editing automatically generated consultation notes. Our evaluators are asked to listen to mock consultations and to post-edit three generated notes. We time this and find that post-editing is faster than writing the note from scratch. We present insights and lessons learnt from this experiment.
Original language | English |
---|---|
Title of host publication | Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval) |
Subtitle of host publication | EACL 2021 |
Editors | Anya Belz, Shubham Agarwal, Yvette Graham, Ehud Reiter, Anastasia Shimorina |
Publisher | ACL |
Pages | 62-68 |
Number of pages | 7 |
ISBN (Print) | 978-1-954085-10-7 |
Publication status | Published - 19 Apr 2021 |
Event | Workshop on Human Evaluation of NLP Systems (virtual). Duration: 19 Apr 2021 → 19 Apr 2021. https://www.virtual2021.eacl.org/workshop_WS-5.html |
Workshop
Workshop | Workshop on Human Evaluation of NLP Systems |
---|---|
Period | 19/04/21 → 19/04/21 |
Internet address | https://www.virtual2021.eacl.org/workshop_WS-5.html |