Clinical Text Summarization: Adapting Large
Language Models Can Outperform Human Experts


Dave Van Veen, Cara Van Uden, Louis Blankemeier, Jean-Benoit Delbrouck, Asad Aali,
Christian Bluethgen, Anuj Pareek, Malgorzata Polacin, William Collins, Neera Ahuja,
Curtis P. Langlotz, Jason Hom, Sergios Gatidis, John Pauly, Akshay Chaudhari

Stanford University





[Paper]
[BibTeX]
[Code]



Abstract

Sifting through vast textual data and summarizing key information imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy across diverse clinical summarization tasks has not yet been rigorously examined.

In this work, we employ domain adaptation methods on eight LLMs, spanning six datasets and four distinct summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not lead to improved results. Further, in a clinical reader study with six physicians, we demonstrate that summaries from the best adapted LLM are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis delineates challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences.

Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and other irreplaceable human aspects of medicine.






Model win rate: a head-to-head winning percentage of each model combination, where red/blue intensities highlight the degree to which models on the vertical axis outperform models on the horizontal axis. GPT-4 generally achieves the best performance. While FLAN-T5 is more competitive for syntactic metrics such as BLEU, we note this model is constrained to shorter context lengths (see Table 1). When aggregated across datasets, seq2seq models (FLAN-T5, FLAN-UL2) outperform open-source autoregressive models (Llama-2, Vicuna) on all metrics.
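For readers who want to reproduce this kind of comparison, the sketch below shows one way a pairwise win-rate matrix could be computed from per-sample metric scores. The function and variable names are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch of a head-to-head win-rate matrix. `scores[model]` is
# assumed to hold one metric value (e.g. BLEU) per evaluation sample, with
# all models scored on the same samples in the same order.
import numpy as np

def win_rate_matrix(scores: dict[str, list[float]]) -> np.ndarray:
    """Entry (i, j) = fraction of samples where model i scores higher than model j."""
    models = list(scores)
    mat = np.full((len(models), len(models)), np.nan)
    for i, a in enumerate(models):
        for j, b in enumerate(models):
            if i == j:
                continue
            wins = [x > y for x, y in zip(scores[a], scores[b])]
            mat[i, j] = float(np.mean(wins))
    return mat
```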





Quantitative metric (MEDCON) scores vs. number of in-context examples across models and datasets. We also include the best model fine-tuned with QLoRA (FLAN-T5) as a horizontal dashed line for applicable datasets. Zero-shot prompting (0 examples) often yields considerably inferior results, underscoring the need for adaptation methods. Note that the allowable number of in-context examples varies significantly with model context length and dataset size. See the paper for more details and results across other metrics (BLEU, ROUGE-L, BERTScore).
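As a rough illustration of the in-context learning setup, the snippet below builds an m-shot summarization prompt from (source text, reference summary) pairs. The instruction wording and formatting are assumptions for illustration, not the paper's exact prompt.

```python
# Illustrative m-shot prompt construction for clinical summarization.
def build_prompt(examples: list[tuple[str, str]], target_text: str, m: int) -> str:
    instruction = "Summarize the following clinical text."  # hypothetical instruction
    shots = examples[:m]  # m is limited by the model's context length
    parts = [instruction]
    for source, summary in shots:
        parts.append(f"Text: {source}\nSummary: {summary}")
    # The target text is appended last, leaving the summary for the model to complete.
    parts.append(f"Text: {target_text}\nSummary:")
    return "\n\n".join(parts)
```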





Clinical reader study. Top: Study design comparing the summarization of GPT-4 vs. that of human experts on three attributes: completeness, correctness, and conciseness. Bottom: Results. GPT-4 summaries are rated higher than human summaries on completeness for all three summarization tasks and on correctness overall. Radiology reports highlight a trade-off between correctness (better) and conciseness (worse) with GPT-4. Highlight colors correspond to a value’s location on the color spectrum. Asterisks denote statistical significance by Wilcoxon signed-rank test, *p-value < 0.05, **p-value << 0.001.
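A minimal sketch of the significance test mentioned above, assuming reader preferences are encoded on a five-point Likert scale with 3 as the neutral point (the encoding is our assumption, not taken from the paper):

```python
# Wilcoxon signed-rank test on reader preference scores.
from scipy.stats import wilcoxon

def preference_significance(likert_scores: list[int]) -> float:
    """Return the p-value for whether preferences deviate from neutrality."""
    diffs = [s - 3 for s in likert_scores]  # center the scale at the neutral point
    _, p_value = wilcoxon(diffs)
    return p_value
```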





Distribution of reader scores for each summarization task across evaluated attributes (completeness, correctness, conciseness). Horizontal axes denote reader preference between GPT-4 and human summaries as measured by a five-point Likert scale. Vertical axes denote frequency count, with 900 total reports for each plot. GPT-4 summaries are more often preferred in terms of correctness and completeness. While the largest gain in correctness occurs on radiology reports, this introduces a trade-off with conciseness.





Annotation of two radiology report examples from the reader study. In the top example, GPT-4 performs better due to a laterality mistake by the human expert. In the bottom example, GPT-4 exhibits a lack of conciseness. The table (lower left) contains reader scores for these two examples and the task average across all samples.





Spearman correlation coefficients between NLP metrics and reader preference assessing completeness, correctness, and conciseness. The semantic metric (BERTScore) and conceptual metric (MEDCON) correlate most highly with correctness. Meanwhile, syntactic metrics BLEU and ROUGE-L correlate most with completeness. Section 5.3 contains further description and discussion.
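As a toy illustration of this analysis, rank correlation between paired per-summary values of an NLP metric and a reader score can be computed with `scipy.stats.spearmanr`; the numbers below are placeholders, not data from the study.

```python
# Spearman rank correlation between an automated metric and reader scores.
from scipy.stats import spearmanr

metric_scores = [0.71, 0.64, 0.83, 0.59]   # e.g. BERTScore per summary (toy values)
reader_scores = [4, 3, 5, 2]               # e.g. correctness ratings (toy values)

rho, p_value = spearmanr(metric_scores, reader_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```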




Paper

D. Van Veen, C. Van Uden, L. Blankemeier,
J.B. Delbrouck, A. Aali, C. Bluethgen,
A. Pareek, M. Polacin, W. Collins,
N. Ahuja, C.P. Langlotz, J. Hom,
S. Gatidis, J. Pauly, A.S. Chaudhari
Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
2023. (hosted on arXiv)


[BibTeX]



Acknowledgements

We’re grateful to Narasimhan Balasubramanian and the Accelerate Foundation Models Academic Research (AFMAR) program at Microsoft, both of whom provided Azure OpenAI credits. Further compute support was provided by One Medical, which Asad Aali used during his summer internship. Curtis Langlotz is supported by NIH grants R01 HL155410, R01 HL157235, by AHRQ grant R18HS026886, by the Gordon and Betty Moore Foundation, and by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) under contract 75N92020C00021. Akshay Chaudhari receives support from NIH grants R01 HL167974, R01 AR077604, R01 EB002524, R01 AR079431, and P41 EB027060; from NIH contracts 75N92020C00008 and 75N92020C00021; and from GE Healthcare, Philips, and Amazon.




This template was originally created by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.