Towards Automated Diagnosis with Attentive Multi-modal Learning Using Electronic Health Records and Chest X-Rays
| Authors | |
|---|---|
| Publication date | 2020 |
| Book title | Multimodal Learning for Clinical Decision Support and Clinical Image-Based Procedures |
| Book subtitle | 10th International Workshop, ML-CDS 2020, and 9th International Workshop, CLIP 2020, held in conjunction with MICCAI 2020, Lima, Peru, October 4–8, 2020: proceedings |
| ISBN | |
| ISBN (electronic) | |
| Series | Lecture Notes in Computer Science |
| Event | 10th International Workshop on Multimodal Learning for Clinical Decision Support |
| Pages (from-to) | 106-114 |
| Publisher | Cham: Springer |
| Organisations | |
| Abstract | Jointly learning from Electronic Health Records (EHR) and medical images is a promising area of research in deep learning for medical imaging. Using the context available in EHR together with medical images can lead to more efficient data usage, and recent work has shown that joint learning from EHR and medical images can indeed improve performance on several tasks. Current methods, however, still depend on clinician input. A fully automated method should use only prior patient information together with a medical image, without relying on further clinician input. In this paper we propose an automated multi-modal method that creates a joint feature representation from prior patient information in the EHR and the associated X-ray scan. This feature representation, which joins the two modalities through attention, leverages the contextual relationship between them. The method is applied to two tasks: diagnosis classification and free-text diagnosis generation. We show the benefit of the multi-modal approach over single-modality approaches on both tasks. |
| Document type | Conference contribution |
| Language | English |
| DOI | https://doi.org/10.1007/978-3-030-60946-7_11 |

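The attention-based fusion described in the abstract can be sketched as a single cross-modal attention step, where the EHR embedding attends over X-ray region features and the attended result is concatenated into a joint representation. This is a minimal illustration only, not the paper's implementation: all function names, projection matrices, and dimensions here are assumptions.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_fusion(ehr_vec, img_regions, W_q, W_k, W_v):
    """Fuse an EHR embedding with X-ray region features via
    scaled dot-product attention (illustrative sketch)."""
    q = ehr_vec @ W_q                            # query from EHR, shape (d,)
    k = img_regions @ W_k                        # keys from image regions, (R, d)
    v = img_regions @ W_v                        # values from image regions, (R, d)
    scores = k @ q / np.sqrt(q.shape[-1])        # one relevance score per region, (R,)
    weights = softmax(scores)                    # attention weights sum to 1
    attended = weights @ v                       # weighted image summary, (d,)
    # Joint representation: EHR features plus the attended image features.
    return np.concatenate([ehr_vec, attended])


# Toy example with random features (dimensions are arbitrary assumptions).
rng = np.random.default_rng(0)
d, regions = 8, 6
ehr = rng.normal(size=d)                  # prior patient information embedding
img = rng.normal(size=(regions, d))       # X-ray region features
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
joint = attention_fusion(ehr, img, W_q, W_k, W_v)
```

The joint vector `joint` would then feed a classification head or a text decoder, corresponding to the two tasks named in the abstract.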