Non-Local Attention Improves Description Generation for Retinal Images
| Field | Value |
|---|---|
| Authors | |
| Publication date | 2022 |
| Book title | Proceedings, 2022 IEEE Winter Conference on Applications of Computer Vision |
| Book subtitle | 4-8 January 2022, Waikoloa, Hawaii |
| ISBN | |
| ISBN (electronic) | |
| Series | WACV |
| Event | 2022 IEEE/CVF Winter Conference on Applications of Computer Vision |
| Pages (from-to) | 3250-3259 |
| Publisher | Los Alamitos, California: Conference Publishing Services, IEEE Computer Society |
| Organisations | |
| Abstract | Automatically generating medical reports from retinal images is a difficult task in which an algorithm must generate semantically coherent descriptions for a given retinal image. Existing methods rely mainly on the input image to generate descriptions; however, many abstract medical concepts or descriptions cannot be generated from image information alone. In this work, we integrate additional information to help solve this task: we observe that, early in the diagnosis process, ophthalmologists usually write down a small set of keywords denoting important information, and these keywords are subsequently used to aid the creation of medical reports for a patient. Since such keywords commonly exist and are useful for generating medical reports, we incorporate them into automatic report generation. Because we have two types of inputs - expert-defined unordered keywords and images - effectively fusing features from these different modalities is challenging. To that end, we propose a new keyword-driven medical report generation method based on a non-local attention-based multi-modal feature fusion approach, TransFuser, which fuses features from different types of inputs via such attention. Our experiments show that the proposed method successfully captures the mutual information of keywords and image content. We further show that our keyword-driven generation model, reinforced by the TransFuser, is superior to baselines under the popular text evaluation metrics BLEU, CIDEr, and ROUGE. TransFuser GitHub: https://github.com/Jhhuangkay/Non-local-Attention-ImprovesDescription-Generation-for-Retinal-Images. |
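The fusion idea described in the abstract - letting keyword embeddings and image-region features attend to one another via non-local (scaled dot-product) attention - can be illustrated with a minimal NumPy sketch. This is not the authors' TransFuser implementation: the function name, the random stand-in projection matrices, and the feature shapes are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_fusion(image_feats, keyword_feats, seed=0):
    """Fuse image-region features with keyword embeddings via scaled
    dot-product (non-local) attention over their concatenation, so that
    every image region can attend to every keyword and vice versa.

    image_feats   : (R, d) array of R image-region features
    keyword_feats : (K, d) array of K unordered keyword embeddings
    returns       : (R + K, d) array of fused features
    """
    d = image_feats.shape[1]
    x = np.concatenate([image_feats, keyword_feats], axis=0)  # (R+K, d)
    # Stand-ins for the learned query/key/value projections; in a
    # trained model these would be learned weight matrices.
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # each position attends to all others
    return attn @ v                        # attention-weighted fusion
```

Because the attention operates over the concatenated sequence, the unordered nature of the keywords is not a problem: attention is permutation-invariant over its keys, which is one reason this style of fusion suits an unordered keyword set.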
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1109/WACV51458.2022.00331 |
| Published at | https://openaccess.thecvf.com/content/WACV2022/html/Huang_Non-Local_Attention_Improves_Description_Generation_for_Retinal_Images_WACV_2022_paper.html |
| Other links | https://www.proceedings.com/62669.html |
| Downloads | Huang_Non-Local_Attention_Improves_Description_Generation_for_Retinal_Images_WACV_2022_paper (Accepted author manuscript) |
| Permalink to this page | |
