SG-MuRCL: Smoothed Graph-Enhanced Multi-Instance Contrastive Learning for Robust Whole-Slide Image Classification

Open Access
Authors
Publication date: January 2026
Journal: Information (Switzerland)
Article number: 37
Volume 17, Issue 1
Number of pages: 22
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Multiple-Instance Learning (MIL) is a standard paradigm for classifying gigapixel Whole-Slide Images (WSIs). However, prominent models such as Attention-Based MIL (ABMIL) treat image patches as independent instances, ignoring their inherent spatial context. More advanced frameworks like MuRCL employ reinforcement learning for instance selection but do not explicitly enforce spatial coherence, often resulting in noisy localizations. Although Graph Neural Networks (GNNs), attention smoothing, and reinforcement learning (RL) are each powerful, state-of-the-art strategies for addressing these issues individually, their integration remains a significant challenge. This paper introduces SG-MuRCL, a framework that enhances MuRCL by, first, employing a GNN to explicitly model spatial relationships among patches, departing from ABMIL's independence assumption, and, second, incorporating an attention-smoothing operator to regularize the MIL aggregator, aiming to improve robustness by generating more coherent and clinically meaningful heatmaps. Empirical evaluation yielded an important finding: while the baseline MuRCL trained successfully, the integrated SG-MuRCL consistently collapsed into a trivial solution. This outcome shows that the theoretical synergy between GNNs, attention smoothing, and RL does not trivially translate into practice. The contribution of this work is therefore not a high-performing model, but a concrete demonstration of the scalability and stability challenges that arise when unifying these advanced paradigms.
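To illustrate the kind of aggregation the abstract describes, the sketch below combines ABMIL-style attention pooling with a graph-based smoothing step over a spatial patch-adjacency matrix. This is a minimal illustration under assumed interfaces, not the authors' implementation: the function name, the blending parameter `alpha`, and the simple neighbour-averaging smoother are all hypothetical stand-ins for whatever operator SG-MuRCL actually uses.

```python
import numpy as np

def smoothed_graph_attention_pool(H, A, V, w, alpha=0.5):
    """Hypothetical sketch of graph-smoothed attention MIL pooling.

    H : (N, d) patch feature matrix (one row per WSI patch)
    A : (N, N) binary spatial adjacency between patches
    V : (d, h), w : (h,) attention parameters (ABMIL-style, simplified)
    alpha : blend between raw and neighbour-smoothed attention weights
    """
    # ABMIL-style attention scores, softmax-normalised over patches
    scores = np.tanh(H @ V) @ w
    a = np.exp(scores - scores.max())
    a /= a.sum()
    # Attention smoothing: replace each weight by the mean over its neighbours
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    a_smooth = (A @ a[:, None] / deg).ravel()
    # Blend and renormalise so the weights remain a distribution
    a_mix = (1 - alpha) * a + alpha * a_smooth
    a_mix /= a_mix.sum()
    # Slide-level embedding is the attention-weighted sum of patch features
    return a_mix @ H, a_mix
```

A grid-neighbour adjacency (patches adjacent in the slide tiling) would be the natural choice for `A`; with `alpha=0`, the function reduces to plain ABMIL pooling.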
Document type Article
Language English
Published at https://doi.org/10.3390/info17010037