Xu, Z., Liu, Y., Xu, G., & Lukasiewicz, T. (2025). Self-Supervised Medical Image Segmentation Using Deep Reinforced Adaptive Masking. IEEE Transactions on Medical Imaging, 44(1), 180–193. https://doi.org/10.1109/TMI.2024.3436608
E192-07 - Research Unit Artificial Intelligence Techniques; E192-03 - Research Unit Knowledge Based Systems
-
Journal:
IEEE Transactions on Medical Imaging
-
ISSN:
0278-0062
-
Date (published):
Jan-2025
-
Number of Pages:
14
-
Publisher:
IEEE - Institute of Electrical and Electronics Engineers, Inc.
-
Peer reviewed:
Yes
-
Keywords:
medical image segmentation; adaptive hard masking; masked image modeling
Abstract:
Self-supervised learning aims to learn transferable representations from unlabeled data for downstream tasks. Inspired by masked language modeling in natural language processing, masked image modeling (MIM) has achieved notable success in computer vision, but its effectiveness on medical images remains unsatisfactory. This is mainly because medical images exhibit higher redundancy and smaller discriminative regions than natural images. This paper therefore proposes an adaptive hard masking (AHM) approach based on deep reinforcement learning to extend MIM to medical images. Unlike predefined random masks, AHM uses an asynchronous advantage actor-critic (A3C) model to predict the reconstruction loss of each patch, enabling the model to learn where masking is valuable. By optimizing the non-differentiable sampling process with reinforcement learning, AHM improves the model's understanding of key regions and thereby its downstream task performance. Experimental results on two medical image datasets demonstrate that AHM outperforms state-of-the-art methods, and additional experiments under various settings validate its effectiveness in constructing masked images.
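
Illustration: the abstract describes scoring each patch with an actor-critic model and sampling a mask from those scores, with the reconstruction loss on the masked patches acting as the learning signal. The following is a minimal PyTorch sketch of that idea under stated assumptions; the names (PatchScorer, sample_mask), shapes, and the 0.75 mask ratio are illustrative and not taken from the paper.

# Hypothetical sketch of adaptive hard masking; not the authors' implementation.
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    """Actor-critic head over patch embeddings (assumed interface)."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.actor = nn.Linear(embed_dim, 1)   # per-patch score used as policy logits
        self.critic = nn.Linear(embed_dim, 1)  # state-value estimate for the A3C-style baseline

    def forward(self, patch_tokens: torch.Tensor):
        # patch_tokens: (B, N, D) patch embeddings from the encoder
        scores = self.actor(patch_tokens).squeeze(-1)              # (B, N)
        value = self.critic(patch_tokens.mean(dim=1)).squeeze(-1)  # (B,)
        return scores, value

def sample_mask(scores: torch.Tensor, mask_ratio: float = 0.75):
    """Sample which patches to mask from the policy. The sampling step is
    non-differentiable, which is why the abstract resorts to reinforcement
    learning instead of backpropagating through it."""
    B, N = scores.shape
    num_mask = int(mask_ratio * N)
    probs = torch.softmax(scores, dim=-1)
    idx = torch.multinomial(probs, num_mask, replacement=False)    # (B, num_mask)
    mask = torch.zeros(B, N, dtype=torch.bool, device=scores.device)
    mask.scatter_(1, idx, True)
    # Log-probability of the sampled mask, used in the policy-gradient term.
    log_prob = torch.log(probs.gather(1, idx) + 1e-8).sum(dim=1)
    return mask, log_prob

# Toy usage with assumed ViT-like dimensions (14x14 patches, 768-dim tokens):
tokens = torch.randn(2, 196, 768)
scorer = PatchScorer(768)
scores, value = scorer(tokens)
mask, log_prob = sample_mask(scores)
# A MIM decoder would reconstruct the masked patches; treating its per-image
# reconstruction loss as the reward gives an actor-critic update of the form
# policy_loss = -((reward - value.detach()) * log_prob).mean()
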