<div class="csl-bib-body">
<div class="csl-entry">Zhang, J., Zhang, S., Shen, X., Lukasiewicz, T., & Xu, Z. (2024). Multi-ConDoS: Multimodal Contrastive Domain Sharing Generative Adversarial Networks for Self-Supervised Medical Image Segmentation. <i>IEEE Transactions on Medical Imaging</i>, <i>43</i>(1), 76–95. https://doi.org/10.1109/TMI.2023.3290356</div>
</div>
-
dc.identifier.issn
0278-0062
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/191938
-
dc.description.abstract
Existing self-supervised medical image segmentation usually encounters the domain shift problem (i.e., the input distribution of pre-training differs from that of fine-tuning) and/or the multimodality problem (i.e., it is based on single-modal data only and cannot exploit the rich multimodal information of medical images). To solve these problems, we propose multimodal contrastive domain sharing (Multi-ConDoS) generative adversarial networks for effective multimodal contrastive self-supervised medical image segmentation. Compared to existing self-supervised approaches, Multi-ConDoS has three advantages: (i) it utilizes multimodal medical images to learn more comprehensive object features via multimodal contrastive learning; (ii) domain translation is achieved by integrating the cyclic learning strategy of CycleGAN with the cross-domain translation loss of Pix2Pix; and (iii) novel domain-sharing layers are introduced to learn not only domain-specific but also domain-sharing information from the multimodal medical images. Extensive experiments on two publicly available multimodal medical image segmentation datasets show that, with only 5% (resp., 10%) of labeled data, Multi-ConDoS not only greatly outperforms state-of-the-art self-supervised and semi-supervised medical image segmentation baselines using the same ratio of labeled data, but also achieves performance similar to (and sometimes better than) that of fully supervised segmentation methods with 50% (resp., 100%) of labeled data, demonstrating that our approach achieves superior segmentation performance with a very low labeling workload. Furthermore, ablation studies confirm that all three improvements are effective and essential for Multi-ConDoS to achieve this performance.
en
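The abstract's advantage (ii) combines the cycle-consistency idea of CycleGAN (translating A→B→A should recover the input) with the paired cross-domain translation loss of Pix2Pix (the translated image should match its paired target). A minimal NumPy sketch of how such an objective can be assembled is below; the function names, L1 distance, and loss weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, used for both loss terms below.
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x, g_ab, g_ba):
    # CycleGAN-style term: translating A -> B -> A should recover the input.
    return l1(g_ba(g_ab(x)), x)

def translation_loss(x, y, g_ab):
    # Pix2Pix-style paired term: the translation of x should match its paired y.
    return l1(g_ab(x), y)

def combined_translation_objective(x, y, g_ab, g_ba,
                                   lambda_cyc=10.0, lambda_trans=1.0):
    # Hypothetical combined objective in the spirit of advantage (ii);
    # the weights are illustrative, not taken from the paper.
    return (lambda_cyc * (cycle_consistency_loss(x, g_ab, g_ba)
                          + cycle_consistency_loss(y, g_ba, g_ab))
            + lambda_trans * (translation_loss(x, y, g_ab)
                              + translation_loss(y, x, g_ba)))
```

With identity generators and identical paired images the objective is zero, which is the intended fixed point of both loss terms; the adversarial GAN terms of the full method are omitted here for brevity.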
dc.language.iso
en
-
dc.publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
-
dc.relation.ispartof
IEEE Transactions on Medical Imaging
-
dc.subject
Self-supervised learning
en
dc.subject
multi-modal medical image segmentation
en
dc.subject
contrastive learning
en
dc.subject
domain translation
en
dc.subject
domain sharing
en
dc.title
Multi-ConDoS: Multimodal Contrastive Domain Sharing Generative Adversarial Networks for Self-Supervised Medical Image Segmentation
en
dc.type
Article
en
dc.type
Artikel
de
dc.contributor.affiliation
Hebei University of Technology, China
-
dc.contributor.affiliation
Hebei University of Technology, China
-
dc.contributor.affiliation
King Abdullah University of Science and Technology, Saudi Arabia