Reducing annotation burden in MR: A novel MR-contrast guided contrastive learning approach for image segmentation.

Source: PubMed

Authors:

Umapathy L, Brown T, Mushtaq R, Greenhill M, Lu J, Martin D, Altbach M, Bilgin A


Abstract:

Contrastive learning, a successful form of representational learning, has shown promising results in pretraining deep learning (DL) models for downstream tasks. When working with limited annotated data, as in medical image segmentation tasks, learning domain-specific local representations can further improve the performance of DL models. In this work, we extend the contrastive learning framework to utilize domain-specific contrast information from unlabeled Magnetic Resonance (MR) images to improve the performance of downstream MR image segmentation tasks in the presence of limited labeled data. The contrast in MR images is controlled by underlying tissue properties (e.g., T1 or T2) and image acquisition parameters. We hypothesize that learning to discriminate local representations based on underlying tissue properties should improve subsequent segmentation tasks on MR images.

We propose a novel constrained contrastive learning (CCL) strategy that uses tissue-specific information, via a constraint map, to define positive and negative local neighborhoods for contrastive learning, embedding this information in the representational space during pretraining. For a given MR contrast image, the proposed strategy uses local signal characteristics (the constraint map) across a set of related multi-contrast MR images as a surrogate for the underlying tissue information.

We demonstrate the utility of the approach on two downstream applications: (1) multi-organ segmentation in T2-weighted images, where a DL model learns T2 information with constraint maps derived from a set of 2D multi-echo T2-weighted images (n = 101), and (2) tumor segmentation in multi-parametric images from the public brain tumor segmentation (BraTS) dataset (n = 80), where DL models learn T1 and T2 information from multi-parametric BraTS images. Performance is evaluated on downstream multi-label segmentation tasks with limited data in (1) T2-weighted images of the abdomen from an in-house Radial-T2 dataset (Train/Test = 30/20), (2) a public Cartesian-T2 dataset (Train/Test = 6/12), and (3) multi-parametric MR images from the public BraTS dataset (Train/Test = 40/50). The performance of the proposed CCL strategy is compared to state-of-the-art self-supervised contrastive learning techniques. For each task, a model is also trained using all available labeled data to establish supervised baseline performance.

The proposed CCL strategy consistently yielded improved Dice scores, Precision, and Recall, and reduced HD95 values across all segmentation tasks. We also observed performance comparable to the baseline with reduced annotation effort. The t-SNE visualization of features for T2-weighted images demonstrates the strategy's ability to embed T2 information in the representational space. On the BraTS dataset, we also observed that using an appropriate multi-contrast space to learn T1+T2, T1, or T2 information during pretraining further improved the performance of tumor segmentation tasks.

Learning to embed tissue-specific information that controls MR image contrast with the proposed constrained contrastive learning improved the performance of DL models on subsequent segmentation tasks compared to conventional self-supervised contrastive learning techniques. The use of such domain-specific local representations could aid understanding, improve performance, and help mitigate the scarcity of labeled data in MR image segmentation tasks.
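As a rough illustration of the idea described in the abstract, the sketch below shows one way a constraint map could gate positive and negative pairs in a pixel-level InfoNCE-style loss. This is not the authors' implementation: the function name, the sampling scheme, and the parameters (constrained_contrastive_loss, n_anchors, tissue_tol, temperature) are assumptions for illustration, and the paper's actual constraint-map construction from multi-echo or multi-parametric images is not reproduced here.

```python
# Minimal sketch (assumptions only, not the paper's code): pixel locations whose
# constraint-map values are close are treated as positives, all others as negatives.
import torch
import torch.nn.functional as F


def constrained_contrastive_loss(features, constraint_map,
                                 n_anchors=256, tissue_tol=0.05,
                                 temperature=0.1):
    """InfoNCE-style loss over randomly sampled pixel locations.

    features:       (C, H, W) per-pixel embeddings from the encoder (one image).
    constraint_map: (H, W) surrogate tissue values (e.g., a T2 estimate), scaled to [0, 1].
    """
    C, H, W = features.shape
    feats = F.normalize(features.view(C, -1), dim=0)   # L2-normalize each pixel embedding
    cmap = constraint_map.view(-1)                     # (H*W,)

    # Randomly sample anchor locations.
    idx = torch.randperm(H * W, device=features.device)[:n_anchors]
    n = idx.numel()
    anchor_feats = feats[:, idx]                       # (C, n)
    anchor_cmap = cmap[idx]                            # (n,)

    # Pairwise cosine similarities among anchors, and a constraint-based positive mask.
    sim = anchor_feats.t() @ anchor_feats / temperature                     # (n, n)
    pos_mask = (anchor_cmap[:, None] - anchor_cmap[None, :]).abs() < tissue_tol
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = pos_mask & ~self_mask

    # For each anchor, contrast its positives against all other sampled locations.
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos
    return loss.mean()
```

In a pretraining loop, a loss of this kind would be computed on the encoder's feature maps with the constraint map precomputed from the related multi-contrast images; the simple threshold-based positive definition above is just one plausible choice for defining "positive local neighborhoods" and may differ from the formulation used in the paper.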


DOI:

10.1002/mp.16820

Cited by:

0

Year:

2024
