-
LSAM: L2-norm self-attention and latent space feature interaction for automatic 3D multi-modal head and neck tumor segmentation.
Objective. Head and neck (H&N) cancers are prevalent globally, and early, accurate detection is crucial for timely and effective treatment. However, segmentation of H&N tumors is challenging because tumors and the surrounding tissues have similar density in CT images. Positron emission tomography (PET) images provide information about tissue metabolic activity and can distinguish lesion regions from normal tissue, but they are limited by low spatial resolution. To fully leverage the complementary information in PET and CT images, we propose a novel multi-modal tumor segmentation method designed for H&N tumors. Approach. The proposed multi-modal tumor segmentation network (LSAM) consists of two key learning modules, L2-norm self-attention and latent space feature interaction, which exploit the high sensitivity of PET images and the anatomical information of CT images. These two modules are built into a 3D segmentation network with a U-shaped structure. The method integrates complementary features from the different modalities at multiple scales, thereby improving feature interaction between modalities. Main results. We evaluated the proposed method on the public HECKTOR PET-CT dataset, and the experimental results demonstrate that it outperforms existing H&N tumor segmentation methods on key evaluation metrics, including DSC (0.8457), Jaccard (0.7756), RVD (0.0938), and HD95 (11.75). Significance. The L2-norm-based self-attention mechanism is scalable and reduces the impact of outliers on model performance, and the latent-space multi-scale feature interaction exploits the learning process in the encoder stage to achieve complementary effects among the modalities.
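To make the L2-norm self-attention idea concrete, the following PyTorch sketch normalizes queries and keys to unit L2 norm before computing dot-product similarities, which bounds the attention logits and limits the influence of outlier activations. The class name, head count, and temperature scale are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class L2NormSelfAttention(nn.Module):
    def __init__(self, dim, num_heads=4, temperature=10.0):
        super().__init__()
        assert dim % num_heads == 0               # assumes dim is divisible by num_heads
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.temperature = temperature            # fixed scale for the cosine similarities
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                         # x: (batch, tokens, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)      # each: (batch, heads, tokens, head_dim)
        q = F.normalize(q, p=2, dim=-1)           # L2-normalize queries
        k = F.normalize(k, p=2, dim=-1)           # L2-normalize keys
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)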
Li L, Tan J, Yu L, Li C, Nan H, Zheng S, et al.
-
SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
Radiotherapy (RT) combined with cetuximab is the standard treatment for patients with inoperable head and neck cancers. Segmentation of head and neck (H&N) tumors is a prerequisite for radiotherapy planning but a time-consuming process. In recent years, deep convolutional neural networks (DCNN) have become the de facto standard for automated image segmentation. However, due to the expensive computational cost associated with enlarging the field of view in DCNNs, their ability to model long-range dependency is still limited, and this can result in sub-optimal segmentation performance for objects with background context spanning over long distances. On the other hand, Transformer models have demonstrated excellent capabilities in capturing such long-range information in several semantic segmentation tasks performed on medical images.
Despite the impressive representation capacity of vision transformer models, current vision transformer-based segmentation models still suffer from inconsistent and incorrect dense predictions when fed with multi-modal input data. We suspect that the power of their self-attention mechanism is limited in extracting the complementary information that exists in multi-modal data. To this end, we propose a novel segmentation model, dubbed Cross-modal Swin Transformer (SwinCross), with a cross-modal attention (CMA) module to incorporate cross-modal feature extraction at multiple resolutions.
We propose a novel architecture for cross-modal 3D semantic segmentation with two main components: (1) a cross-modal 3D Swin Transformer for integrating information from multiple modalities (PET and CT), and (2) a cross-modal shifted window attention block for learning complementary information from the modalities. To evaluate the efficacy of our approach, we conducted experiments and ablation studies on the HECKTOR 2021 challenge dataset. We compared our method against nnU-Net (the backbone of the top-5 methods in HECKTOR 2021) and other state-of-the-art transformer-based models, including UNETR and Swin UNETR. The experiments employed a five-fold cross-validation setup using PET and CT images.
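As a rough illustration of the cross-modal attention idea described above (leaving out the shifted-window machinery), the PyTorch sketch below lets PET tokens query CT tokens and vice versa, then fuses the two directions. The module names, the residual additions, and the fusion by concatenation are assumptions made for exposition, not the SwinCross implementation.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.pet_to_ct = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ct_to_pet = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, pet_tokens, ct_tokens):     # each: (batch, tokens, dim)
        # PET features gather anatomical context from CT, and vice versa.
        pet_ctx, _ = self.pet_to_ct(pet_tokens, ct_tokens, ct_tokens)
        ct_ctx, _ = self.ct_to_pet(ct_tokens, pet_tokens, pet_tokens)
        fused = torch.cat([pet_tokens + pet_ctx, ct_tokens + ct_ctx], dim=-1)
        return self.fuse(fused)                   # (batch, tokens, dim)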
Empirical evidence demonstrates that our proposed method consistently outperforms the comparative techniques. This success can be attributed to the CMA module's capacity to enhance inter-modality feature representations between PET and CT during head-and-neck tumor segmentation. Notably, SwinCross consistently surpasses Swin UNETR across all five folds, showcasing its proficiency in learning multi-modal feature representations at varying resolutions through the cross-modal attention modules.
We introduced a cross-modal Swin Transformer for automating the delineation of head and neck tumors in PET and CT images. Our model incorporates a cross-modality attention module, enabling the exchange of features between modalities at multiple resolutions. The experimental results establish the superiority of our method in capturing improved inter-modality correlations between PET and CT for head-and-neck tumor segmentation. Furthermore, the proposed methodology is applicable to other semantic segmentation tasks involving different imaging modalities, such as SPECT/CT or PET/MRI. Code: https://github.com/yli192/SwinCross_CrossModalSwinTransformer_for_Medical_Image_Segmentation.
Li GY, Chen J, Jang SI, Gong K, Li Q, et al.
-
Multi-modal segmentation with missing image data for automatic delineation of gross tumor volumes in head and neck cancers.
Head and neck (HN) gross tumor volume (GTV) auto-segmentation is challenging due to the morphological complexity and low image contrast of targets. Multi-modality images, including computed tomography (CT) and positron emission tomography (PET), are used in routine clinical practice to assist radiation oncologists in accurate GTV delineation. However, the availability of PET imaging may not always be guaranteed.
To develop a deep learning segmentation framework for automated GTV delineation of HN cancers using a combination of PET/CT images, while addressing the challenge of missing PET data.
Two datasets were included for this study: Dataset I: 524 (training) and 359 (testing) oropharyngeal cancer patients from different institutions with their PET/CT pairs provided by the HECKTOR Challenge; Dataset II: 90 HN patients (testing) from a local institution with their planning CT and PET/CT pairs. To handle potentially missing PET images, a model training strategy named the "Blank Channel" method was implemented. To simulate the absence of a PET image, a blank array with the same dimensions as the CT image was generated to meet the dual-channel input requirement of the deep learning model. During the model training process, the model was randomly presented with either a real PET/CT pair or a blank/CT pair. This allowed the model to learn the relationship between the CT image and the corresponding GTV delineation based on the available modalities. As a result, our model had the ability to handle flexible inputs during prediction, making it suitable for cases where PET images are missing. To evaluate the performance of our proposed model, we trained it using training patients from Dataset I and tested it with Dataset II. We compared our model (Model 1) with two other models trained for specific modality segmentations: Model 2, trained with only CT images, and Model 3, trained with real PET/CT pairs. The performance of the models was evaluated using quantitative metrics, including the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95). In addition, we evaluated Model 1 and Model 3 using the 359 test cases in Dataset I.
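A minimal sketch of the "Blank Channel" input construction described above, assuming a channel-first NumPy layout of shape (2, D, H, W) and an illustrative 50% PET-drop probability during training; the function name and defaults are hypothetical.

import numpy as np

def make_dual_channel_input(ct_volume, pet_volume=None, drop_pet_prob=0.5, rng=None):
    # Stack CT and PET into a two-channel array; substitute an all-zero PET
    # channel when PET is unavailable or randomly dropped during training.
    if rng is None:
        rng = np.random.default_rng()
    if pet_volume is None or rng.random() < drop_pet_prob:
        pet_channel = np.zeros_like(ct_volume)    # blank array, same dimensions as the CT
    else:
        pet_channel = pet_volume
    return np.stack([ct_volume, pet_channel], axis=0)  # shape: (2, D, H, W)

At inference time, the same function could be called with drop_pet_prob=0.0, passing the real PET volume when it exists and None when it is missing.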
Our proposed model (Model 1) achieved promising results for GTV auto-segmentation using PET/CT images, with the flexibility to handle missing PET images. Specifically, when assessed with only CT images in Dataset II, Model 1 achieved a DSC of 0.56 ± 0.16, MSD of 3.4 ± 2.1 mm, and HD95 of 13.9 ± 7.6 mm. When the PET images were included, the performance improved to a DSC of 0.62 ± 0.14, MSD of 2.8 ± 1.7 mm, and HD95 of 10.5 ± 6.5 mm. These results are comparable to those achieved by Model 2 and Model 3, illustrating Model 1's effectiveness in utilizing flexible input modalities. Further analysis using the test dataset from Dataset I showed that Model 1 achieved an average DSC of 0.77, surpassing the overall average DSC of 0.72 among all participants in the HECKTOR Challenge.
We successfully refined a multi-modal segmentation tool for accurate GTV delineation for HN cancer. Our method addressed the issue of missing PET images by allowing flexible data input, thereby providing a practical solution for clinical settings where access to PET imaging may be limited.
Zhao Y, Wang X, Phan J, Chen X, Lee A, Yu C, Huang K, Court LE, Pan T, Wang H, Wahid KA, Mohamed ASR, Naser M, Fuller CD, Yang J, et al.
-
DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT.
Objective. Accurate segmentation of head and neck (H&N) tumors is critical in radiotherapy. However, existing methods lack effective strategies to integrate local and global information, strong semantic and contextual information, and spatial and channel features, all of which are effective clues for improving the accuracy of tumor segmentation. In this paper, we propose a novel method called the dual modules convolution transformer network (DMCT-Net) for H&N tumor segmentation in fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) images. Approach. The DMCT-Net consists of the convolution transformer block (CTB), the squeeze and excitation (SE) pool module, and the multi-attention fusion (MAF) module. First, the CTB is designed to capture remote dependencies and local multi-scale receptive field information by using standard convolution, dilated convolution, and the transformer operation. Second, to extract feature information from different angles, we construct the SE pool module, which not only extracts strong semantic features and context features simultaneously but also uses SE normalization to adaptively fuse features and adjust the feature distribution. Third, the MAF module is proposed to combine global context information, channel information, and voxel-wise local spatial information. In addition, we adopt up-sampling auxiliary paths to supplement the multi-scale information. Main results. The experimental results show that the method achieves better or competitive segmentation performance compared with several advanced methods on three datasets. The best segmentation metric scores are as follows: DSC of 0.781, HD95 of 3.044, precision of 0.798, and sensitivity of 0.857. Comparative experiments based on bimodal and single-modal inputs indicate that bimodal input provides more sufficient and effective information for improving tumor segmentation performance, and ablation experiments verify the effectiveness and significance of each module. Significance. We propose a new network for 3D H&N tumor segmentation in FDG-PET/CT images, which achieves high accuracy.
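The sketch below shows one way a convolution-transformer block in the spirit of the CTB could combine a standard 3D convolution, a dilated 3D convolution, and self-attention over flattened voxels; the layer choices are illustrative, not the published DMCT-Net architecture, and in practice such full-volume attention would typically be applied only at coarse resolutions.

import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.dilated = nn.Conv3d(channels, channels, kernel_size=3, padding=2, dilation=2)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                          # x: (batch, channels, D, H, W)
        local = self.conv(x) + self.dilated(x)     # local multi-scale receptive fields
        b, c, d, h, w = local.shape
        tokens = self.norm(local.flatten(2).transpose(1, 2))   # (batch, D*H*W, channels)
        global_ctx, _ = self.attn(tokens, tokens, tokens)       # remote dependencies
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, d, h, w)
        return local + global_ctx                  # combine local and global cues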
Wang J, Peng Y, Guo Y
-
MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images.
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused, and tumor segmentation based on multi-modal PET-CT images is an important part of clinical diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens modality-specific information. In addition, the information interaction between different modal images is usually performed by simple addition or concatenation operations, which introduce irrelevant information during multi-modal semantic feature fusion, so effective features cannot be highlighted. To overcome this problem, this paper proposes a novel Multi-modal Fusion and Calibration Networks (MFCNet) for tumor segmentation based on three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed. The proposed MFDB can fuse complementary features of multi-modal images while retaining the unique features of each modality. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed. The MMCB can guide the network to focus on the tumor region by combining decoding features from different branches using an attention mechanism and by extracting multi-scale pathological features using convolution kernels of different sizes. The proposed MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreas cancer). The experimental results indicate that on the public and in-house datasets, the average Dice values of the proposed multi-modal segmentation network are 74.14% and 76.20%, while the average Hausdorff distances are 6.41 and 6.84, respectively. In addition, the experimental results show that the proposed MFCNet outperforms state-of-the-art methods on the two datasets.
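As a loose sketch of the fusion-with-residual idea behind the MFDB, the PyTorch block below keeps a residual branch per modality, mixes the concatenated PET/CT features through a shared convolution, and then down-samples; the layer layout and names are assumptions, not the published MFCNet code.

import torch
import torch.nn as nn

class FusionDownsampleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.pet_branch = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.ct_branch = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.shared = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)
        self.down = nn.Conv3d(channels, 2 * channels, kernel_size=2, stride=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, pet_feat, ct_feat):          # each: (batch, channels, D, H, W)
        pet = self.act(self.pet_branch(pet_feat)) + pet_feat   # modality-specific path with residual
        ct = self.act(self.ct_branch(ct_feat)) + ct_feat
        fused = self.act(self.shared(torch.cat([pet, ct], dim=1)))  # complementary fusion
        return self.down(fused)                    # halve resolution, double channels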
Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z, et al.