Application of a multi-task learning-based light-weight convolution neural network for the automatic segmentation of organs at risk in thorax
Zhang Jie1, Yang Yiwei1, Shao Kainan1, Bai Xue1, Fang Min2, Shan Guoping1, Chen Ming2
1Department of Radiation Physics, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou 310022, China; 2Department of Radiation Oncology, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou 310022, China
Abstract Objective To evaluate the application of a multi-task learning-based light-weight convolution neural network (MTLW-CNN) for the automatic segmentation of organs at risk (OARs) in the thorax. Methods MTLW-CNN consisted of several shared feature layers and 3 branches, each segmenting one of 3 OARs. A total of 497 cases with thoracic tumors were collected; the computed tomography (CT) images encompassing the lung, heart and spinal cord were included in this study. The corresponding contours delineated by experienced radiation oncologists served as the ground truth. All cases were randomly divided into a training and validation set (n=300) and a test set (n=197). By applying MTLW-CNN to the test set, the Dice similarity coefficients (DSCs) of the 3 OARs, the training and testing time, and the space complexity (S) were calculated and compared with those of U-Net and DeepLabv3+. To evaluate the effect of multi-task learning on the generalization performance of the model, 3 single-task light-weight CNNs (STLW-CNNs) were built, whose structures were identical to the corresponding branches of MTLW-CNN. After training the STLW-CNNs with the same data and algorithm, their DSCs on the test set were statistically compared with those of MTLW-CNN. Results For MTLW-CNN, the averages (μ) of the lung, heart and spinal cord DSCs were 0.954, 0.921 and 0.904, respectively. The differences in μ between MTLW-CNN and the other two models (U-Net and DeepLabv3+) were less than 0.020. The training and testing time of MTLW-CNN were 1/3 to 1/30 of those of U-Net and DeepLabv3+. S of MTLW-CNN was 1/42 of that of U-Net and 1/1220 of that of DeepLabv3+. The differences in μ and standard deviation (σ) for the lung and heart between MTLW-CNN and the STLW-CNNs were approximately 0.005 and 0.002, respectively. The difference in μ for the spinal cord was 0.001, but σ of the STLW-CNN was 0.014 higher than that of MTLW-CNN. Conclusions MTLW-CNN achieves high-precision automatic segmentation of thoracic OARs with less time and memory, and it can improve the application efficiency and generalization performance of the model.
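No implementation is published on this page, so the following is only a minimal sketch of the idea described in the abstract: shared feature layers feeding three OAR-specific segmentation branches (lung, heart, spinal cord), plus the Dice similarity coefficient (DSC) used for evaluation. The framework choice (PyTorch), the class and function names (MTLWCNNSketch, conv_block, dice_coefficient), the channel counts and the layer depths are all assumptions for illustration, not the authors' actual network.

```python
# Illustrative sketch, not the published MTLW-CNN: layer sizes, framework and
# names are assumed. It only mirrors the described structure of shared feature
# layers followed by three per-OAR branches, and shows how a DSC is computed.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU (assumed building block)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MTLWCNNSketch(nn.Module):
    """Shared feature layers followed by three task-specific branches,
    one per OAR (lung, heart, spinal cord), each outputting a logit map."""

    def __init__(self, in_channels=1, base_ch=16):
        super().__init__()
        # Shared, lightweight feature extractor (small channel counts keep
        # the parameter count and memory footprint low).
        self.shared = nn.Sequential(
            conv_block(in_channels, base_ch),
            nn.MaxPool2d(2),
            conv_block(base_ch, base_ch * 2),
        )

        # One branch per OAR; each upsamples back to the input resolution.
        def branch():
            return nn.Sequential(
                conv_block(base_ch * 2, base_ch),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(base_ch, 1, kernel_size=1),
            )

        self.lung_branch = branch()
        self.heart_branch = branch()
        self.cord_branch = branch()

    def forward(self, x):
        feats = self.shared(x)
        return {
            "lung": self.lung_branch(feats),
            "heart": self.heart_branch(feats),
            "spinal_cord": self.cord_branch(feats),
        }


def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """DSC = 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)


if __name__ == "__main__":
    model = MTLWCNNSketch()
    ct_slice = torch.randn(1, 1, 256, 256)        # one toy CT slice
    outputs = model(ct_slice)                      # three per-OAR logit maps
    lung_pred = torch.sigmoid(outputs["lung"]) > 0.5
    print(dice_coefficient(lung_pred, lung_pred))  # DSC of a mask with itself = 1
```

In a multi-task setup of this kind, the three branch losses are typically summed (possibly with per-task weights) so the shared layers receive gradients from all three OARs, which is one plausible explanation for the smaller σ reported for MTLW-CNN compared with the single-task variants.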
Fund: Zhejiang Key Research & Development Project (2019C03003); Natural Science Foundation of Zhejiang Province (LQ20H180016); Zhejiang Medical and Health Project (2019RC023, 2019ZH018)
Zhang Jie, Yang Yiwei, Shao Kainan, et al. Application of a multi-task learning-based light-weight convolution neural network for the automatic segmentation of organs at risk in thorax[J]. Chinese Journal of Radiation Oncology, 2021, 30(9): 917-923.