Different receptive fields-based automatic segmentation network for gross target volume and organs at risk of patients with nasopharyngeal carcinoma
Liu Yuliang1, Li Yongbao2, Qi Mengke1, Wu Aiqian1, Lu Xingyu1, Song Ting1, Zhou Linghong1
1School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; 2Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
Abstract
Objective To establish an automatic segmentation network based on different receptive fields for the gross target volume (GTV) and organs at risk (OAR) of patients with nasopharyngeal carcinoma.
Methods Radiotherapy data of 100 nasopharyngeal carcinoma cases, including CT images and the GTV and OAR delineated by physicians, were collected. Ninety plans were randomly selected as the training set, and the remaining 10 plans served as the validation set. The images were first subjected to three data augmentation methods (center cropping, vertical flipping, and rotation from -30° to 30°) and then fed into the MA_net network proposed in this study for training. Model performance was assessed by the number of network parameters (NP), the floating-point operation number (FPN), the running memory (RM) and the Dice index (DI), and compared with the DeeplabV3+, PSP_net, UNet++ and U_Net networks.
Results With a 240×240 input image, the NP of MA_net was 23.20%, 20.10%, 25.55% and 27.11% of that of the four networks, respectively; the corresponding ratios were 50.02%, 19.86%, 6.37% and 13.44% for FPN, and 40.63%, 23.60%, 11.58% and 14.99% for RM. The DI of MA_net for the GTV was 1.16%, 2.28%, 1.27% and 3.59% higher than that of the four networks, and its average DI over the GTV and OAR was 0.16%, 1.37%, 0.30% and 0.97% higher.
Conclusion Compared with the four networks above, the proposed MA_net achieves a slightly higher Dice index with fewer parameters, a lower FPN and a smaller RM.
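The Dice index (DI) used above to compare the segmentation networks can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `eps` smoothing term and the binary-mask convention are assumptions added for the example.

```python
import numpy as np

def dice_index(pred, truth, eps=1e-7):
    """Dice similarity between two binary segmentation masks.

    pred, truth: arrays of 0/1 values with the same shape
    (e.g. a 240x240 segmentation slice). eps guards against
    division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example on a 4x4 mask: |A|=8, |B|=4, |A∩B|=4
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
print(dice_index(a, b))  # 2*4 / (8 + 4) ≈ 0.6667
```

A DI of 1 means the predicted contour coincides exactly with the physician's delineation; 0 means no overlap at all.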
Liu Yuliang, Li Yongbao, Qi Mengke, et al. Different receptive fields-based automatic segmentation network for gross target volume and organs at risk of patients with nasopharyngeal carcinoma[J]. Chinese Journal of Radiation Oncology, 2021, 30(5): 468-474.
[1] Pan JJ, Zhang Y, Lin SJ, et al. Long-term results of nasopharyngeal carcinoma treated with radiotherapy: 1706 cases report[J]. Chin J Radiat Oncol, 2008, 17(4):247-251. DOI:10.3321/j.issn:1004-4221.2008.04.001.
[2] Peng YL, Sun WZ, Cheng WQ, et al. Interobserver variations in the delineation of planning target volume and organs at risk with different contouring methods in intensity-modulated radiation therapy for nasopharyngeal carcinoma[J]. Chin J Radiat Oncol, 2019, 28(10):762-766. DOI:10.3760/cma.j.issn.1004-4221.2019.10.010.
[3] Li X, Wang Y, Luo Y, et al. Segmentation of nasopharyngeal neoplasms based on random forest feature selection algorithm[J]. J Comput Appl, 2019, 39(5):1485-1489. DOI:10.11772/j.issn.1001-9081.2018102205.
[4] Chanapai W, Ritthipravat P. Adaptive thresholding based on SOM technique for semi-automatic NPC image segmentation[C]//2009 International Conference on Machine Learning and Applications. IEEE, 2009:504-508. DOI:10.1109/ICMLA.2009.135.
[5] Tatanun C, Ritthipravat P, Bhongmakapat T, et al. Automatic segmentation of nasopharyngeal carcinoma from CT images[C]//International Conference on Signal Processing Systems. IEEE, 2010. DOI:10.1109/ICSPS.2010.5555663.
[6] Zhang Y, Matuszewski BJ, Shark LK, et al. Medical image segmentation using new hybrid level-set method[C]//2008 Fifth International Conference BioMedical Visualization: Information Visualization in Medical and Biomedical Informatics. IEEE, 2008:71-76. DOI:10.1109/MediVis.2008.12.
[7] Huang KW, Zhao ZY, Gong Q, et al. Nasopharyngeal carcinoma segmentation via HMRF-EM with maximum entropy[J]. Annu Int Conf IEEE Eng Med Biol Soc, 2015, 2015:2968-2972. DOI:10.1109/EMBC.2015.7319015.
[8] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015:234-241. DOI:10.1007/978-3-319-24574-4_28.
[9] Pan PK, Wang Y, Luo Y, et al. Automatic segmentation of nasopharyngeal neoplasm in MR images based on U-net model[J]. J Comput Appl, 2019, 39(4):1183-1188. DOI:10.11772/j.issn.1001-9081.2018091908.
[10] Zhou Z, Siddiquee MR, Tajbakhsh N, et al. UNet++: redesigning skip connections to exploit multiscale features in image segmentation[J]. IEEE Trans Med Imaging, 2020, 39(6):1856-1867. DOI:10.1109/TMI.2019.2959609.
[11] Abulnaga SM, Rubin J. Ischemic stroke lesion segmentation in CT perfusion scans using pyramid pooling and focal loss[C]//International MICCAI Brainlesion Workshop. Springer, 2018:352-363. DOI:10.1007/978-3-030-11723-8_36.
[12] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:2881-2890. DOI:10.1109/CVPR.2017.660.
[13] Choudhury AR, Vanguri R, Jambawalikar SR, et al. Segmentation of brain tumors using DeepLabv3+[C]//International MICCAI Brainlesion Workshop. Springer, 2018:154-167. DOI:10.1007/978-3-030-11726-9_14.
[14] Chen LC, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[DB/OL]. (2017-06-17). https://arxiv.org/abs/1706.05587.
[15] Liu Y, Yu J, Han Y. Understanding the effective receptive field in semantic image segmentation[J]. Multimed Tools Appl, 2018, 77:22159-22171.
[16] Gu Y, Zhong Z, Wu S, et al. Enlarging effective receptive field of convolutional neural networks for better semantic segmentation[C]//2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2017:388-393. DOI:10.1109/ACPR.2017.7.
[17] Araujo A, Norris W, Sim J. Computing receptive fields of convolutional neural networks[DB/OL]. (2019-11-04). https://distill.pub/2019/computing-receptive-fields/. DOI:10.23915/distill.00021.
[18] Çiçek Ö, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016:424-432. DOI:10.1007/978-3-319-46723-8_49.
[19] Abulnaga SM, Rubin J. Ischemic stroke lesion segmentation in CT perfusion scans using pyramid pooling and focal loss[C]//International MICCAI Brainlesion Workshop. Springer, 2018:352-363. DOI:10.1007/978-3-030-11723-8_36.
[20] Zheng D, Li XQ, Xu XZ. Vehicle and pedestrian detection model based on lightweight SSD[J]. J Nanjing Normal Univ (Nat Sci Ed), 2019, 42(1):73-81.
[21] Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions[DB/OL]. (2015-11-23). https://arxiv.org/abs/1511.07122.
[22] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:770-778. DOI:10.1109/CVPR.2016.90.
[23] Daoud B, Morooka K, Kurazume R, et al. 3D segmentation of nasopharyngeal carcinoma from CT images using cascade deep learning[J]. Comput Med Imaging Graph, 2019, 77:101644. DOI:10.1016/j.compmedimag.2019.101644.
[24] Zhao L, Lu Z, Jiang J, et al. Automatic nasopharyngeal carcinoma segmentation using fully convolutional networks with auxiliary paths on dual-modality PET-CT images[J]. J Digit Imaging, 2019, 32(3):462-470. DOI:10.1007/s10278-018-00173-0.