Research on automatic delineation of nasopharyngeal carcinoma target area based on generative adversarial network
Wang Fei1, Ren Caijun2, Zhou Jieping3, Tao Zhenchao3, Chen Huanhuan2, Qian Liting3
1The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230000, China; 2School of Computer Science and Technology, University of Science and Technology of China, Hefei 230000, China; 3Department of Radiotherapy, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230000, China
Abstract: Objective To propose a deep learning network model, 2D-PE-GAN, for automatic delineation of the nasopharyngeal carcinoma target area and to improve the efficiency of target delineation. Methods The model adopted a generative adversarial network architecture with a UNet-like structure as the generator, and a 2D-PE block was added after each convolution layer of the generator to improve delineation accuracy. The experimental data comprised CT images from 130 nasopharyngeal carcinoma cases; the images were preprocessed before model training. Three models, UNet, GAN, and GAN with an attention mechanism, were compared, and the Dice similarity coefficient, Hausdorff distance, accuracy, Matthews correlation coefficient, and Jaccard distance were employed to evaluate network performance. Results Compared with UNet, GAN, and GAN with the attention mechanism, the average Dice similarity coefficient of 2D-PE-GAN for clinical target volume (CTV) segmentation increased by 26%, 4%, and 2%, respectively, and that for gross tumor volume (GTV) segmentation increased by 21%, 4%, and 2%, respectively. Compared with the GAN with the attention mechanism, the parameter count and runtime of 2D-PE-GAN were reduced by 0.16% and 18%, respectively. Conclusions Compared with the above three networks, 2D-PE-GAN improves the segmentation accuracy of nasopharyngeal carcinoma target delineation; at the same time, compared with the GAN with the attention mechanism, which achieves similar accuracy, 2D-PE-GAN reduces the consumption of computing resources.
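The 2D-PE block is a two-dimensional adaptation of the "Project & Excite" module proposed by Rickmann et al. for volumetric scans. As a minimal sketch only, not the authors' released implementation, the following PyTorch code illustrates one plausible 2D variant: the feature map is projected (average-pooled) along each spatial axis, the two projections are broadcast and summed, and a small convolutional bottleneck produces an excitation map that re-weights the input. The class name, reduction ratio, and choice of average pooling are assumptions.

import torch
import torch.nn as nn

class ProjectExcite2D(nn.Module):
    # Hypothetical 2D Project & Excite block (sketch, after Rickmann et al., MICCAI 2019).
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        # Bottleneck that turns the combined projections into an excitation map.
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W)
        proj_h = x.mean(dim=3, keepdim=True)  # project along width  -> (N, C, H, 1)
        proj_w = x.mean(dim=2, keepdim=True)  # project along height -> (N, C, 1, W)
        combined = proj_h + proj_w            # broadcasts to (N, C, H, W)
        return x * self.excite(combined)      # re-weight ("excite") the input

# Usage: one block would be inserted after each convolution layer of the
# UNet-like generator, e.g. features = ProjectExcite2D(channels=64)(features).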
Wang Fei, Ren Caijun, Zhou Jieping, et al. Research on automatic delineation of nasopharyngeal carcinoma target area based on generative adversarial network[J]. Chinese Journal of Radiation Oncology, 2022, 31(12): 1127-1132.