US 12,230,019 B2
Decoupling divide-and-conquer facial nerve segmentation method and device
Jing Wang, Hangzhou (CN); Bo Dong, Hangzhou (CN); Hongjian He, Hangzhou (CN); and Xiujun Cai, Hangzhou (CN)
Assigned to ZHEJIANG UNIVERSITY, Hangzhou (CN)
Appl. No. 17/802,953
Filed by ZHEJIANG UNIVERSITY, Hangzhou (CN)
PCT Filed Feb. 28, 2022, PCT No. PCT/CN2022/076927
§ 371(c)(1), (2) Date Aug. 28, 2022,
PCT Pub. No. WO2023/045231, PCT Pub. Date Mar. 30, 2023.
Claims priority of application No. 202111106992 (CN), filed on Sep. 22, 2021.
Prior Publication US 2024/0203108 A1, Jun. 20, 2024
Int. Cl. G06V 10/80 (2022.01); A61B 34/10 (2016.01); G06V 10/26 (2022.01); G06V 10/74 (2022.01); G06V 10/77 (2022.01); G06V 10/776 (2022.01); G06V 20/70 (2022.01)
CPC G06V 10/806 (2022.01) [A61B 34/10 (2016.02); G06V 10/26 (2022.01); G06V 10/761 (2022.01); G06V 10/7715 (2022.01); G06V 10/776 (2022.01); G06V 20/70 (2022.01); A61B 2034/107 (2016.02); G06V 2201/03 (2022.01)] 10 Claims
OG exemplary drawing
 
1. A decoupling divide-and-conquer facial nerve segmentation method, comprising the following steps:
obtaining and pre-processing a computed tomography (CT) image to obtain a sample set;
constructing a facial nerve segmentation model comprising a feature extraction module, a rough segmentation module, and a fine segmentation module, wherein the feature extraction module is configured to extract features from an inputted CT image sample to obtain one low-level feature map and a plurality of different high-level feature maps; the rough segmentation module comprises a search identification module and a pyramid fusion module, the search identification module is configured to perform a global facial nerve search on the plurality of different high-level feature maps that are juxtaposed, to obtain a plurality of facial nerve feature maps, and the pyramid fusion module is configured to fuse the plurality of facial nerve feature maps to obtain a fused feature map; the fine segmentation module comprises a decoupling module and a spatial attention module, the decoupling module is configured to perform feature-space conversion on the fused feature map to obtain a central body feature map, the central body feature map is combined with the low-level feature map to obtain an edge-detail feature map, the spatial attention module is configured to extract an attention feature from each of the central body feature map and the edge-detail feature map to obtain extraction results, and the extraction results are fused and then processed by the spatial attention module to obtain a facial nerve segmentation image;
constructing a loss function, wherein the loss function comprises: a difference between the fused feature map and an original label of the CT image sample; a difference between the facial nerve segmentation image and the original label of the CT image sample; a difference between a prediction result of processing the central body feature map by the spatial attention module and a body label; and a difference between a prediction result of processing the edge-detail feature map by the spatial attention module and a detail label; and
optimizing a parameter of the facial nerve segmentation model by using the sample set and the loss function, and then performing facial nerve segmentation on the inputted CT image by using the facial nerve segmentation model determined based on the parameter, to obtain a facial nerve segmentation image.
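The three-stage pipeline of claim 1 (feature extraction, rough segmentation with multi-scale search and pyramid fusion, fine segmentation with body/detail decoupling and spatial attention) can be sketched in NumPy as follows. This is a minimal illustrative sketch only: the average pooling, sigmoid "search", and toy attention function stand in for the learned modules of the patented model, and the function names (`avg_pool2`, `upsample_to`, `segment`) are hypothetical, not taken from the patent.

```python
import numpy as np

def avg_pool2(x):
    """Halve spatial resolution by 2x2 average pooling (stand-in for a conv stage)."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def upsample_to(x, shape):
    """Nearest-neighbour upsampling to a target (H, W)."""
    ys = (np.arange(shape[0]) * x.shape[0] // shape[0]).clip(0, x.shape[0] - 1)
    xs = (np.arange(shape[1]) * x.shape[1] // shape[1]).clip(0, x.shape[1] - 1)
    return x[np.ix_(ys, xs)]

def segment(ct_slice):
    """Toy analogue of the claimed rough-then-fine segmentation pipeline."""
    # Feature extraction: one low-level map plus several high-level maps
    # at successively lower resolutions.
    low = ct_slice
    f1 = avg_pool2(low)
    f2 = avg_pool2(f1)
    f3 = avg_pool2(f2)
    highs = [f1, f2, f3]

    # Rough segmentation: "search" each high-level map for nerve responses
    # (here a sigmoid saliency), then pyramid-fuse at full resolution.
    nerve_maps = [1.0 / (1.0 + np.exp(-h)) for h in highs]
    fused = np.mean([upsample_to(m, low.shape) for m in nerve_maps], axis=0)

    # Fine segmentation: decouple a central-body map, combine it with the
    # low-level map for edge detail, apply a toy spatial attention, and fuse.
    body = fused
    detail = fused * (1.0 / (1.0 + np.exp(-low)))   # low-level cue sharpens edges
    attn = lambda f: f * (f / (f.max() + 1e-8))      # toy spatial attention
    return 0.5 * (attn(body) + attn(detail))
```

In the patented model each of these stages is a trained network component; the sketch only mirrors the data flow (multi-scale extraction, fusion, body/detail split, attended re-fusion) so the claim's structure is easier to follow.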
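The loss in the claim sums four supervision terms: fused map vs. original label, final segmentation vs. original label, body prediction vs. body label, and detail prediction vs. detail label. A minimal sketch follows, assuming soft Dice as the "difference" measure (the claim does not name a metric), and assuming `body_label` and `detail_label` are derived from the original annotation (e.g., interior region vs. boundary); both assumptions are illustrative, not stated in the patent.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), a common segmentation 'difference'."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def total_loss(fused, seg, body_pred, detail_pred,
               label, body_label, detail_label):
    """Sum of the four supervision terms enumerated in the claim."""
    return (dice_loss(fused, label)          # rough fused map vs. original label
            + dice_loss(seg, label)          # final segmentation vs. original label
            + dice_loss(body_pred, body_label)      # body branch vs. body label
            + dice_loss(detail_pred, detail_label)) # detail branch vs. detail label
```

Supervising the intermediate fused map and the two decoupled branches alongside the final output is a form of deep supervision: each stage receives its own gradient signal, which typically stabilizes training of multi-stage models.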