US 12,236,503 B2
PET system attenuation correction method based on a flow model
Huafeng Liu, Hangzhou (CN); and Bo Wang, Hangzhou (CN)
Assigned to ZHEJIANG UNIVERSITY, Hangzhou (CN)
Appl. No. 17/788,732
Filed by ZHEJIANG UNIVERSITY, Hangzhou (CN)
PCT Filed Apr. 2, 2022, PCT No. PCT/CN2022/085009
§ 371(c)(1), (2) Date Jun. 23, 2022.
PCT Pub. No. WO2023/134030, PCT Pub. Date Jul. 20, 2023.
Claims priority of application No. 202210046152.X (CN), filed on Jan. 11, 2022.
Prior Publication US 2024/0169608 A1, May 23, 2024
Int. Cl. G06T 11/00 (2006.01); G06T 7/11 (2017.01); G06T 7/30 (2017.01)
CPC G06T 11/005 (2013.01) [G06T 7/11 (2017.01); G06T 7/30 (2017.01); G06T 11/006 (2013.01); G06T 2207/10104 (2013.01)] 8 Claims
OG exemplary drawing
 
1. A positron emission tomography (PET) system attenuation correction method based on a flow model, comprising the steps:
(1) collecting computed tomography (CT) data of a scanning object, then injecting a tracer into the scanning object, and using a PET system to scan the object and collect PET sinogram data;
(2) converting the CT data into a CT attenuation image at the 511 keV PET photon energy;
(3) reconstructing a non-attenuation-corrected PET image based on the sinogram data, and then using the PET image to calculate a non-attenuation-corrected standardized uptake value (SUV) image;
(4) based on the sinogram data and the CT attenuation image, reconstructing an attenuation-corrected PET image, and using the PET image to calculate an attenuation-corrected SUV image;
(5) establishing a flow model for attenuation correction, taking the non-attenuation-corrected SUV image as input, and using the attenuation-corrected SUV image as a ground truth label to train the flow model; and
(6) performing attenuation correction on the non-attenuation-corrected PET image reconstructed from the sinogram data by using the well-trained flow model;
wherein, the flow model is composed of a cascade of multiple reversible modules, and each reversible module consists of a reversible 1×1 convolutional layer connected with an enhanced affine coupling layer; and
wherein, the reversible 1×1 convolutional layer is used to realize a permutation operation, that is, to shuffle the channel order of the input image by multiplying the input image by a matrix W, the output of which is provided to the affine coupling layer; the matrix W is expressed as follows:
W=PL(U+diag(v))
wherein, P is a permutation matrix, L is a lower triangular matrix with diagonal elements of 1, U is an upper triangular matrix with diagonal elements of 0, and v is a learnable vector whose entries form the diagonal of diag(v).
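The reversible module described in the claim can be sketched numerically. The block below is a minimal illustration, not the patented implementation: it builds W = PL(U + diag(v)) exactly as defined above, verifies that the channel mixing is exactly invertible and that the log-determinant reduces to a sum over v, and then pairs it with a plain affine coupling layer. The claim's "enhanced" coupling layer is not specified, so the scale/translation branches here (`s`, `t`, matrices `A_s`, `A_t`) and all tensor sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
c, n = 4, 6  # channels and pixels per channel (illustrative sizes, not from the patent)

# --- Reversible 1x1 convolution, parameterized as W = P L (U + diag(v)) ---
P = np.eye(c)[rng.permutation(c)]                       # permutation matrix
L = np.tril(rng.normal(size=(c, c)), k=-1) + np.eye(c)  # lower triangular, unit diagonal
U = np.triu(rng.normal(size=(c, c)), k=1)               # upper triangular, zero diagonal
v = rng.uniform(0.5, 1.5, size=c)                       # learnable vector (kept nonzero)
W = P @ L @ (U + np.diag(v))

x = rng.normal(size=(c, n))        # a c-channel "image", flattened to (channels, pixels)
y = W @ x                          # forward: shuffle/mix the channel order
x_rec = np.linalg.solve(W, y)      # inverse: exact recovery of the input
assert np.allclose(x, x_rec)

# The factorization makes the log-determinant trivial:
# |det P| = 1, det L = 1, det(U + diag(v)) = prod(v), so log|det W| = sum(log|v_i|)
assert np.isclose(np.log(abs(np.linalg.det(W))), np.log(np.abs(v)).sum())

# --- Affine coupling layer (toy s, t branches; hypothetical stand-ins) ---
A_s = 0.1 * rng.normal(size=(c // 2, c // 2))
A_t = rng.normal(size=(c // 2, c // 2))
s = lambda h: np.tanh(A_s @ h)     # scale branch (bounded for numerical stability)
t = lambda h: A_t @ h              # translation branch

def coupling_forward(x):
    x1, x2 = np.split(x, 2, axis=0)          # split channels into two halves
    return np.concatenate([x1, x2 * np.exp(s(x1)) + t(x1)], axis=0)

def coupling_inverse(z):
    z1, z2 = np.split(z, 2, axis=0)
    return np.concatenate([z1, (z2 - t(z1)) * np.exp(-s(z1))], axis=0)

z = coupling_forward(y)
assert np.allclose(coupling_inverse(z), y)   # the full module inverts exactly
```

The cheap log-determinant is the practical reason for the PLU-style parameterization: a flow model trained by maximum likelihood needs log|det W| at every step, and here it costs a sum over c scalars rather than an O(c³) determinant.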