US 12,282,017 B1
HP spherical deformation diagnosis model and construction method thereof
Hengjun Gao, Shanghai (CN); Zishao Zhong, Shanghai (CN); Xin Wang, Shanghai (CN); Xiaoyan Zhang, Shanghai (CN); and Jingjing Liu, Shanghai (CN)
Assigned to SHANGHAI OUTDO BIOTECH CO., LTD., Shanghai (CN)
Appl. No. 18/846,648
Filed by SHANGHAI OUTDO BIOTECH CO., LTD., Shanghai (CN)
PCT Filed Aug. 9, 2022, PCT No. PCT/CN2022/111035
§ 371(c)(1), (2) Date Sep. 12, 2024,
PCT Pub. No. WO2023/231176, PCT Pub. Date Dec. 7, 2023.
Claims priority of application No. 202210595882.5 (CN), filed on May 30, 2022.
Int. Cl. G06V 10/776 (2022.01); G01N 33/569 (2006.01); G06T 5/20 (2006.01); G06T 5/50 (2006.01); G06T 5/60 (2024.01); G06T 5/94 (2024.01); G06T 7/00 (2017.01); G06T 7/11 (2017.01); G06T 7/155 (2017.01); G06T 7/90 (2017.01); G06V 10/30 (2022.01); G06V 10/34 (2022.01); G06V 10/36 (2022.01); G06V 10/56 (2022.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 10/94 (2022.01); G06V 20/69 (2022.01); G16H 30/40 (2018.01); G16H 50/20 (2018.01)
CPC G01N 33/56922 (2013.01) [G06T 5/20 (2013.01); G06T 5/50 (2013.01); G06T 5/60 (2024.01); G06T 5/94 (2024.01); G06T 7/0014 (2013.01); G06T 7/11 (2017.01); G06T 7/155 (2017.01); G06T 7/90 (2017.01); G06V 10/30 (2022.01); G06V 10/34 (2022.01); G06V 10/36 (2022.01); G06V 10/56 (2022.01); G06V 10/764 (2022.01); G06V 10/776 (2022.01); G06V 10/82 (2022.01); G06V 10/94 (2022.01); G06V 20/695 (2022.01); G06V 20/698 (2022.01); G16H 30/40 (2018.01); G16H 50/20 (2018.01); G01N 2469/10 (2013.01); G06T 2207/10024 (2013.01); G06T 2207/10056 (2013.01); G06T 2207/20028 (2013.01); G06T 2207/20036 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30024 (2013.01)] 13 Claims
OG exemplary drawing
 
1. A construction method for an HP spherical deformation diagnosis model, comprising steps of:
S1, taking a plurality of immunochemical staining images having a Helicobacter pylori (HP) spherical deformation as a training set and a validation set respectively;
S2, manually labeling a Helicobacter pylori morphology in the training set;
S3, subjecting images in the training set to contrast enhancement, image filtering, and HP staining extraction operations; wherein the contrast enhancement comprises firstly transforming an original image from a red, green, blue (RGB) color model to a hue-intensity-saturation (HIS) color model, in which the hue, the saturation, and the intensity are separated, and then applying a piecewise linear transformation method to an obtained brightness component; wherein
the piecewise linear transformation method comprises substituting Eq. 1-1 into Eq. 1-2, in a case where X, min, and max are known, calculating Y for each X to obtain the Y values corresponding to X, and plotting X and Y as horizontal and vertical coordinates to obtain two pairs of turning points (X1, Y1) and (X2, Y2); enhancing a grayscale region of interest in the original image by the piecewise linear transformation method, i.e., stretching it to the (0, 1) interval, thereby improving the original image;
k=1/(max−min)  Eq. 1-1
Y=k(X−min)  Eq. 1-2
wherein k is a stretching coefficient; max and min are 4-6% of the pixel value of the point with the highest pixel value on the original image and 4-6% of the pixel value of the point with the lowest pixel value on the original image, respectively; X is the color hue saturation of the original image; and Y is the color hue saturation of the adjusted image (a minimal code sketch of this stretch follows);
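The stretch of Eqs. 1-1 and 1-2 can be illustrated as follows. This is a minimal sketch, assuming an 8-bit BGR input as loaded by OpenCV, using the HSV brightness channel as a stand-in for the claimed HIS brightness component, and interpreting the 4-6% figure as percentile clipping of the darkest and brightest pixels; the function name stretch_intensity and the default percentiles are illustrative, not taken from the patent.

```python
import numpy as np
import cv2


def stretch_intensity(img_bgr, low_pct=5.0, high_pct=95.0):
    """Piecewise linear contrast stretch of the brightness component (Eqs. 1-1, 1-2).

    img_bgr is assumed to be an 8-bit BGR image as loaded by cv2.imread.
    low_pct / high_pct approximate the claim's 4-6% clipping of the darkest
    and brightest pixel values; the exact percentiles are an assumption.
    """
    # HSV is used as a stand-in for the claimed hue-intensity-saturation model;
    # only the brightness (V) channel is stretched, hue and saturation are kept.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0

    v_min = np.percentile(v, low_pct)    # "min" in Eq. 1-1
    v_max = np.percentile(v, high_pct)   # "max" in Eq. 1-1
    k = 1.0 / max(v_max - v_min, 1e-6)   # Eq. 1-1: stretching coefficient

    y = np.clip(k * (v - v_min), 0.0, 1.0)  # Eq. 1-2, clipped to the (0, 1) interval

    hsv[:, :, 2] = y * 255.0
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```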
the image filtering adopts a bilateral filtering technology to filter the improved image produced by the piecewise linear transformation method;
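A minimal sketch of this filtering step using OpenCV's bilateral filter; the diameter and the two sigma parameters are illustrative defaults, not values taken from the claim.

```python
import cv2


def bilateral_denoise(img_bgr, d=9, sigma_color=75, sigma_space=75):
    """Edge-preserving bilateral filtering of the contrast-stretched image.

    The diameter d and the two sigma parameters are illustrative defaults,
    not values specified by the patent.
    """
    return cv2.bilateralFilter(img_bgr, d, sigma_color, sigma_space)
```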
the HP staining extraction adopts a deconvolution algorithm to enhance diaminobenzidine (DAB) staining information of the improved image after the image filtering, suppress other staining information, and transform it into grey levels according to the amount of DAB staining, so as to obtain an image for an artificial intelligence (AI) identification;
S4, training a U-Net neural network with the image for the AI identification;
S5, performing a k-fold cross-validation and an optimization to obtain the HP spherical deformation diagnosis model;
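Step S5 can be sketched as a generic k-fold loop; train_fn below is a hypothetical callable standing in for training and scoring the network on one fold, since the claim does not specify k, the optimizer, or the scoring metric.

```python
import numpy as np
from sklearn.model_selection import KFold


def cross_validate(samples, train_fn, k=5):
    """k-fold cross-validation over the preprocessed training samples (step S5).

    train_fn(train_set, val_set) is a hypothetical callable that trains the
    segmentation network on one fold and returns a validation score; k, the
    shuffling, and the scoring metric are assumptions, not claim values.
    """
    samples = np.asarray(samples)
    folds = KFold(n_splits=k, shuffle=True, random_state=0)
    scores = [train_fn(samples[tr], samples[va]) for tr, va in folds.split(samples)]
    return float(np.mean(scores)), scores
```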
S6, performing an HP spherical deformation identification on an image in the validation set using the HP spherical deformation diagnosis model, and performing a consistency comparison with a manual interpretation result; wherein when the consistency exceeds 70%, the HP spherical deformation diagnosis model is successfully constructed;
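The 70% acceptance criterion of step S6 can be checked with a simple agreement ratio. This sketch assumes an image-level comparison of binary calls (HP spherical deformation present or absent), since the claim does not state whether consistency is measured per image or per region.

```python
import numpy as np


def consistency(model_calls, manual_calls):
    """Fraction of validation images on which the model's HP spherical
    deformation call agrees with the manual interpretation (step S6)."""
    model_calls = np.asarray(model_calls, dtype=bool)
    manual_calls = np.asarray(manual_calls, dtype=bool)
    return float((model_calls == manual_calls).mean())


# Acceptance criterion from the claim: the model is considered successfully
# constructed when the agreement exceeds 70%, e.g.
#   assert consistency(model_calls, manual_calls) > 0.70
```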
the deconvolution algorithm adopted in the step S3 is a color deconvolution technique, involving the following equations: Eq. 2-2, Eq. 2-3, Eq. 2-4, and Eq. 2-5;
ODc=−log10(Ic/Io,c)=A×Cc  Eq. 2-2
φc′=exp(−θc)  Eq. 2-3
φc′=Dθc  Eq. 2-4
φc=10−ODc  Eq. 2-5
wherein A represents an action amount of a stain; Io,c represents an intensity of an incident light; Ic represents an intensity of a light after passing through a sample, and subscript c represents a detected channel; Cc represents a light absorption factor; ODc represents an optical density; φc represents a red, green, and blue optical density function of each pixel; φc′ represents a red, green, and blue optical density function of each pixel newly defined by staining with stained tissue sections; S represents a matrix composed of absorption factors respectively corresponding to three channels of RGB of each stain; and D is an inverse matrix of S; and θc=−log (φc);
in a case where S, A, and Cc are known, ODc is calculated by Eq. 2-2; φc is calculated by Eq. 2-5; φc′ is calculated by Eq. 2-3; θc is calculated by θc=−log(φc); D is calculated by substituting φc′ and θc into Eq. 2-4;
D is a 3×3 matrix composed of the staining amounts of hematoxylin, eosin, and DAB in an RGB space after the deconvolution algorithm; the DAB staining amount in the D matrix is taken and transformed into a grey level to obtain the image for the AI identification; and
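A minimal sketch of the DAB extraction described by Eqs. 2-2 to 2-5, assuming scikit-image's built-in Ruifrok-Johnston hematoxylin/eosin/DAB unmixing as a stand-in for the claim's S and D matrices; the patent's own stain vectors and grey-level mapping may differ.

```python
import numpy as np
from skimage.color import rgb2hed
from skimage.exposure import rescale_intensity


def dab_grey_level(img_rgb):
    """Extract the DAB channel by colour deconvolution and map it to grey levels.

    rgb2hed applies the standard Ruifrok-Johnston hematoxylin/eosin/DAB
    unmixing, used here in place of the claim's S and D matrices.
    """
    hed = rgb2hed(img_rgb)   # per-pixel staining amounts: H, E, DAB
    dab = hed[:, :, 2]       # DAB staining amount
    # Map the DAB amount to an 8-bit grey level; higher value = heavier staining.
    return rescale_intensity(dab, out_range=(0, 255)).astype(np.uint8)
```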
the step S4 specifically comprises: training the U-Net neural network using a Faster region-based convolutional neural network (R-CNN) algorithm on the image for the AI identification; and adopting VGG16 as a base network model for the Faster R-CNN algorithm.
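Step S4 names a Faster R-CNN algorithm with VGG16 as the base network model. A minimal sketch of such a detector assembled from torchvision components is shown below; the anchor sizes, class count, and the way the detector interacts with the U-Net are not specified in the claim, so those choices are illustrative.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator


def build_detector(num_classes=2):
    """Faster R-CNN with a VGG16 base network, as named in step S4.

    num_classes=2 assumes background + HP spherical deformation; the anchor
    sizes and aspect ratios are illustrative, not taken from the patent.
    """
    # VGG16 convolutional layers serve as the detection backbone (512 channels).
    backbone = torchvision.models.vgg16(weights=None).features
    backbone.out_channels = 512

    anchor_generator = AnchorGenerator(
        sizes=((32, 64, 128, 256),),
        aspect_ratios=((0.5, 1.0, 2.0),),
    )
    return FasterRCNN(
        backbone,
        num_classes=num_classes,
        rpn_anchor_generator=anchor_generator,
    )
```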