| CPC G16H 30/40 (2018.01) [G06F 18/241 (2023.01); G06T 7/0012 (2013.01); G06V 10/25 (2022.01); G06V 10/454 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30061 (2013.01)] | 18 Claims |

1. A method, comprising:
receiving, by one or more processors, a first image based at least in part on an input from a user;
determining, by the one or more processors, a target process to be used in connection with analyzing the first image, the target process being determined based at least in part on a selection input by the user;
processing, by the one or more processors, the first image based at least in part on the target process, wherein:
a classification result for the first image, target data within the first image, a target organ image, and a deformation relationship between the first image and a second image are obtained based at least in part on the processing of the first image; and
the target process is configured to call:
a first machine learning model in connection with processing the first image to obtain the classification result and the target data;
a second machine learning model in connection with processing the first image to obtain the target organ image;
a third machine learning model in connection with processing the first image and the second image to obtain the deformation relationship between the first image and the second image; and
the third machine learning model is configured to:
input the first image and the second image separately into at least two rigidity networks;
obtain a first rigid body parameter of the first image and a second rigid body parameter of the second image;
input the first rigid body parameter and the second rigid body parameter into an affine grid network; and
obtain the deformation relationship based at least in part on a processing, by the affine grid network, of the first rigid body parameter and the second rigid body parameter;
obtaining proportion data, comprising:
obtaining a registered image corresponding to the second image based on a processing of the second image, wherein the processing of the second image is based at least in part on the deformation relationship;
obtaining a third image and a fourth image based at least in part on a separate processing of the registered image by the first machine learning model and the second machine learning model;
obtaining a first proportion of a target region in a target organ based at least in part on the first image and the second image; and
obtaining a second proportion of the target region in the target organ, the second proportion being obtained based at least in part on the third image and the fourth image; and
outputting, by the one or more processors, the classification result, the target data, the target organ image, the proportion data, and the deformation relationship.
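The claim recites the registration pipeline abstractly: two rigidity networks each produce a rigid body parameter, an affine grid network turns those parameters into a deformation relationship, the second image is resampled through that deformation to obtain a registered image, and region-in-organ proportions are computed before and after registration. The claim does not specify an implementation; purely as illustration, the NumPy sketch below assumes a rigid body parameter of the form (rotation angle, x-translation, y-translation), composes the two parameter sets into a single 2x3 affine matrix standing in for the affine grid network's output, warps an image through it, and computes a proportion. All function names and the parameterization are assumptions, not part of the claim.

```python
import numpy as np

def rigid_to_affine(theta, tx, ty):
    """Build a 2x3 affine matrix from rigid-body parameters
    (rotation angle theta in radians, translations tx and ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def compose_deformation(p1, p2):
    """Hypothetical stand-in for the affine grid network: combine the
    two rigid body parameter sets into one relative deformation,
    here simply the difference in angle and translation."""
    return rigid_to_affine(p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])

def warp(image, affine):
    """Resample `image` through the affine deformation (nearest
    neighbour), i.e. obtain the registered image."""
    h, w = image.shape
    out = np.zeros_like(image)
    # Invert the forward map so each output pixel pulls from its source.
    inv = np.linalg.inv(np.vstack([affine, [0.0, 0.0, 1.0]]))
    for y in range(h):
        for x in range(w):
            sx, sy, _ = inv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

def region_proportion(region_mask, organ_mask):
    """Proportion of the target region falling within the target organ,
    given boolean masks (e.g. from the first and second models)."""
    organ_area = organ_mask.sum()
    if organ_area == 0:
        return 0.0
    return float((region_mask & organ_mask).sum()) / float(organ_area)
```

In this sketch the "first proportion" would be computed from masks derived from the unregistered pair, and the "second proportion" from masks derived from the registered image, so the two values can be compared on a common coordinate grid.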