US 11,941,817 B2
Systems and methods for platform agnostic whole body image segmentation
Jens Filip Andreas Richter, Lund (SE); Kerstin Elsa Maria Johnsson, Lund (SE); Erik Konrad Gjertsson, Lund (SE); and Aseem Undvall Anand, Queens, NY (US)
Assigned to EXINI Diagnostics AB, Lund (SE)
Filed by EXINI Diagnostics AB, Lund (SE)
Filed on Mar. 29, 2023, as Appl. No. 18/127,991.
Application 18/127,991 is a division of application No. 16/734,599, filed on Jan. 6, 2020, granted, now Pat. No. 11,657,508.
Claims priority of provisional application 62/934,305, filed on Nov. 12, 2019.
Claims priority of provisional application 62/907,158, filed on Sep. 27, 2019.
Claims priority of provisional application 62/870,210, filed on Jul. 3, 2019.
Claims priority of provisional application 62/863,608, filed on Jun. 19, 2019.
Claims priority of provisional application 62/837,941, filed on Apr. 24, 2019.
Claims priority of provisional application 62/789,155, filed on Jan. 7, 2019.
Prior Publication US 2023/0316530 A1, Oct. 5, 2023
Int. Cl. G06T 7/00 (2017.01); A61B 6/00 (2006.01); A61B 6/03 (2006.01); A61K 51/04 (2006.01); G06F 18/214 (2023.01); G06T 7/11 (2017.01); G06V 20/64 (2022.01); G06V 20/69 (2022.01); G06V 30/24 (2022.01); G16H 30/20 (2018.01); G16H 30/40 (2018.01); G16H 50/20 (2018.01); G16H 50/30 (2018.01); G16H 50/50 (2018.01)
CPC G06T 7/11 (2017.01) [A61B 6/032 (2013.01); A61B 6/037 (2013.01); A61B 6/463 (2013.01); A61B 6/466 (2013.01); A61B 6/481 (2013.01); A61B 6/505 (2013.01); A61B 6/507 (2013.01); A61B 6/5205 (2013.01); A61B 6/5241 (2013.01); A61B 6/5247 (2013.01); A61K 51/0455 (2013.01); G06F 18/214 (2023.01); G06V 20/64 (2022.01); G06V 20/695 (2022.01); G06V 20/698 (2022.01); G06V 30/2504 (2022.01); G16H 30/20 (2018.01); G16H 30/40 (2018.01); G16H 50/20 (2018.01); G16H 50/30 (2018.01); G16H 50/50 (2018.01); G06V 2201/031 (2022.01); G06V 2201/033 (2022.01)] 32 Claims
OG exemplary drawing
 
1. A method for automatically processing 3D images to automatically identify cancerous lesions within a subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D anatomical image of a subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;
(b) automatically identifying, by the processor, using one or more machine learning modules, for each of a plurality of target tissue regions, a corresponding target volume of interest (VOI) within the 3D anatomical image;
(c) determining, by the processor, a 3D segmentation map representing a plurality of 3D segmentation masks, each 3D segmentation mask representing a particular identified target VOI;
(d) receiving, by the processor, a 3D functional image of the subject obtained using a functional imaging modality;
(e) identifying, within the 3D functional image, one or more 3D volumes, each corresponding to an identified target VOI, using the 3D segmentation map; and
(f) automatically detecting, by the processor, within at least a portion of the one or more 3D volumes identified within the 3D functional image, one or more hotspots determined to represent lesions based on intensities of voxels within the 3D functional image.
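For illustration only, the sketch below renders the six-step pipeline of claim 1 in Python with NumPy/SciPy. It is not the patented implementation: the function names (segment_target_vois, detect_hotspots), the geometric segmentation placeholder, the fixed intensity threshold, and the assumption that the anatomical and functional volumes are already co-registered are all hypothetical choices made for this sketch; the claim itself leaves the machine learning modules and hotspot criteria unspecified at this level.

```python
# Minimal sketch of the claimed pipeline, steps (a)-(f).
# All names, thresholds, and the placeholder segmentation are assumptions.
import numpy as np
from scipy import ndimage


def segment_target_vois(anatomical_image: np.ndarray) -> np.ndarray:
    """Steps (b)/(c): return a 3D segmentation map in which each nonzero
    integer label marks one target volume of interest (VOI).

    In the claim this is done by one or more machine learning modules;
    this stand-in simply labels a fixed cuboid so the sketch runs.
    """
    seg_map = np.zeros(anatomical_image.shape, dtype=np.int32)
    seg_map[16:32, 16:32, 16:32] = 1  # placeholder "VOI" label 1
    return seg_map


def detect_hotspots(functional_image: np.ndarray,
                    segmentation_map: np.ndarray,
                    intensity_threshold: float) -> list[dict]:
    """Steps (e)/(f): within each VOI of the segmentation map, flag
    connected groups of high-intensity voxels as candidate hotspots.

    Assumes the anatomical and functional images are co-registered,
    i.e. the segmentation map indexes directly into the functional volume.
    """
    hotspots = []
    for voi_label in np.unique(segmentation_map):
        if voi_label == 0:  # 0 = background, not a target VOI
            continue
        voi_mask = segmentation_map == voi_label                 # step (e)
        hot_voxels = voi_mask & (functional_image > intensity_threshold)
        components, n_components = ndimage.label(hot_voxels)     # step (f)
        for hotspot_id in range(1, n_components + 1):
            component_mask = components == hotspot_id
            hotspots.append({
                "voi_label": int(voi_label),
                "num_voxels": int(component_mask.sum()),
                "peak_intensity": float(functional_image[component_mask].max()),
            })
    return hotspots


if __name__ == "__main__":
    # Steps (a)/(d): stand-in anatomical (e.g. CT-like) and functional
    # (e.g. PET/SPECT-like) volumes; real inputs would come from DICOM series.
    anatomical = np.random.rand(64, 64, 64)
    functional = np.random.rand(64, 64, 64)

    seg_map = segment_target_vois(anatomical)
    candidates = detect_hotspots(functional, seg_map, intensity_threshold=0.9)
    print(f"{len(candidates)} candidate hotspots detected")
```

The geometric placeholder in segment_target_vois only marks where the output of the claimed machine learning modules would plug in, and the fixed threshold stands in for whatever intensity-based hotspot criterion an actual embodiment would apply.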