US 12,008,764 B2
Systems, devices, and methods for image processing to generate an image having predictive tagging
Gregory Johnson, Seattle, WA (US); Chawin Ounkomol, Seattle, WA (US); Forrest Collman, Seattle, WA (US); and Sharmishtaa Seshamani, Seattle, WA (US)
Assigned to ALLEN INSTITUTE, Seattle, WA (US)
Filed by ALLEN INSTITUTE, Seattle, WA (US)
Filed on Feb. 16, 2023, as Appl. No. 18/170,076.
Application 18/170,076 is a continuation of application No. 17/148,192, filed on Jan. 13, 2021, granted, now 11,614,610.
Application 17/148,192 is a division of application No. 16/304,021, granted, now 10,935,773, issued on Mar. 2, 2021, previously published as PCT/US2018/045840, filed on Aug. 8, 2018.
Claims priority of provisional application 62/651,765, filed on Apr. 3, 2018.
Claims priority of provisional application 62/647,456, filed on Mar. 23, 2018.
Claims priority of provisional application 62/568,749, filed on Oct. 5, 2017.
Claims priority of provisional application 62/560,043, filed on Sep. 18, 2017.
Claims priority of provisional application 62/543,333, filed on Aug. 9, 2017.
Prior Publication US 2023/0281825 A1, Sep. 7, 2023
Int. Cl. G06T 7/11 (2017.01); G02B 21/00 (2006.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06N 20/20 (2019.01); G06T 7/174 (2017.01); G06T 7/187 (2017.01); G06V 10/25 (2022.01); G06V 10/50 (2022.01); G06V 10/764 (2022.01); G06V 20/69 (2022.01)
CPC G06T 7/11 (2017.01) [G02B 21/008 (2013.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 20/20 (2019.01); G06T 7/174 (2017.01); G06T 7/187 (2017.01); G06V 10/25 (2022.01); G06V 10/50 (2022.01); G06V 10/764 (2022.01); G06V 20/695 (2022.01); G06V 20/698 (2022.01); G06T 2207/10061 (2013.01); G06T 2207/10064 (2013.01); G06T 2207/30024 (2013.01)] 3 Claims
OG exemplary drawing
 
1. A computing device, comprising:
a communication interface configured to receive microscopy images;
a processor; and
a non-transitory computer-readable medium communicatively coupled to the processor and storing computer-executable instructions that, when executed by the processor, cause the processor to:
receive, via the communication interface, a first set of three-dimensional (3D) microscopy images and a second set of 3D microscopy images, wherein the first set of 3D microscopy images are 3D confocal laser scanning microscopy (CLSM) fluorescence images of a plurality of tissue samples each having a plurality of cells, and wherein the second set of 3D microscopy images are 3D transmitted light images of the same plurality of tissue samples, wherein fluorescence labeling is applied to the plurality of cells in the first set of 3D microscopy images, and wherein no fluorescence labeling is included in the second set of 3D microscopy images;
generate a neural network configured to convert a first type of image that is a 3D transmitted light image of cells to a second type of image that is a predicted 3D CLSM fluorescence image of the cells, wherein no fluorescence labeling is included in the first type of image, and wherein the instructions cause the processor to generate the neural network by training the neural network based on the first set of 3D microscopy images and the second set of 3D microscopy images;
receive, after the neural network is generated and trained, an additional 3D microscopy image that is a transmitted light image of an additional tissue sample having a plurality of cells, wherein no fluorescence labeling is included in the additional 3D microscopy image; and
generate, with the neural network and the additional 3D microscopy image, a predicted 3D CLSM fluorescence image that includes predicted fluorescence labeling of the plurality of cells for the additional tissue sample.
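The claimed workflow can be summarized as: train a model on paired 3D volumes (transmitted-light input, fluorescence target), then apply the trained model to a new unlabeled transmitted-light volume to produce a predicted fluorescence volume. The following is a minimal, hypothetical sketch of that train-then-predict pipeline using synthetic data and a global affine map in place of the patent's 3D convolutional neural network; the array names and the per-voxel linear model are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data standing in for registered 3D image
# sets: transmitted-light volumes (input) and CLSM fluorescence volumes
# (target). Shapes are (n_samples, z, y, x).
n, z, y, x = 8, 4, 16, 16
bright = rng.random((n, z, y, x))
fluor = 2.0 * bright + 0.5 + 0.01 * rng.standard_normal((n, z, y, x))

def fit_intensity_map(inputs, targets):
    """Least-squares fit of a global affine map: target ~ a*input + b.

    Toy stand-in for the claimed training step, in which a 3D CNN learns
    a much richer transmitted-light -> fluorescence mapping.
    """
    X = np.stack([inputs.ravel(), np.ones(inputs.size)], axis=1)
    coef, *_ = np.linalg.lstsq(X, targets.ravel(), rcond=None)
    return coef  # (a, b)

def predict_fluorescence(coef, volume):
    """Apply the learned map to an unlabeled transmitted-light volume."""
    a, b = coef
    return a * volume + b

coef = fit_intensity_map(bright, fluor)
new_volume = rng.random((z, y, x))                 # new unlabeled 3D image
predicted = predict_fluorescence(coef, new_volume) # predicted fluorescence
```

In the claimed system the affine map is replaced by a trained neural network, but the data flow is the same: paired labeled/unlabeled acquisitions in, model parameters out, then prediction on label-free input only.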