US 12,290,705 B2
Real-time motion monitoring using deep learning
Philip P. Novosad, Montreal (CA); and Silvain Beriault, Longueuil (CA)
Assigned to Elekta Limited, Montreal (CA)
Appl. No. 17/906,417
Filed by Elekta Limited, Montreal (CA)
PCT Filed Mar. 10, 2021, PCT No. PCT/CA2021/050316
§ 371(c)(1), (2) Date Sep. 15, 2022,
PCT Pub. No. WO2021/184107, PCT Pub. Date Sep. 23, 2021.
Claims priority of provisional application 62/991,356, filed on Mar. 18, 2020.
Prior Publication US 2023/0126640 A1, Apr. 27, 2023.
Int. Cl. G06K 9/00 (2022.01); A61N 5/10 (2006.01); G06T 7/00 (2017.01); G06T 7/262 (2017.01)
CPC A61N 5/1049 (2013.01) [G06T 7/0016 (2013.01); G06T 7/262 (2017.01); A61N 2005/1055 (2013.01); A61N 2005/1061 (2013.01); G06T 2207/10016 (2013.01); G06T 2207/10081 (2013.01); G06T 2207/10088 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30004 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for estimating a patient state comprising:
receiving a 3D reference volume;
receiving a real-time stream of images from an image acquisition device;
using a hardware processor to implement a patient state generator network to:
encode the 3D reference volume using a 3D encoder branch of the patient state generator network;
encode the real-time stream of images using a 2D encoder branch of the patient state generator network;
combine the encoded 3D reference volume and the encoded real-time stream of images; and
estimate a 3D spatial transform that maps the 3D reference volume to a current patient state using a 3D decoder branch of the patient state generator network.
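For readers interested in how such a two-branch network might be realized in code, the following is a minimal PyTorch sketch of the architecture recited in claim 1. It is an illustration only, not the patented implementation: the class name PatientStateGenerator, the warp helper, the layer counts, feature widths, the number of 2D frames, and the choice of a dense displacement field (applied with grid_sample in normalized coordinates) as the "3D spatial transform" are all assumptions made for the example; the claim itself does not specify these details.

# Hypothetical sketch (not the patented implementation): a two-branch
# encoder/decoder that fuses a 3D reference volume with a stack of 2D
# real-time images and predicts a dense 3D displacement field.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatientStateGenerator(nn.Module):
    def __init__(self, num_2d_frames: int = 4, feat: int = 16):
        super().__init__()
        # 3D encoder branch: encodes the 3D reference volume.
        self.enc3d = nn.Sequential(
            nn.Conv3d(1, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 2D encoder branch: encodes the real-time 2D image stream
        # (frames stacked along the channel dimension).
        self.enc2d = nn.Sequential(
            nn.Conv2d(num_2d_frames, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 3D decoder branch: maps the fused features to a 3-channel
        # displacement field (one channel per spatial axis).
        self.dec3d = nn.Sequential(
            nn.ConvTranspose3d(feat * 4, feat * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 3, 3, padding=1),
        )

    def forward(self, volume: torch.Tensor, frames: torch.Tensor) -> torch.Tensor:
        # volume: (B, 1, D, H, W); frames: (B, num_2d_frames, H, W)
        f3d = self.enc3d(volume)                 # (B, 2f, D/4, H/4, W/4)
        f2d = self.enc2d(frames)                 # (B, 2f, H/4, W/4)
        # Broadcast the 2D features along the through-plane (depth) axis so
        # they can be concatenated with the 3D features.
        f2d = f2d.unsqueeze(2).expand(-1, -1, f3d.shape[2], -1, -1)
        fused = torch.cat([f3d, f2d], dim=1)     # combine the two branches
        return self.dec3d(fused)                 # dense displacement field


def warp(volume: torch.Tensor, displacement: torch.Tensor) -> torch.Tensor:
    """Apply the estimated spatial transform to the reference volume.

    Assumes the displacement field is expressed in the normalized [-1, 1]
    coordinates used by grid_sample.
    """
    B, _, D, H, W = volume.shape
    # Identity sampling grid in normalized coordinates, shape (B, D, H, W, 3).
    grid = F.affine_grid(
        torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1), [B, 1, D, H, W],
        align_corners=False,
    )
    warped_grid = grid + displacement.permute(0, 2, 3, 4, 1)
    return F.grid_sample(volume, warped_grid, align_corners=False)


if __name__ == "__main__":
    net = PatientStateGenerator()
    ref = torch.randn(1, 1, 32, 64, 64)     # 3D reference volume
    stream = torch.randn(1, 4, 64, 64)      # latest real-time 2D frames
    field = net(ref, stream)                # estimated 3D spatial transform
    current = warp(ref, field)              # reference mapped to current state
    print(field.shape, current.shape)

The sketch broadcasts the 2D feature map along the through-plane axis before concatenation so that the fused tensor can be decoded back onto the full volume grid; other fusion strategies (for example, tiling flattened features or attention-based mixing) would fit the claim language equally well.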