US 11,741,643 B2
Reconstruction of dynamic scenes based on differences between collected view and synthesized view
Hyojin Kim, Davis, CA (US); Rushil Anirudh, Dublin, CA (US); Kyle Champley, Pleasanton, CA (US); Kadri Aditya Mohan, Newark, CA (US); Albert William Reed, Los Lunas, NM (US); and Suren Jayasuriya, Tempe, AZ (US)
Assigned to Lawrence Livermore National Security, LLC, Livermore, CA (US); and Arizona Board of Regents on Behalf of Arizona State University, Scottsdale, AZ (US)
Filed by LAWRENCE LIVERMORE NATIONAL SECURITY, LLC, Livermore, CA (US); and ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY, Scottsdale, AZ (US)
Filed on Mar. 22, 2021, as Appl. No. 17/208,849.
Prior Publication US 2022/0301241 A1, Sep. 22, 2022
Int. Cl. G06T 11/00 (2006.01); G06T 7/207 (2017.01); G06T 15/08 (2011.01)
CPC G06T 11/008 (2013.01) [G06T 7/207 (2017.01); G06T 15/08 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30181 (2013.01); G06T 2210/41 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method performed by one or more computing systems for generating a four-dimensional (4D) representation of a three-dimensional (3D) scene that has motion, the 4D representation representing the motion of the scene, the method comprising:
    accessing a collected view of the scene, the collected view representing attenuation of an electromagnetic signal transmitted through the scene at various angles; and
    for each of a plurality of iterations,
        applying a 3D representation generator to generate an initial 3D representation of the scene for the iteration, the 3D representation generator having scene weights, a 3D representation having voxels that each represent a portion of the scene;
        applying a 4D motion generator to generate a 4D motion field as a sequence of 3D motion fields for the iteration, a 3D motion field indicating location of voxels of the initial 3D representation, the 4D motion generator having motion weights;
        applying a 4D representation generator to generate a 4D representation having a sequence of 3D representations based on the initial 3D representation and the 4D motion field;
        generating a synthesized view of the scene from the generated 4D representation; and
        adjusting the scene weights and the motion weights based on differences between the collected view and the synthesized view.
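
Claim 1 describes, in effect, an analysis-by-synthesis loop: a scene generator and a motion generator jointly produce a time-varying volume, a view is synthesized from that volume (the collected view represents transmitted-signal attenuation, i.e., projection-style measurements), and the generators' weights are adjusted from the view difference. The sketch below is a minimal, hypothetical illustration of that loop in JAX, not the patented implementation: the coordinate-MLP generators, the summed-attenuation projector, and all names, network sizes, and array shapes are illustrative assumptions.

    # Hypothetical sketch of the claim-1 loop (all names and shapes assumed).
    import jax, jax.numpy as jnp

    def init_mlp(key, sizes):
        # Random dense-layer parameters for a small coordinate MLP.
        keys = jax.random.split(key, len(sizes) - 1)
        return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
                for k, (m, n) in zip(keys, zip(sizes[:-1], sizes[1:]))]

    def mlp(params, x):
        for W, b in params[:-1]:
            x = jnp.tanh(x @ W + b)
        W, b = params[-1]
        return x @ W + b

    def scene_density(scene_w, xyz):
        # Stand-in for the "3D representation generator":
        # 3D coordinate -> nonnegative attenuation value.
        return jax.nn.softplus(mlp(scene_w, xyz)).squeeze(-1)

    def motion_displacement(motion_w, xyz, t):
        # Stand-in for the "4D motion generator":
        # (3D coordinate, time) -> 3D displacement; evaluating it at a
        # sequence of times yields a sequence of 3D motion fields.
        txyz = jnp.concatenate([xyz, jnp.full(xyz.shape[:-1] + (1,), t)], axis=-1)
        return mlp(motion_w, txyz)

    def synthesize_view(scene_w, motion_w, rays, t):
        # Warp sample points by the motion field, query the initial 3D
        # representation at the warped locations, and sum attenuation along
        # each ray (a crude line-integral projector; a real system would
        # synthesize views at the various collection angles).
        warped = rays + motion_displacement(motion_w, rays, t)
        return scene_density(scene_w, warped).sum(axis=-1)

    def loss(params, rays, t, collected):
        # "Differences between the collected view and the synthesized view."
        scene_w, motion_w = params
        return jnp.mean((synthesize_view(scene_w, motion_w, rays, t) - collected) ** 2)

    key = jax.random.PRNGKey(0)
    k1, k2 = jax.random.split(key)
    params = (init_mlp(k1, [3, 64, 64, 1]),   # scene weights
              init_mlp(k2, [4, 64, 64, 3]))   # motion weights

    grad_fn = jax.jit(jax.grad(loss))
    rays = jax.random.uniform(key, (128, 32, 3), minval=-1.0, maxval=1.0)
    collected = jnp.zeros(128)  # placeholder for one measured projection
    lr = 1e-3
    for step in range(100):  # "for each of a plurality of iterations"
        grads = grad_fn(params, rays, 0.5, collected)
        # Adjust the scene weights and the motion weights together.
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

Under these assumptions, the design choice the claim captures is that both networks are optimized against the same measurement residual: the scene weights explain the static structure while the motion weights explain how voxels move over time, so a dynamic scene can be recovered even though each projection angle observes the scene at a different instant.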