US 12,475,636 B2
Rendering two-dimensional image of a dynamic three-dimensional scene
Moitreya Chatterjee, Somerville, MA (US); Suhas Lohit, Arlington, MA (US); and Pedro Miraldo, Cambridge, MA (US)
Assigned to Mitsubishi Electric Research Laboratories, Inc.
Filed by Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Filed on Jan. 16, 2024, as Appl. No. 18/413,640.
Prior Publication US 2025/0232518 A1, Jul. 17, 2025
Int. Cl. G06T 15/20 (2011.01); G06T 7/246 (2017.01); G06T 7/73 (2017.01)
CPC G06T 15/20 (2013.01) [G06T 7/246 (2017.01); G06T 7/73 (2017.01); G06T 2207/20084 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An artificial intelligence (AI) image processing system employing a neural radiance field (NeRF) to render a two-dimensional (2D) image of a dynamic three-dimensional (3D) scene from different view angles and different instances of time based on an implicit representation of the 3D scene, the AI image processing system comprising: at least one processor and a memory having instructions stored thereon that cause the at least one processor of the AI image processing system to:
process coordinates of a point in a dynamic 3D scene with a recurrent neural network over a number of time steps indicated by a time instance of interest to produce motion information of the point at the time instance of interest;
process the motion information with a fully connected neural network to produce a displacement of the point from the coordinates in the dynamic 3D scene; and
process a displaced point from a view angle of interest with the NeRF trained for a static 3D scene to render the point on the 2D image of the dynamic 3D scene of the time instance of interest, wherein the displaced point is generated based on the displacement of the point.
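The claimed pipeline can be sketched in code: a recurrent cell is unrolled for a number of steps given by the time instance of interest to produce motion information for a point, a fully connected layer maps that motion information to a 3D displacement, and the displaced point is then queried against a NeRF trained on the static scene. The sketch below is illustrative only, not the patented implementation; all dimensions, weights, function names, and the toy `static_nerf` stand-in are assumptions, with random untrained parameters in place of a learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the claim).
H = 16       # recurrent hidden size ("motion information")
D_POS = 3    # 3D point coordinates

# Toy, randomly initialised weights; a real system would learn these.
W_in = rng.normal(scale=0.1, size=(H, D_POS))
W_rec = rng.normal(scale=0.1, size=(H, H))
b_rnn = np.zeros(H)
W_fc = rng.normal(scale=0.1, size=(D_POS, H))
b_fc = np.zeros(D_POS)

def motion_information(x, t_steps):
    """Unroll a simple recurrent cell for t_steps, feeding the same point
    coordinates at every step; the final hidden state plays the role of the
    claim's 'motion information' at the time instance of interest."""
    h = np.zeros(H)
    for _ in range(t_steps):
        h = np.tanh(W_in @ x + W_rec @ h + b_rnn)
    return h

def displacement(h):
    """Fully connected layer mapping motion information to a 3D offset."""
    return W_fc @ h + b_fc

def static_nerf(x, view_dir):
    """Stand-in for a NeRF trained on the static scene: maps a point and a
    view direction to an (rgb, density) pair."""
    feat = np.tanh(np.concatenate([x, view_dir]))
    rgb = 0.5 * (feat[:3] + 1.0)           # colour channels in [0, 1]
    sigma = float(np.exp(feat[3:].sum()))  # non-negative volume density
    return rgb, sigma

# Render one point of the dynamic scene at time step t from a view angle.
x = np.array([0.2, -0.1, 0.4])          # point in the dynamic 3D scene
view = np.array([0.0, 0.0, 1.0])        # view angle of interest
t = 5                                   # time instance of interest

h = motion_information(x, t)            # step 1: recurrent network
dx = displacement(h)                    # step 2: fully connected network
rgb, sigma = static_nerf(x + dx, view)  # step 3: query the static-scene NeRF
```

A full renderer would evaluate this per sample along each camera ray and composite the (rgb, density) outputs by volume rendering; the sketch shows only the per-point deformation-then-query structure recited in the claim.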