US 12,354,300 B2
Inverting neural radiance fields for pose estimation
Tsung-Yi Lin, Sunnyvale, CA (US); Peter Raymond Florence, San Francisco, CA (US); Yen-Chen Lin, Cambridge, MA (US); and Jonathan Tilton Barron, Alameda, CA (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Appl. No. 18/011,601
Filed by Google LLC, Mountain View, CA (US)
PCT Filed Nov. 15, 2021, PCT No. PCT/US2021/059313,
§ 371(c)(1), (2) Date Dec. 20, 2022,
PCT Pub. No. WO2022/104178, PCT Pub. Date May 19, 2022.
Claims priority of provisional application 63/114,399, filed on Nov. 16, 2020.
Prior Publication US 2023/0230275 A1, Jul. 20, 2023
Int. Cl. G06T 7/70 (2017.01)
CPC G06T 7/70 (2017.01) [G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computing system for determining camera pose, the computing system comprising:
one or more processors; and
one or more non-transitory computer-readable media that collectively store:
a machine-learned neural radiance field model that has been previously trained to model a scene; and
instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
obtaining a subject image that was captured by a camera; and
for each of one or more pose update iterations:
obtaining a current estimated pose for the camera;
processing data descriptive of the current estimated pose with the machine-learned neural radiance field model to generate one or more synthetic pixels of a synthetic image of the scene from the current estimated pose;
evaluating a loss function that compares the one or more synthetic pixels with one or more observed pixels included in the subject image that was captured by the camera; and
updating the current estimated pose for the camera based at least in part on a gradient of the loss function.
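Illustrative sketch (not part of the patent text): the pose update loop recited in claim 1 can be pictured as gradient descent on a photometric loss, where only the camera pose is treated as a free variable and the previously trained neural radiance field stays frozen. The JAX code below is a minimal, hypothetical rendition of that loop under stated assumptions; render_pixels is a trivial differentiable stand-in for a real NeRF volume renderer, and all names here (nerf_params, pose_update_step, the 3x4 pose parameterization, the learning rate) are illustrative assumptions, not the patent's or any library's actual API.

import jax
import jax.numpy as jnp

def render_pixels(nerf_params, pose, pixel_coords):
    # Placeholder for the pre-trained neural radiance field renderer: a real
    # implementation would cast rays through pixel_coords using the camera
    # pose, query the radiance field, and volume-render RGB values. This
    # trivial differentiable stand-in only keeps the sketch self-contained.
    points = pixel_coords @ pose[:3, :3].T + pose[:3, 3]
    return jax.nn.sigmoid(points @ nerf_params["w"] + nerf_params["b"])

def photometric_loss(pose, nerf_params, pixel_coords, observed_rgb):
    # Generate synthetic pixels from the current estimated pose and compare
    # them with the observed pixels of the subject image.
    synthetic_rgb = render_pixels(nerf_params, pose, pixel_coords)
    return jnp.mean((synthetic_rgb - observed_rgb) ** 2)

@jax.jit
def pose_update_step(pose, nerf_params, pixel_coords, observed_rgb, lr=1e-2):
    # Gradient of the loss is taken with respect to the pose only; the
    # NeRF weights are held fixed.
    loss, grad = jax.value_and_grad(photometric_loss)(
        pose, nerf_params, pixel_coords, observed_rgb)
    # Update the current estimated pose along the negative gradient.
    return pose - lr * grad, loss

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    nerf_params = {"w": jax.random.normal(k1, (3, 3)), "b": jnp.zeros(3)}
    pose = jnp.eye(4)[:3]                              # 3x4 camera pose estimate
    pixel_coords = jax.random.normal(k2, (128, 3))     # sampled ray directions
    observed_rgb = jax.random.uniform(k3, (128, 3))    # pixels from the subject image
    for _ in range(100):                               # pose update iterations
        pose, loss = pose_update_step(pose, nerf_params, pixel_coords, observed_rgb)

A practical system might parameterize the pose update on the SE(3) manifold and re-sample a different subset of observed pixels at each iteration, but the overall structure of the sketch follows the recited operations: render synthetic pixels from the current estimate, evaluate a loss against the observed pixels, and update the pose based on the gradient of that loss.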