CPC G06T 17/00 (2013.01) [G06T 19/006 (2013.01); G06T 2210/56 (2013.01)]; 20 Claims
1. A method comprising:
accessing, by one or more computing devices having one or more processors and memory, data of a three-dimensional (3D) point cloud corresponding to an environment associated with a client device;
determining, by the one or more computing devices, and based on a global positioning system signal, a first position estimate for an image sensor of at least one of the client device or a companion device associated with the client device, wherein the companion device is separate from the client device;
accessing, by the one or more computing devices, an image of the environment captured by the image sensor;
causing, by the one or more computing devices, the first position estimate and the image to be communicated together as part of a first communication from the client device to a cloud server computer;
receiving, by the one or more computing devices and in response to the first position estimate, a set of structure facade data describing one or more structure facades associated with the environment;
identifying, by the one or more computing devices and using the set of structure facade data, a first structure facade portion of the image corresponding to first structure facade data of the set of structure facade data;
obtaining, by the one or more computing devices from the cloud server computer, a second position estimate for the image sensor, the second position estimate being based on a portion of a set of key points of the 3D point cloud matching to the image;
determining, by the one or more computing devices and based on the first structure facade portion of the image, a third position estimate of the image sensor;
obtaining, by the one or more computing devices from the cloud server computer, a fourth position estimate of the image sensor based on a second structure facade portion of the image, wherein the second structure facade portion corresponds to second structure facade data that is determined based on a set of detailed structure facade data having a higher level of detail than the set of structure facade data;
generating, by the one or more computing devices, an updated position estimate based on at least one of the third position estimate and the fourth position estimate;
generating, by the one or more computing devices, a model of a virtual object within the 3D point cloud; and
generating, by the one or more computing devices, an augmented reality image comprising the virtual object in the environment using the second position estimate for the image sensor, the model of the virtual object within the 3D point cloud, and a match of the portion of the set of key points of the 3D point cloud to the image, wherein the augmented reality image is further generated using the updated position estimate along with the second position estimate to align the virtual object within the image.
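The claim walks through a cascade of progressively refined position estimates (GPS-based first estimate, key-point-match second estimate, facade-based third and fourth estimates) that are combined into an updated estimate used to align the virtual object. A minimal sketch of that fusion step is below; all names (`PositionEstimate`, `fuse_estimates`) are hypothetical, and the confidence-weighted averaging is an illustrative assumption only, not the claimed method, which merely requires that the updated estimate be based on at least one of the third and fourth estimates.

```python
from dataclasses import dataclass

@dataclass
class PositionEstimate:
    # (x, y, z) of the image sensor in a shared world frame, plus a
    # scalar confidence in (0, 1]. Both fields are illustrative.
    x: float
    y: float
    z: float
    confidence: float

def fuse_estimates(*estimates: PositionEstimate) -> PositionEstimate:
    """Confidence-weighted average of the supplied estimates (assumption).

    The claim does not specify a fusion rule; weighting by confidence is
    one plausible way to generate the 'updated position estimate'.
    """
    total = sum(e.confidence for e in estimates)
    return PositionEstimate(
        x=sum(e.x * e.confidence for e in estimates) / total,
        y=sum(e.y * e.confidence for e in estimates) / total,
        z=sum(e.z * e.confidence for e in estimates) / total,
        confidence=min(1.0, total),
    )

# Hypothetical values mirroring the claim's ordering of estimates:
gps_estimate = PositionEstimate(10.0, 20.0, 1.5, confidence=0.2)             # first (GPS signal)
pointcloud_estimate = PositionEstimate(10.4, 20.2, 1.6, confidence=0.7)      # second (key-point match, from server)
facade_estimate = PositionEstimate(10.5, 20.1, 1.6, confidence=0.5)          # third (on-device facade match)
detailed_facade_estimate = PositionEstimate(10.45, 20.15, 1.62, confidence=0.9)  # fourth (server, detailed facade data)

# Updated estimate from the third and fourth estimates, per the claim.
updated = fuse_estimates(facade_estimate, detailed_facade_estimate)
```

The updated estimate would then be used alongside the second (key-point-based) estimate when rendering the virtual object into the captured image.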