CPC G06T 19/006 (2013.01) [G06T 7/251 (2017.01); G06T 7/75 (2017.01); G06V 20/20 (2022.01); G06T 2207/10016 (2013.01)]
12 Claims
1. A method, comprising:
identifying a feature associated with a real-world object in a video of an Augmented Reality (AR) application (app) during AR session initiation of the AR app within a physical environment that includes the real-world object;
maintaining pixel coordinates for the feature within the video;
wherein maintaining the pixel coordinates further includes:
maintaining the pixel coordinates for the feature and additional pixel coordinates for additional features of the real-world object in a two-dimensional (2D) array;
maintaining sizes and dimensions of the feature based on a model that comprises sizes and dimensions of the real-world object;
wherein maintaining the sizes further includes:
maintaining the sizes and dimensions for the feature and additional sizes and dimensions for the additional features of the real-world object in a three-dimensional (3D) array; and
estimating a position and an orientation of the real-world object as depicted in the video based on the pixel coordinates, the sizes of the feature, and the dimensions of the feature;
wherein estimating further includes:
obtaining intrinsic camera parameters for a camera that is supplying the video from the AR app; and
wherein estimating further includes processing a Perspective-n-Point algorithm using the 2D array, the 3D array, and the intrinsic camera parameters to estimate the position and the orientation of the real-world object within the video.
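The estimating step of claim 1 corresponds to a standard Perspective-n-Point solve over index-aligned 2D and 3D feature arrays plus the camera intrinsics. The following is a minimal sketch, assuming OpenCV's cv2.solvePnP as one concrete PnP implementation (the claim does not name a particular library); the pixel coordinates, model dimensions, and intrinsic parameters shown are illustrative placeholders, not values from the claim.

```python
import numpy as np
import cv2

# 2D array: pixel coordinates maintained for the feature and additional
# features of the real-world object, as tracked in the AR app's video.
image_points_2d = np.array([
    [320.0, 240.0],
    [400.0, 240.0],
    [400.0, 320.0],
    [320.0, 320.0],
], dtype=np.float32)

# 3D array: sizes and dimensions of the same features, taken from a model of
# the real-world object (here a hypothetical 0.1 m square face, object frame).
model_points_3d = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
], dtype=np.float32)

# Intrinsic camera parameters obtained for the camera supplying the video
# (focal lengths fx, fy and principal point cx, cy are placeholder values).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
camera_matrix = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)  # assume no lens distortion

# Perspective-n-Point: estimate the object's orientation (rvec) and
# position (tvec) relative to the camera from the 2D/3D correspondences.
ok, rvec, tvec = cv2.solvePnP(model_points_3d, image_points_2d,
                              camera_matrix, dist_coeffs)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # orientation as a 3x3 matrix
    print("position (m):", tvec.ravel())
    print("orientation:\n", rotation_matrix)
```

Maintaining the pixel coordinates and the model dimensions in parallel, index-aligned arrays is what lets a single PnP solve consume them directly together with the intrinsic camera parameters to recover the object's position and orientation in the video frame.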