CPC G06F 18/24147 (2023.01) [G06N 3/08 (2013.01); G06N 20/00 (2019.01); G06T 7/10 (2017.01); G06T 17/00 (2013.01); G06T 2207/20084 (2013.01); G06T 2210/12 (2013.01)]
17 Claims

1. A method comprising:
receiving two-dimensional (2D) image data representing an object with at least a first part and a second part;
identifying, by a first machine learning model, a first portion of the 2D image data of the object that corresponds to the first part and a second portion of the 2D image data of the object that corresponds to the second part;
generating, by the first machine learning model, a first shape embedding representing the first portion of the 2D image data;
generating, by the first machine learning model, a second shape embedding representing the second portion of the 2D image data;
determining a first three-dimensional (3D) model stored in non-transitory computer-readable memory, wherein a third shape embedding is associated with the first 3D model;
determining a second 3D model stored in the non-transitory computer-readable memory, wherein a fourth shape embedding is associated with the second 3D model;
determining a first distance in a shape embedding space between the first shape embedding and the third shape embedding;
determining a second distance in the shape embedding space between the first shape embedding and the fourth shape embedding, wherein the first distance is less than the second distance;
selecting the first 3D model to represent the first part based at least in part on the first distance being less than the second distance;
retrieving, from the non-transitory computer-readable memory, the first 3D model representing the first part using the first shape embedding;
retrieving a third 3D model representing the second part from the non-transitory computer-readable memory using the second shape embedding; and
generating output data comprising the first 3D model and the third 3D model.
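For illustration only, the retrieval recited in claim 1 (selecting a stored 3D model whose associated shape embedding is nearest to a part's shape embedding in the shape embedding space) can be sketched as follows. This is a minimal sketch, assuming a Euclidean distance metric and an in-memory list of stored models; the names LibraryEntry and retrieve_model, and the example embeddings, are hypothetical and are not specified by the claim.

```python
# Minimal sketch of the embedding-distance retrieval described in claim 1.
# The Euclidean metric, the in-memory "library" of stored 3D models, and all
# identifiers below are illustrative assumptions, not taken from the claim.
from dataclasses import dataclass
import numpy as np

@dataclass
class LibraryEntry:
    model_id: str                # identifier of a stored 3D model
    shape_embedding: np.ndarray  # shape embedding associated with that model

def retrieve_model(part_embedding: np.ndarray,
                   library: list[LibraryEntry]) -> str:
    """Return the stored 3D model whose shape embedding is closest to the
    part's shape embedding in the shape embedding space."""
    distances = [np.linalg.norm(part_embedding - entry.shape_embedding)
                 for entry in library]
    return library[int(np.argmin(distances))].model_id

# Hypothetical usage: embeddings for the two identified parts of the object.
library = [LibraryEntry("part_model_a", np.array([0.1, 0.9, 0.2])),
           LibraryEntry("part_model_b", np.array([0.8, 0.1, 0.5]))]
first_part_embedding = np.array([0.12, 0.85, 0.25])
second_part_embedding = np.array([0.75, 0.15, 0.55])

output_models = [retrieve_model(first_part_embedding, library),
                 retrieve_model(second_part_embedding, library)]
print(output_models)  # e.g. ['part_model_a', 'part_model_b']
```

A practical system would likely index the stored embeddings (for example, with an approximate nearest-neighbor structure) rather than scanning them linearly, but the claim only requires comparing distances between embeddings in the shape embedding space.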