CPC G06Q 30/0643 (2013.01) [G06N 3/08 (2013.01); G06Q 30/0271 (2013.01); G06Q 30/0276 (2013.01); G06V 40/103 (2022.01); G06V 40/161 (2022.01)]. 20 Claims.
1. A method comprising:
selecting a first product type as a target product type in response to determining that a first pose corresponds to a whole body pose in which a first type of fashion item comprising a whole-body fashion item is depicted in a first image;
selecting a second product type as the target product type in response to determining that the first pose corresponds to a partial body pose in which a second type of fashion item comprising a partial body fashion item is depicted in the first image;
searching, based on the target product type and the first type of fashion item, a plurality of products to identify a first product that corresponds to the target product type matching the first pose of a first person depicted in the first image and that includes one or more attributes associated with the first type of fashion item;
applying a trained neural network to the first image that has been previously stored by a first device to generate a segmentation to distinguish a portion of the first person depicted in the first image from a background depicted in the first image, the trained neural network being trained by performing operations comprising:
receiving training data comprising a plurality of training monocular images and a ground truth segmentation for each of the plurality of training monocular images;
applying a neural network to a first training monocular image of the plurality of training monocular images to estimate a segmentation of a garment worn by a given person depicted in the first training monocular image;
computing a deviation between the estimated segmentation and the ground truth segmentation associated with the first training monocular image;
updating parameters of the neural network based on the computed deviation; and
in response to applying the trained neural network to the first image to generate the segmentation, modifying the first image to generate a content item that depicts the first person wearing the first product, in which the first product is placed on the portion of the first person in a manner that blends with the background of the first image, the modifying of the first image to generate the content item comprising:
calculating characteristic points for a set of elements of the first person to generate a mesh based on the calculated characteristic points;
generating one or more areas on the mesh of the first person;
aligning the one or more areas of the first person with one or more elements of a first augmented reality representation of the first product at a position; and
modifying one or more visual properties of the one or more areas to cause the first device to display the first augmented reality representation within the first image at an individual display position relative to a display position of the first person.
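The training limitations recited above (receive images with ground-truth segmentations, estimate a segmentation, compute a deviation, update parameters) can be sketched as a minimal gradient-descent loop. This is an illustrative toy, not the patented network: a single shared per-pixel logistic weight stands in for a real segmentation model, and all names (`train_segmenter`, `sigmoid`, the synthetic arrays) are assumptions introduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_segmenter(images, masks, lr=0.5, epochs=200):
    """images: (N, H, W) grayscale in [0, 1]; masks: (N, H, W) in {0, 1}.

    Fits a per-pixel logistic model p = sigmoid(w * x + b) by descending
    the binary cross-entropy deviation between the estimated and
    ground-truth segmentations.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1)  # single shared weight (toy "network")
    b = 0.0
    for _ in range(epochs):
        # estimate a segmentation for each training image
        pred = sigmoid(w * images + b)
        # deviation gradient (dBCE/dlogit) against ground truth
        grad = pred - masks
        # update parameters based on the computed deviation
        w -= lr * np.mean(grad * images)
        b -= lr * np.mean(grad)
    return w, b

# Synthetic "monocular images": brighter pixels belong to the garment.
imgs = np.array([[[0.9, 0.1], [0.8, 0.2]]])
gts = np.array([[[1.0, 0.0], [1.0, 0.0]]])
w, b = train_segmenter(imgs, gts)
```

After training, the binary cross-entropy on the toy data falls below the chance-level value of ln 2, reflecting the claimed loop of estimating, measuring deviation, and updating parameters.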
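The overlay limitations (calculate characteristic points, generate areas on a mesh, align an augmented reality representation, modify visual properties) can likewise be sketched. This is a deliberately simplified assumption-laden illustration: characteristic points are reduced to the corners of the person's mask, the "mesh area" to a bounding box, and the AR representation to a texture that is resized and alpha-blended only where the person is, so it blends with the untouched background. All function and variable names are hypothetical.

```python
import numpy as np

def characteristic_points(mask):
    """Corner characteristic points of the person region in a boolean (H, W) mask."""
    ys, xs = np.nonzero(mask)
    return (ys.min(), xs.min()), (ys.max(), xs.max())

def blend_overlay(image, mask, texture, alpha=0.6):
    """Align `texture` with the person's area and alpha-blend it into `image`."""
    (y0, x0), (y1, x1) = characteristic_points(mask)
    h, w = y1 - y0 + 1, x1 - x0 + 1
    # nearest-neighbour resize of the AR texture to the target area
    ty = np.arange(h) * texture.shape[0] // h
    tx = np.arange(w) * texture.shape[1] // w
    patch = texture[np.ix_(ty, tx)]
    out = image.astype(float).copy()
    region = mask[y0:y1 + 1, x0:x1 + 1]
    # modify visual properties only on the person's pixels, leaving the
    # background untouched so the overlay blends with it
    out[y0:y1 + 1, x0:x1 + 1][region] = (
        (1 - alpha) * out[y0:y1 + 1, x0:x1 + 1][region] + alpha * patch[region]
    )
    return out

img = np.zeros((4, 4))              # black background
person = np.zeros((4, 4), dtype=bool)
person[1:3, 1:3] = True             # 2x2 person region
tex = np.full((2, 2), 255.0)        # flat white "product" texture
result = blend_overlay(img, person, tex)
```

Only the pixels inside the person mask are modified; background pixels keep their original values, which is the behaviour the claim describes as placing the product on the person "in a manner that blends with the background."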