US 11,854,069 B2
Personalized try-on ads
Itamar Berger, Hod Hasharon (IL); Gal Dudovitch, Tel Aviv (IL); and Ma'ayan Shuvi, Tel Aviv (IL)
Assigned to SNAP INC., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Jul. 16, 2021, as Appl. No. 17/305,909.
Prior Publication US 2023/0020218 A1, Jan. 19, 2023
Int. Cl. G06Q 30/0601 (2023.01); G06V 40/16 (2022.01); G06V 40/10 (2022.01); G06N 3/08 (2023.01); G06Q 30/0251 (2023.01); G06Q 30/0241 (2023.01)
CPC G06Q 30/0643 (2013.01) [G06N 3/08 (2013.01); G06Q 30/0271 (2013.01); G06Q 30/0276 (2013.01); G06V 40/103 (2022.01); G06V 40/161 (2022.01)] 20 Claims
 
1. A method comprising:
accessing, by one or more processors, content previously received by an application server from a first client device associated with a first user;
processing the content to identify a first image that depicts the first user wearing a first fashion item that has been previously stored by the first client device;
determining a first pose of the first user depicted in the first image;
selecting a target product type based on the first pose of the first user, comprising:
selecting a first product type as the target product type in response to determining that the first pose corresponds to a whole body pose in which a whole-body fashion item is depicted in the first image; and
selecting a second product type as the target product type in response to determining that the first pose corresponds to a partial body pose in which a partial body fashion item is depicted in the first image;
searching, based on the selected target product type and the first fashion item, a plurality of products to identify a first product that corresponds to the selected target product type matching the first pose of the first user depicted in the first image and that includes one or more attributes associated with the first fashion item;
applying a trained neural network to the first image that has been previously stored by the first client device to generate a segmentation to distinguish a portion of the first user depicted in the first image from a background depicted in the first image, the trained neural network being trained by performing operations comprising:
receiving training data comprising a plurality of training monocular images and ground truth segmentations for each of the plurality of training monocular images;
applying the neural network to a first training monocular image of the plurality of training monocular images to estimate a segmentation of a garment worn by a given user depicted in the first training monocular image;
computing a deviation between the estimated segmentation and the ground truth segmentation associated with the first training monocular image; and
updating parameters of the neural network based on the computed deviation;
modifying the first image to generate an advertisement that depicts the first user wearing the first product in which the first product is placed on the portion of the first user in a manner that blends with the background of the first image based on the segmentation generated by the trained neural network; and
without receiving a request from the first user to view an advertisement, during a content browsing session being accessed by the first client device, causing, to be displayed, the advertisement that depicts the first user wearing the first product blended with the background of the first image that has been previously stored by the first client device.
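The claim's pose-based product-type selection and its segmentation-training sub-steps (receive training images with ground-truth segmentations, apply the network, compute the deviation, update the parameters) can be illustrated with a minimal sketch. This is not the patented implementation: the function names are invented, and a per-pixel logistic classifier stands in for the trained neural network purely to make the training loop concrete.

```python
import numpy as np

def select_target_product_type(pose: str) -> str:
    """Illustrative version of the claim's selection step: a whole-body
    pose maps to a whole-body product type, a partial-body pose to a
    partial-body product type. Labels here are placeholders."""
    return "whole_body_product" if pose == "whole_body" else "partial_body_product"

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_segmenter(images, masks, epochs=100, lr=0.5):
    """Toy stand-in for the claim's segmentation network training.

    images: list of (H, W, C) float arrays (monocular training images)
    masks:  list of (H, W) binary ground-truth segmentations

    Each iteration mirrors the claimed operations: apply the network to a
    training monocular image, compute the deviation from the ground-truth
    segmentation, and update the parameters based on that deviation.
    """
    c = images[0].shape[-1]
    w = np.zeros(c)  # per-channel weights: the toy network's parameters
    b = 0.0
    for _ in range(epochs):
        for img, gt in zip(images, masks):
            # Apply the "network": estimate a per-pixel segmentation.
            pred = sigmoid(img @ w + b)
            # Deviation between the estimate and the ground truth
            # (gradient of binary cross-entropy w.r.t. the logits).
            err = pred - gt
            # Update parameters based on the computed deviation.
            w -= lr * np.einsum("hw,hwc->c", err, img) / err.size
            b -= lr * err.mean()
    return w, b
```

In practice the segmentation model would be a deep convolutional network and the deviation a cross-entropy or Dice loss over the predicted garment mask, but the train-on-deviation structure is the same as in this sketch.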