US 11,734,866 B2
Controlling interactive fashion based on voice
Itamar Berger, Hod Hasharon (IL); Gal Dudovitch, Tel Aviv (IL); Gal Sasson, Kibbutz Ayyelet Hashahar (IL); Ma'ayan Shuvi, Tel Aviv (IL); and Matan Zohar, Rishon LeZion (IL)
Assigned to SNAP INC., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Sep. 13, 2021, as Appl. No. 17/447,509.
Prior Publication US 2023/0078483 A1, Mar. 16, 2023
Int. Cl. G06T 11/60 (2006.01); G06T 7/11 (2017.01); G06T 11/40 (2006.01); G10L 15/26 (2006.01); G10L 15/22 (2006.01); G06V 20/20 (2022.01); H04L 51/42 (2022.01)
CPC G06T 11/60 (2013.01) [G06T 7/11 (2017.01); G06T 11/40 (2013.01); G06V 20/20 (2022.01); G10L 15/22 (2013.01); G10L 15/26 (2013.01); G06T 2207/30196 (2013.01); G06V 2201/09 (2022.01); H04L 51/42 (2022.05)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, by one or more processors of a client device, an image that includes a depiction of a person wearing a fashion item;
generating, by the one or more processors, a segmentation of the fashion item worn by the person depicted in the image;
receiving voice input associated with the person depicted in the image;
in response to receiving the voice input, generating one or more augmented reality elements representing the voice input; and
applying the one or more augmented reality elements to the fashion item worn by the person based on the segmentation of the fashion item worn by the person.
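The pipeline in claim 1 can be sketched as a minimal toy program: segment the fashion item into a binary mask, map the voice input to an augmented reality element, then apply that element only inside the masked region. Everything here is an illustrative assumption, not the patent's implementation: the grid-of-characters "image", the function names (`segment_fashion_item`, `generate_ar_elements`, `apply_ar_elements`), and the rule mapping voice text to a fill character are all hypothetical stand-ins for real segmentation, speech recognition, and AR rendering.

```python
# Toy sketch of the claim-1 pipeline. A real system would use an image
# segmentation model, a speech-to-text engine, and an AR renderer; here
# the "image" is a grid of labels and the "AR element" is a character.

Image = list[list[str]]  # each cell holds a pixel label, e.g. 's' = shirt
Mask = list[list[bool]]

def segment_fashion_item(image: Image, item_label: str) -> Mask:
    """Generate a segmentation: True where the pixel belongs to the item."""
    return [[px == item_label for px in row] for row in image]

def generate_ar_elements(voice_text: str) -> str:
    """Derive an AR element from the voice input (toy rule, assumed)."""
    return "*" if "sparkle" in voice_text.lower() else "#"

def apply_ar_elements(image: Image, mask: Mask, element: str) -> Image:
    """Apply the AR element only where the segmentation mask is True."""
    return [
        [element if m else px for px, m in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

# 's' marks the fashion item (a shirt); '.' is everything else.
image = [[".", "s", "s"],
         [".", "s", "s"],
         [".", ".", "."]]

mask = segment_fashion_item(image, "s")            # segmentation step
element = generate_ar_elements("Make my shirt sparkle")  # voice -> AR element
result = apply_ar_elements(image, mask, element)   # masked application

for row in result:
    print("".join(row))
```

The key point the sketch illustrates is the final limitation: the AR elements are applied "based on the segmentation", i.e. restricted to the masked fashion-item pixels, leaving the rest of the depicted person untouched.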