US 12,075,193 B2
Emotes for non-verbal communication in a videoconference system
Gerard Cornelis Krol, Leiden (NL); and Erik Stuart Braund, Saugerties, NY (US)
Assigned to Katmai Tech Inc., New York, NY (US)
Filed by Katmai Tech Inc., New York, NY (US)
Filed on May 22, 2023, as Appl. No. 18/200,394.
Application 18/200,394 is a continuation of application No. 17/472,176, filed on Sep. 10, 2021, granted, now 11,695,901.
Application 17/472,176 is a continuation of application No. 17/211,579, filed on Mar. 24, 2021, granted, now 11,140,361, issued on Oct. 5, 2021.
Prior Publication US 2023/0291869 A1, Sep. 14, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 7/15 (2006.01); G06F 3/01 (2006.01); G06F 3/16 (2006.01)
CPC H04N 7/157 (2013.01) [G06F 3/011 (2013.01); G06F 3/017 (2013.01); G06F 3/167 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for videoconferencing in a three-dimensional virtual environment, comprising:
receiving a video stream captured from a camera on a first device of a first user;
receiving a specification of an emote, the specification being input by the first user through the first device;
mapping the video stream onto a three-dimensional model of an avatar of the first user; and
from a perspective of a virtual camera of a second user, rendering for display to the second user through a second device the three-dimensional virtual environment including: (i) the mapped three-dimensional model of the avatar, and (ii) the emote attached to the three-dimensional model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment.
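The final clause of claim 1 recites that the emote's sound is adjusted based on the avatar's distance to the listener's virtual camera. A minimal attenuation function illustrates that idea; this is an illustrative sketch only, not the claimed implementation: the inverse-distance rolloff and the `ref_distance`/`max_distance` parameters are assumptions chosen for clarity.

```python
import math

def emote_gain(avatar_pos, camera_pos, ref_distance=1.0, max_distance=30.0):
    """Volume multiplier in [0.0, 1.0] for an emote's sound, based on the
    Euclidean distance between the avatar and the viewer's virtual camera."""
    dist = math.dist(avatar_pos, camera_pos)
    if dist >= max_distance:
        return 0.0  # avatar too far away: emote is inaudible
    # Inverse-distance rolloff, clamped to unity inside the reference radius
    return min(1.0, ref_distance / max(dist, 1e-9))
```

Under these assumptions, an avatar two units away plays at half volume, one inside the reference radius plays at full volume, and one beyond `max_distance` is silent.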
 
9. A non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising:
receiving a video stream captured from a camera on a first device of a first user;
receiving a specification of an emote, the specification being input by the first user through the first device;
mapping the video stream onto a three-dimensional model of an avatar of the first user; and
from a perspective of a virtual camera of a second user, rendering for display to the second user through a second device a three-dimensional virtual environment including: (i) the mapped three-dimensional model of the avatar, and (ii) the emote attached to the three-dimensional model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment.
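The "mapping the video stream onto a three-dimensional model" step in claims 1 and 9 is conventionally realized as UV texture mapping: each vertex of the avatar's geometry carries (u, v) coordinates into the current video frame. The nearest-neighbor sampler below is purely illustrative; the frame layout and function names are assumptions, not the patent's method.

```python
def sample_frame(frame, u, v):
    """Nearest-neighbor texture lookup: return the pixel of `frame`
    (a row-major list of rows) addressed by UV coordinates in [0, 1]."""
    h, w = len(frame), len(frame[0])
    x = min(int(u * w), w - 1)  # clamp u == 1.0 to the last column
    y = min(int(v * h), h - 1)  # clamp v == 1.0 to the last row
    return frame[y][x]

def texture_avatar(mesh_uvs, frame):
    """Map one video frame onto the avatar: one sampled pixel per vertex."""
    return [sample_frame(frame, u, v) for (u, v) in mesh_uvs]
```

Re-running `texture_avatar` on each incoming frame keeps the avatar's face synchronized with the live video stream.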
 
17. A device for videoconferencing in a three-dimensional virtual environment, comprising:
a processor;
a memory;
a network interface configured to (i) receive a video stream captured from a camera on a first device of a first user, and (ii) receive a specification of an emote, the specification being input by the first user through the first device;
a texture mapper configured to map the video stream onto a three-dimensional model of an avatar of the first user;
a renderer configured to, from a perspective of a virtual camera of a second user, render for display to the second user through a second device the three-dimensional virtual environment including the mapped three-dimensional model of the avatar; and
an emote renderer configured to, from the perspective of the virtual camera of the second user, render for display to the second user through the second device the emote attached to the three-dimensional model of the avatar, wherein the emote emits sound played by the second device to the second user, and wherein the sound is adjusted based on a distance of the avatar to the virtual camera of the second user within the three-dimensional virtual environment.
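Claim 17's four components (network interface, texture mapper, renderer, emote renderer) can be read as stages of a per-frame pipeline. The sketch below wires them together in that order; every class, method, and field name here is hypothetical, chosen only to mirror the claim language.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Avatar:
    pos: tuple            # avatar position in the virtual environment
    texture: object = None  # video frame currently mapped onto the model
    emote: str = ""         # e.g. "wave", attached to the avatar

@dataclass
class Device:
    avatars: dict = field(default_factory=dict)

    # network interface: receive the video stream and the emote specification
    def on_video_stream(self, user, frame, pos=(0.0, 0.0, 0.0)):
        self.avatars.setdefault(user, Avatar(pos=pos)).texture = frame

    def on_emote(self, user, emote):
        if user in self.avatars:
            self.avatars[user].emote = emote

    # renderer + emote renderer: build a draw list for one viewer's virtual
    # camera, with the emote's sound gain falling off with distance to it
    def render(self, camera_pos):
        out = []
        for user, a in self.avatars.items():
            gain = min(1.0, 1.0 / max(math.dist(a.pos, camera_pos), 1e-9))
            out.append({"user": user, "emote": a.emote,
                        "emote_gain": gain if a.emote else 0.0})
        return out
```

In this reading, the texture mapper's output lives in `Avatar.texture`, and the emote renderer's distance adjustment is the `emote_gain` field attached to each drawn avatar.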