CPC G06T 19/006 (2013.01) [G06T 7/13 (2017.01); G06T 17/20 (2013.01); G06V 10/44 (2022.01); H04L 65/1069 (2013.01); H04L 65/403 (2013.01); H04N 7/157 (2013.01); G06T 2200/24 (2013.01); G06T 2219/024 (2013.01)] | 20 Claims |
1. A computer-implemented method comprising:
initiating a remote conference session;
receiving, from one or more depth sensors associated with a host device, depth data associated with a three-dimensional (3D) environment;
receiving, from one or more image sensors associated with the host device, image data associated with the 3D environment;
determining a collaboration area around at least one automatically detected portion of a first user in the 3D environment, wherein the collaboration area is at least one of (i) a predefined 3D shape encompassing the at least one automatically detected portion of the first user, or (ii) determined based on a boundary of the at least one automatically detected portion of the first user, and wherein the predefined 3D shape or the boundary can be moved or re-sized responsive to user interaction to control a size or location of the collaboration area;
generating a first 3D representation of the at least one automatically detected portion of the first user based on the depth data, the image data, and the collaboration area; and
transmitting the first 3D representation for rendering within a first extended reality (XR) environment associated with the remote conference session.
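The claimed method can be illustrated with a minimal sketch. All names below (`detect_user_region`, `CollaborationArea`, the pinhole parameters, and the synthetic frame) are illustrative assumptions, not drawn from the patent: user detection is stood in for by a depth threshold, the collaboration area is a padded 2D bounding box around the detected region (a stand-in for the claimed predefined shape), and the 3D representation is a colored point cloud back-projected only from pixels inside that area.

```python
# Hypothetical sketch of the claim-1 pipeline on synthetic sensor data.
# Nothing here is the patent's actual implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class CollaborationArea:
    # Axis-aligned bounding box in sensor pixel coordinates, standing in for
    # the claimed predefined shape encompassing the detected user portion.
    x0: int; y0: int; x1: int; y1: int

    def resized(self, pad: int) -> "CollaborationArea":
        # Stand-in for the claimed user-driven re-sizing of the area.
        return CollaborationArea(self.x0 - pad, self.y0 - pad,
                                 self.x1 + pad, self.y1 + pad)

def detect_user_region(depth: np.ndarray, near: float = 1.5) -> np.ndarray:
    # Stand-in for automatic user detection: pixels closer than `near`
    # meters are treated as belonging to the user.
    return depth < near

def collaboration_area_from_mask(mask: np.ndarray, pad: int = 2) -> CollaborationArea:
    # Claimed alternative (ii): derive the area from the detected boundary.
    ys, xs = np.nonzero(mask)
    return CollaborationArea(int(xs.min()), int(ys.min()),
                             int(xs.max()), int(ys.max())).resized(pad)

def make_3d_representation(depth: np.ndarray, image: np.ndarray,
                           area: CollaborationArea) -> np.ndarray:
    # Back-project only pixels inside the collaboration area into a colored
    # point cloud (x, y, z, r, g, b) using a toy pinhole camera model.
    h, w = depth.shape
    y0, y1 = max(area.y0, 0), min(area.y1 + 1, h)
    x0, x1 = max(area.x0, 0), min(area.x1 + 1, w)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    z = depth[y0:y1, x0:x1]
    f = 100.0  # assumed focal length in pixels
    pts = np.stack([(xs - w / 2) * z / f, (ys - h / 2) * z / f, z], axis=-1)
    rgb = image[y0:y1, x0:x1]
    return np.concatenate([pts.reshape(-1, 3), rgb.reshape(-1, 3)], axis=1)

# Synthetic frame: a "user" blob at 1.0 m against a 3.0 m background.
depth = np.full((32, 32), 3.0)
depth[8:24, 10:22] = 1.0
image = np.zeros((32, 32, 3))
image[8:24, 10:22] = [0.8, 0.6, 0.5]

mask = detect_user_region(depth)
area = collaboration_area_from_mask(mask)
cloud = make_3d_representation(depth, image, area)
print(cloud.shape)  # one 6-vector (xyz + rgb) per pixel inside the area
```

The resulting `cloud` array is what would then be transmitted for rendering in the XR environment; a real system would use calibrated intrinsics and a learned person-segmentation model rather than a fixed depth threshold.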