US 11,748,679 B2
Extended reality based immersive project workspace creation
Vibhu Saujanya Sharma, Bangalore (IN); Rohit Mehra, Delhi (IN); Vikrant Kaulgud, Pune (IN); Sanjay Podder, Thane (IN); and Adam Patten Burden, Tampa, FL (US)
Assigned to ACCENTURE GLOBAL SOLUTIONS LIMITED, Dublin (IE)
Filed by ACCENTURE GLOBAL SOLUTIONS LIMITED, Dublin (IE)
Filed on May 7, 2020, as Appl. No. 16/869,388.
Claims priority of application No. 201911018694 (IN), filed on May 10, 2019.
Prior Publication US 2020/0356917 A1, Nov. 12, 2020
Int. Cl. G06Q 10/0631 (2023.01); G06Q 10/105 (2023.01); G06Q 10/10 (2023.01); G06T 19/00 (2011.01); G06Q 10/067 (2023.01); G06F 16/2457 (2019.01); G06Q 10/101 (2023.01)
CPC G06Q 10/06316 (2013.01) [G06Q 10/103 (2013.01); G06Q 10/105 (2013.01); G06T 19/006 (2013.01); G06F 16/24578 (2019.01); G06Q 10/067 (2013.01); G06Q 10/101 (2013.01); G06T 2219/024 (2013.01)] 18 Claims
[OG exemplary drawing]
 
1. An extended reality based immersive project workspace creation apparatus comprising:
a workplace understanding analyzer, executed by at least one hardware processor, to:
obtain sensing data from at least one depth sensor of a mixed reality head mounted display device worn or utilized by a user to perform environment spatial mapping of an environment of the user;
perform the environment spatial mapping by applying a deep learning model to the sensing data to identify features in the environment of the user; and
analyze, based on the environment spatial mapping, the environment of the user to identify objects, a particular person, and at least one area in the environment of the user that is unobstructed by the objects and the particular person, wherein the particular person is a team member of a project and is assigned a project task;
an insights prioritization analyzer, executed by the at least one hardware processor, to:
determine, based on the objects and the particular person identified in the environment of the user, a plurality of insights including an insight showing the project task being performed by the particular person; and
prioritize the plurality of insights based on a plurality of prioritization criteria; and
an insights rendering and interaction controller, executed by the at least one hardware processor, to:
based on the prioritization of the plurality of insights, render, in a three-dimensional (3D) graphical user interface of the mixed reality head mounted display device, the insight showing the project task being performed by the particular person, wherein the insight is rendered in an area of the 3D graphical user interface that overlays the at least one area in the environment of the user that is unobstructed by the objects and the particular person; and
control, based on a gesture-based interaction of the user with the rendered insight, the rendering of the insight by overlaying the insight relative to the user, wherein the gesture-based interaction includes at least one of hand manipulation, eye movement, or speech to manipulate the rendering of the insight.
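
The claim above recites function, not implementation. As a rough illustration of the workplace understanding analyzer element, here is a minimal Python sketch. It assumes a depth frame arrives as a 2D NumPy array from the headset's depth sensor; the `detect_occupants` stub is a hypothetical stand-in for the claimed deep learning model (a real system would run an actual object and person detector), and `find_unobstructed_area` scans an occupancy grid for a window free of the detected objects and person.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "desk", "chair", "person"
    box: tuple   # (row0, col0, row1, col1) in depth-image coordinates

def detect_occupants(depth: np.ndarray) -> list[Detection]:
    """Hypothetical stand-in for the claimed deep learning model.

    A real system would run a trained detector over fused RGB-D data;
    here we simply flag depth pixels closer than 2 m as an obstruction
    so the sketch stays self-contained.
    """
    mask = depth < 2.0
    if not mask.any():
        return []
    rows, cols = np.where(mask)
    return [Detection("obstruction", (rows.min(), cols.min(), rows.max(), cols.max()))]

def find_unobstructed_area(depth: np.ndarray, detections: list[Detection], size: int = 8):
    """Return the top-left corner of a size x size window free of detections."""
    occupied = np.zeros(depth.shape, dtype=bool)
    for d in detections:
        r0, c0, r1, c1 = d.box
        occupied[r0:r1 + 1, c0:c1 + 1] = True
    h, w = occupied.shape
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            if not occupied[r:r + size, c:c + size].any():
                return (r, c)   # unobstructed region found
    return None                 # nowhere to anchor an insight

# Toy depth frame: everything 3 m away except one near obstruction.
depth = np.full((32, 32), 3.0)
depth[10:20, 10:20] = 1.2       # someone standing about 1.2 m in front
dets = detect_occupants(depth)
print(find_unobstructed_area(depth, dets))   # (0, 0)
```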
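For the insights prioritization analyzer element, the claim leaves the "plurality of prioritization criteria" unspecified. The sketch below assumes three hypothetical criteria (task urgency, relevance to the people currently in view, and recency of the underlying data) combined as a weighted score; the weights are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    text: str
    urgency: float    # 0..1, e.g. derived from task due dates
    relevance: float  # 0..1, boosted when the assignee is in view
    recency: float    # 0..1, decays with the age of the data

# Hypothetical weights; the patent does not specify the criteria
# or how they are combined.
WEIGHTS = {"urgency": 0.5, "relevance": 0.3, "recency": 0.2}

def score(i: Insight) -> float:
    return (WEIGHTS["urgency"] * i.urgency
            + WEIGHTS["relevance"] * i.relevance
            + WEIGHTS["recency"] * i.recency)

def prioritize(insights: list[Insight]) -> list[Insight]:
    """Highest-scoring insights are rendered first, or most prominently."""
    return sorted(insights, key=score, reverse=True)

insights = [
    Insight("Build pipeline failing on main", urgency=0.9, relevance=0.4, recency=1.0),
    Insight("Task 'API review' in progress by teammate in view",
            urgency=0.6, relevance=1.0, recency=0.8),
]
for i in prioritize(insights):
    print(f"{score(i):.2f}  {i.text}")
```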
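For the insights rendering and interaction controller element, the following sketch dispatches the three recited interaction modes (hand manipulation, eye movement, speech) to reposition a rendered insight relative to the wearer. The `InsightPanel` type and the event shapes are illustrative assumptions, not the patent's API; a real HMD runtime would supply pose-tracked anchors and its own gesture events.

```python
from dataclasses import dataclass, field

@dataclass
class InsightPanel:
    """A rendered insight anchored at a 3D position in the wearer's view."""
    text: str
    position: list = field(default_factory=lambda: [0.0, 0.0, 2.0])  # x, y, z metres

    def handle(self, event: dict) -> None:
        # The claim lists hand manipulation, eye movement, and speech;
        # the event shapes below are invented for this sketch.
        kind = event["kind"]
        if kind == "hand_drag":
            dx, dy, dz = event["delta"]
            self.position = [self.position[0] + dx,
                             self.position[1] + dy,
                             self.position[2] + dz]
        elif kind == "gaze_focus":
            # Pull the panel slightly toward the wearer when gazed at.
            self.position[2] = max(0.5, self.position[2] - 0.25)
        elif kind == "speech" and event["command"] == "pin here":
            # Re-anchor the overlay relative to the wearer's current pose.
            self.position = list(event["head_pose"])

panel = InsightPanel("Task 'API review' in progress")
panel.handle({"kind": "hand_drag", "delta": (0.1, 0.0, 0.0)})
panel.handle({"kind": "gaze_focus"})
panel.handle({"kind": "speech", "command": "pin here", "head_pose": (0.0, 1.6, 1.0)})
print(panel.position)   # [0.0, 1.6, 1.0]
```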