CPC G06Q 10/06316 (2013.01) [G06Q 10/103 (2013.01); G06Q 10/105 (2013.01); G06T 19/006 (2013.01); G06F 16/24578 (2019.01); G06Q 10/067 (2013.01); G06Q 10/101 (2013.01); G06T 2219/024 (2013.01)]. 18 Claims.
1. An extended reality based immersive project workspace creation apparatus comprising:
a workplace understanding analyzer, executed by at least one hardware processor, to:
obtain sensing data from at least one depth sensor of a mixed reality head mounted display device worn or utilized by a user to perform environment spatial mapping of an environment of the user;
perform the environment spatial mapping using the sensing data by utilizing a deep learning model to identify features in the environment of the user; and
analyze, based on the performance of the environment spatial mapping, the environment of the user to identify objects, a particular person, and at least one area in the environment of the user that is unobstructed by the objects and the particular person, wherein the particular person is a team member of a project and assigned a project task;
an insights prioritization analyzer, executed by the at least one hardware processor, to:
determine, based on the objects and the particular person identified in the environment of the user, a plurality of insights including an insight showing the project task being performed by the particular person; and
prioritize the plurality of insights based on a plurality of prioritization criteria; and
an insights rendering and interaction controller, executed by the at least one hardware processor, to:
based on the prioritization of the plurality of insights, render, in a three-dimensional (3D) graphical user interface of the mixed reality head mounted display device, the insight showing the project task being performed by the particular person, wherein the insight is rendered in an area of the 3D graphical user interface that overlays the at least one area in the environment of the user that is unobstructed by the objects and the particular person; and
control, based on a gesture-based interaction of the user with the rendered insight, the rendering of the insight by overlaying the insight relative to the user, wherein the gesture-based interaction includes at least one of hand manipulation, eye movement, or speech to manipulate the rendering of the insight.
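The claimed insights prioritization analyzer ranks the determined insights against "a plurality of prioritization criteria" before the rendering controller places them in unobstructed areas of the 3D interface. The claim does not fix the criteria or the ranking function, so the following is only a minimal illustrative sketch: the `Insight` fields (`relevance`, `urgency`, `recency`), the weights, and the `prioritize` function are all hypothetical names invented here, not terms from the claim.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """One determined insight; criteria fields are illustrative assumptions."""
    label: str
    relevance: float  # e.g., tie to the identified person/task in view
    urgency: float    # e.g., pressure from the assigned project task
    recency: float    # e.g., freshness of the underlying project data

def prioritize(insights, weights=(0.5, 0.3, 0.2)):
    """Rank insights by a weighted score over the prioritization criteria.

    The weighted-sum scoring is an assumption standing in for the
    unspecified 'plurality of prioritization criteria' in the claim.
    """
    w_rel, w_urg, w_rec = weights
    def score(i):
        return w_rel * i.relevance + w_urg * i.urgency + w_rec * i.recency
    # Highest-scoring insights are rendered first in the 3D interface.
    return sorted(insights, key=score, reverse=True)
```

In this sketch, the rendering controller would consume the head of the returned list and overlay those insights onto the unobstructed areas identified by the workplace understanding analyzer; any real implementation could substitute a different scoring model without changing the claimed pipeline.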