US 12,444,315 B2
Visualizing causality in mixed reality for manual task learning
Karthik Ramani, West Lafayette, IN (US); Jingyu Shi, West Lafayette, IN (US); and Rahul Jain, West Lafayette, IN (US)
Assigned to Purdue Research Foundation, West Lafayette, IN (US)
Filed by Purdue Research Foundation, West Lafayette, IN (US)
Filed on Oct. 23, 2023, as Appl. No. 18/492,156.
Claims priority of provisional application 63/479,810, filed on Jan. 13, 2023.
Claims priority of provisional application 63/418,609, filed on Oct. 23, 2022.
Prior Publication US 2024/0135831 A1, Apr. 25, 2024
Prior Publication US 2024/0233563 A9, Jul. 11, 2024
Int. Cl. G09B 5/00 (2006.01); G06F 3/01 (2006.01); G06T 19/00 (2011.01); G09B 5/02 (2006.01); G09B 19/00 (2006.01)
CPC G09B 5/02 (2013.01) [G06F 3/011 (2013.01); G06T 19/006 (2013.01); G09B 19/003 (2013.01); G06T 2200/24 (2013.01)] 15 Claims
OG exemplary drawing
 
1. A method for generating instructional content, the method comprising:
storing, in a memory, a plurality of virtual models representing virtual hands and at least one virtual object;
generating, with a processor, a sequence of pose data for the virtual hands and the at least one virtual object by recording, with at least one sensor, a demonstration by a user of a task within a real-world environment in which the user interacts with at least one real-world object corresponding to the at least one virtual object;
segmenting, with the processor, the sequence of pose data with at least three levels of granularity to define (i) a plurality of segments of the sequence of pose data, each respective segment of the plurality of segments corresponding to a respective step of a plurality of steps of the task, (ii) a plurality of groups of segments from the plurality of segments, each group of segments corresponding to a respective group of steps from the plurality of steps, and (iii) a respective plurality of subsegments for each respective step of the plurality of steps, each subsegment corresponding to a sub-step of the respective step;
defining, with the processor, (i) first causal relationships between steps of the plurality of steps of the task, (ii) second causal relationships between the groups of steps from the plurality of steps, and (iii) third causal relationships between the sub-steps of steps from the plurality of steps; and
generating, with the processor, graphical content configured to be displayed in an augmented reality graphical user interface to instruct a further user how to perform the task, based on the segmented sequence of pose data and the defined causal relationships, the graphical content being generated by rendering the plurality of virtual models posed according to the sequence of pose data for the virtual hands and the at least one virtual object, the graphical content including (i) a plurality of first graphical depictions of each step from the plurality of steps, (ii) a plurality of second graphical depictions of each group of steps from the plurality of steps, and (iii) a plurality of third graphical depictions of each sub-step of each step from the plurality of steps.
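For readers implementing the segmenting step of claim 1, the following is a minimal sketch, in Python, of how the recorded pose sequence might be organized at the three recited levels of granularity: groups of steps, steps, and sub-steps, each expressed as a frame range over the pose sequence. All class and field names here are hypothetical illustrations, not taken from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class PoseFrame:
        """One recorded sample of pose data (hypothetical layout)."""
        timestamp: float
        hand_poses: list    # per-hand joint transforms
        object_poses: dict  # virtual-object id -> 6-DoF pose

    @dataclass
    class SubStep:
        """Finest granularity: a subsegment of one step."""
        name: str
        start: int          # index into the pose-frame sequence
        end: int            # exclusive

    @dataclass
    class Step:
        """Middle granularity: one step of the task."""
        name: str
        start: int
        end: int
        sub_steps: list[SubStep] = field(default_factory=list)

    @dataclass
    class StepGroup:
        """Coarsest granularity: a group of related steps."""
        name: str
        steps: list[Step] = field(default_factory=list)

A toy task under this layout might look like:

    reach = SubStep("reach for screw", 0, 40)
    grasp = SubStep("grasp screw", 40, 90)
    pick = Step("pick up screw", 0, 90, [reach, grasp])
    fasten = Step("drive screw", 90, 220)
    assembly = StepGroup("attach bracket", [pick, fasten])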
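The defining step recites first, second, and third causal relationships at the group, step, and sub-step levels. One plausible representation, again a hedged sketch rather than the patent's actual method, is a separate directed graph per granularity level, where an edge records that one unit causes or enables another:

    from collections import defaultdict

    class CausalGraph:
        """Directed edges 'cause enables effect' at one granularity."""
        def __init__(self):
            self.edges = defaultdict(set)

        def add(self, cause: str, effect: str):
            self.edges[cause].add(effect)

        def prerequisites_met(self, unit: str, completed: set) -> bool:
            """True if every unit with an edge into `unit` is done."""
            return all(unit not in effects or cause in completed
                       for cause, effects in self.edges.items())

Three instances of such a graph, one over step groups, one over steps, and one over sub-steps, would correspond to the first, second, and third causal relationships of claim 1, and the prerequisite check supports gating instructional content on the learner's progress.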
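Finally, the generating step renders the virtual models posed per frame for each segment. A minimal sketch, where render_models is a placeholder for whatever AR rendering call the system actually uses:

    def render_models(hand_poses, object_poses):
        """Placeholder: pose the stored virtual models and rasterize
        one frame for the augmented reality user interface."""
        raise NotImplementedError

    def depictions_for_step(frames, step):
        # One rendered depiction per pose sample in the step's
        # segment; the same slicing yields the second and third
        # graphical depictions for groups and sub-steps.
        return [render_models(f.hand_poses, f.object_poses)
                for f in frames[step.start:step.end]]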