CPC G09B 5/02 (2013.01) [G06F 16/2365 (2019.01); G06N 20/00 (2019.01); G06T 11/00 (2013.01); G09B 19/003 (2013.01)] | 18 Claims |
1. A method comprising:
capturing, by one or more processors of a computer system, a user interaction with at least one surrounding object;
determining, by the one or more processors of the computer system, a context of the user interaction based on machine learning technology, wherein the determining the context includes determining whether a user is actively taking steps associated with failure or frustration, wherein the context includes a contextual situation, including visual recognition of a location or environment; and
feeding, by the one or more processors of the computer system, the determined context of the user interaction to a machine learning ontology tree;
identifying, by the one or more processors of the computer system, a tutorial relevant to the user using the machine learning ontology tree in real time to match the determined context of the user interaction based on a frustration level of the user, wherein the relevant tutorial has been updated by the machine learning technology based on a previous user interaction to determine differences in structure between structural components to be interacted with by the user;
providing, by the one or more processors of the computer system, the identified relevant tutorial to an augmented reality device of the user;
overlaying, by the one or more processors of the computer system, the identified relevant tutorial on the augmented reality device of the user; and
highlighting, by the one or more processors of the computer system, the structural components to be interacted with by the user and the differences in structure during the identified relevant tutorial on the augmented reality device of the user.
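The claimed steps can be sketched as a simple pipeline: a determined context (location, frustration level, failure indication) is fed to an ontology-like structure that matches it to a tutorial for overlay. The sketch below is purely illustrative — every class, method, and threshold name (`Context`, `OntologyTree`, `min_frustration`, etc.) is an assumption for demonstration, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed tutorial-selection pipeline.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Context:
    location: str             # visual recognition of a location or environment
    frustration_level: float  # 0.0 (calm) .. 1.0 (highly frustrated)
    failing: bool             # user actively taking steps associated with failure


@dataclass
class Tutorial:
    topic: str
    min_frustration: float    # frustration threshold at which this tutorial applies


class OntologyTree:
    """Toy stand-in for the machine learning ontology tree: maps a
    recognized location to candidate tutorials, then selects by the
    user's frustration level."""

    def __init__(self) -> None:
        self._by_location: Dict[str, List[Tutorial]] = {}

    def feed(self, location: str, tutorial: Tutorial) -> None:
        self._by_location.setdefault(location, []).append(tutorial)

    def match(self, ctx: Context) -> Optional[Tutorial]:
        candidates = self._by_location.get(ctx.location, [])
        # Of the tutorials the user is frustrated enough to need,
        # pick the one with the highest matching threshold.
        eligible = [t for t in candidates
                    if ctx.frustration_level >= t.min_frustration]
        return max(eligible, key=lambda t: t.min_frustration, default=None)


def select_tutorial(tree: OntologyTree, ctx: Context) -> Optional[Tutorial]:
    # Only intervene when the context indicates failure or frustration.
    if not ctx.failing:
        return None
    return tree.match(ctx)


tree = OntologyTree()
tree.feed("kitchen", Tutorial("basic knife skills", 0.2))
tree.feed("kitchen", Tutorial("step-by-step recipe rescue", 0.7))

ctx = Context(location="kitchen", frustration_level=0.8, failing=True)
chosen = select_tutorial(tree, ctx)
```

In this sketch, the highly frustrated user in the "kitchen" context is matched to the more hands-on tutorial; the overlay and highlighting steps on the AR device are outside the scope of this fragment.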