US 11,929,169 B2
Personalized sensory feedback
Caleb Miles, Columbia, MO (US); Shikhar Kwatra, San Jose, CA (US); Jennifer L. Szkatulski, Rochester, MI (US); and Elio Andres Sanabria Echeverria, San Francisco, CA (US)
Assigned to Kyndryl, Inc., New York, NY (US)
Filed by KYNDRYL, INC., New York, NY (US)
Filed on Feb. 9, 2022, as Appl. No. 17/650,438.
Prior Publication US 2023/0253105 A1, Aug. 10, 2023.
Int. Cl. G16H 40/67 (2018.01); G06N 20/00 (2019.01)
CPC G16H 40/67 (2018.01) [G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
maintaining user-specific parameters for provision of sensory feedback to a user in extended reality, the user-specific parameters applying to specific contextual situations and dictating levels of sensory feedback to provide via one or more stimulus devices in the specific contextual situations, wherein the maintaining comprises using feedback captured from user responses to generated prompts to the user as input to train a Multi-Agent Reinforcement Learning (MARL) artificial intelligence (AI) model to identify the user-specific parameters applying to the specific contextual situations;
based on an ascertained contextual situation of the user interacting in a target extended reality environment, selecting a set of sensory feedback level parameters for provision of sensory feedback to the user in the target extended reality environment, wherein the sensory feedback is in response to generating a plurality of questions about the user's comfort level, wherein the selecting comprises applying the MARL AI model to features of the ascertained contextual situation and obtaining as an output of the MARL AI model a classification of the set of sensory feedback level parameters, wherein the sensory feedback is personalized to the user based on employing a reward function parameterized on factors including sentiment data and accelerometer data, wherein the sentiment data is detected by an NLP engine and the accelerometer data, including the user's movement and orientation in space, is captured with an accelerometer;
training the MARL AI model in real time using the sensory feedback from a user apparatus producing the accelerometer data, the sensory feedback being received from the user in real time, including new reactions, iteratively adjusting the set of sensory feedback level parameters of the MARL AI model based on the sensory feedback; and
automatically controlling, in the provision of the sensory feedback to the user in the target extended reality environment, at least one stimulus device in the target extended reality environment based on one or more of the selected parameters, the automatically controlling comprising electronically communicating with the at least one stimulus device to control one or more stimuli provided to the user by the at least one stimulus device.
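A minimal Python sketch of a reward function parameterized on sentiment data and accelerometer data, as recited in claim 1: the weights, the 1 g resting baseline, and the convention that an external NLP engine supplies a sentiment score in the range -1 to 1 are illustrative assumptions, not details taken from the claim.

from dataclasses import dataclass
import math

@dataclass
class AccelerometerSample:
    x: float  # acceleration along each axis, in g
    y: float
    z: float

def movement_intensity(sample: AccelerometerSample) -> float:
    """Deviation of the acceleration magnitude from a 1 g resting baseline."""
    magnitude = math.sqrt(sample.x ** 2 + sample.y ** 2 + sample.z ** 2)
    return abs(magnitude - 1.0)

def reward(sentiment_score: float, sample: AccelerometerSample,
           w_sentiment: float = 0.7, w_motion: float = 0.3) -> float:
    """Combine NLP sentiment (-1 to 1) with motion intensity into a scalar reward.

    Positive sentiment about the current stimuli raises the reward; abrupt
    movement (a possible startle or discomfort reaction) lowers it. The
    weights are hypothetical.
    """
    return w_sentiment * sentiment_score - w_motion * movement_intensity(sample)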
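The multi-agent selection and real-time adjustment steps could be approximated with one learner per stimulus device acting over discretized contextual features. The sketch below uses independent tabular Q-learning as a simplified stand-in for the claimed MARL AI model; the feedback levels, device names, and hyperparameters are assumptions made for illustration.

import random
from collections import defaultdict

FEEDBACK_LEVELS = [0, 1, 2, 3]  # off, low, medium, high

class StimulusAgent:
    """One learner per stimulus device; together they form the multi-agent system."""

    def __init__(self, alpha: float = 0.1, gamma: float = 0.9, epsilon: float = 0.1):
        self.q = defaultdict(lambda: {level: 0.0 for level in FEEDBACK_LEVELS})
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_level(self, context: tuple) -> int:
        """Classify the contextual situation into a sensory feedback level parameter."""
        if random.random() < self.epsilon:  # occasional exploration
            return random.choice(FEEDBACK_LEVELS)
        return max(self.q[context], key=self.q[context].get)

    def update(self, context: tuple, level: int, reward_value: float,
               next_context: tuple) -> None:
        """Iteratively adjust the stored values from real-time user feedback."""
        best_next = max(self.q[next_context].values())
        td_target = reward_value + self.gamma * best_next
        self.q[context][level] += self.alpha * (td_target - self.q[context][level])

# One agent per stimulus device in the target extended reality environment
# (device names are hypothetical):
agents = {"haptic_vest": StimulusAgent(), "controller_rumble": StimulusAgent()}

A single real-time step would then select a level for the ascertained context, apply it, observe the user's reaction, compute the reward from the captured sentiment and accelerometer data, and call update with the new context.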
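The automatic control step, electronically communicating with a stimulus device to set the stimuli it provides, might be realized as in the sketch below; the HTTP endpoint and JSON payload are hypothetical, since the claim does not specify a transport.

import json
import urllib.request

def apply_feedback_level(device_url: str, level: int) -> None:
    """Send the selected sensory feedback level to a stimulus device over HTTP."""
    payload = json.dumps({"intensity": level}).encode("utf-8")
    request = urllib.request.Request(
        device_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=2.0) as response:
        response.read()  # the device acknowledges the new setting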