CPC G09B 21/003 (2013.01) [G06F 3/014 (2013.01); G06F 3/016 (2013.01); G09B 21/006 (2013.01); G09B 21/009 (2013.01); H04S 7/303 (2013.01); H04S 2400/11 (2013.01)]
20 Claims
1. A method for presenting transformed environmental information to a user having an impairment in perceiving information that is received by the user in a specific sensory modality, the method comprising:
automatically accessing stored profile information associated with the user, the profile information comprising a sensory-input setting identifying that the user has the impairment in perceiving information that is received by the user in the specific sensory modality;
receiving, from an artificial reality (“XR”) feed, information associated with the user's environment, the received environmental information including specific environmental information of a specific virtual object, the specific environmental information being received from the XR feed in the specific sensory modality;
generating, based on the received information, a virtual model of the user's environment, the virtual model comprising identifications of locations of multiple real-world objects and identifications of locations of multiple virtual objects, including the specific virtual object, wherein a location of the user is tracked in relation to the virtual model;
transforming, based at least in part on the sensory-input setting, the virtual model, and the tracked location of the user, at least some of the received specific environmental information of the specific virtual object to be more readily perceivable by the user, the transforming being performed by A) determining that the location of the user has changed relative to the specific virtual object in the virtual model, and B) one or more of:
generating spatial auditory data with characteristics based on one or more of: the distance between the user and the specific virtual object, the direction of the specific virtual object relative to the user, the size of the specific virtual object, the color of the specific virtual object, or any combination thereof;
generating haptic data, configured for output to a wearable glove, based on a location of the wearable glove touching a location of the specific virtual object;
generating haptic data, configured for output to an oral device, that signifies a location of the specific virtual object relative to the user;
generating visual data with characteristics based on one or more of: the distance between the user and the specific virtual object, the direction of the specific virtual object relative to the user, the size of the specific virtual object, the color of the specific virtual object, or any combination thereof; or
any combination thereof; and
presenting the transformed environmental information to the user.
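The sketches below are editorial illustrations only; they are not part of the claim and do not limit it. First, a minimal Python sketch of the recited virtual model: identifications of locations of real-world and virtual objects, with the user's location tracked in relation to the model and a check corresponding to step A) of the transforming limitation. All names (ModeledObject, VirtualModel, user_moved_relative_to), the 3-D coordinate representation, and the movement threshold are assumptions.

```python
import math
from dataclasses import dataclass, field

@dataclass
class ModeledObject:
    """A real-world or virtual object placed in the virtual model (hypothetical)."""
    object_id: str
    position: tuple[float, float, float]   # x, y, z in meters
    is_virtual: bool
    size: float = 1.0                      # rough bounding radius, meters
    color: tuple[int, int, int] = (255, 255, 255)

@dataclass
class VirtualModel:
    """Holds object locations and the user's tracked location."""
    objects: dict[str, ModeledObject] = field(default_factory=dict)
    user_position: tuple[float, float, float] = (0.0, 0.0, 0.0)

    def distance_to(self, object_id: str) -> float:
        return math.dist(self.user_position, self.objects[object_id].position)

    def user_moved_relative_to(self, object_id: str,
                               new_position: tuple[float, float, float],
                               threshold: float = 0.05) -> bool:
        """True when the user's tracked location has changed relative to the
        given object by more than `threshold` meters (a distance-only
        simplification; direction changes could be checked similarly)."""
        before = self.distance_to(object_id)
        self.user_position = new_position
        return abs(self.distance_to(object_id) - before) > threshold
```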
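Next, a sketch of the spatial-auditory-data branch, reusing the hypothetical ModeledObject above. The specific mappings (distance to gain, direction to stereo pan, size to pitch, color to timbre) are one possible encoding, not the claimed method.

```python
import math

def spatial_audio_params(user_pos, user_heading_rad, obj):
    """Map the object's distance, direction, size, and color to audio
    characteristics a spatializer could consume (all mappings assumed)."""
    dx = obj.position[0] - user_pos[0]
    dz = obj.position[2] - user_pos[2]
    distance = math.hypot(dx, dz)
    azimuth = math.atan2(dx, dz) - user_heading_rad   # radians, 0 = straight ahead
    brightness = sum(obj.color) / (3 * 255)           # 0..1 from RGB color
    return {
        "gain": 1.0 / (1.0 + distance),                  # quieter when farther
        "pan": max(-1.0, min(1.0, math.sin(azimuth))),   # -1 left .. +1 right
        "pitch_hz": 220.0 + 440.0 / max(obj.size, 0.1),  # smaller -> higher pitch
        "timbre": brightness,                            # lighter color -> brighter tone
    }
```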
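For the glove-haptics branch, a hedged sketch in which haptic data is generated when a tracked location of the glove touches the object's location (here approximated by a bounding sphere). The fingertip naming and the intensity curve are assumptions.

```python
import math

def glove_haptics(glove_fingertips, obj, contact_radius=0.02):
    """Per-fingertip vibration intensities (0..1): a fingertip within
    `contact_radius` meters of the object's bounding sphere vibrates,
    harder as it presses deeper into the virtual surface."""
    intensities = {}
    for finger, tip_pos in glove_fingertips.items():
        gap = math.dist(tip_pos, obj.position) - obj.size
        depth = contact_radius - gap
        intensities[finger] = min(1.0, max(0.0, depth / contact_radius))
    return intensities
```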
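The oral-device branch could, for example, drive an intraoral actuator grid whose pattern signifies the object's location relative to the user; the grid layout and the direction/distance encoding below are purely assumed.

```python
import math

def oral_device_pattern(user_pos, user_heading_rad, obj, grid=(4, 4)):
    """Pick the actuator on a hypothetical intraoral grid: the column
    encodes the object's direction, the row its distance (1 m per row)."""
    dx = obj.position[0] - user_pos[0]
    dz = obj.position[2] - user_pos[2]
    azimuth = math.atan2(dx, dz) - user_heading_rad              # relative bearing
    col = int((azimuth + math.pi) / (2 * math.pi) * grid[1]) % grid[1]
    row = min(grid[0] - 1, int(math.hypot(dx, dz)))
    return (row, col)   # actuator to drive
```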
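Finally, a sketch of the visual-data branch, e.g., for a user with low vision: rendering parameters are exaggerated based on the recited distance, size, and color. The magnification and outline heuristics are assumptions.

```python
import math

def visual_params(user_pos, obj):
    """Hypothetical enhanced-rendering parameters: distant or small objects
    get magnified and outlined in a color that contrasts with their own."""
    distance = math.dist(user_pos, obj.position)
    # Perceived luminance of the object's RGB color (Rec. 601 weights).
    luminance = (0.299 * obj.color[0] + 0.587 * obj.color[1]
                 + 0.114 * obj.color[2]) / 255
    outline = (0, 0, 0) if luminance > 0.5 else (255, 255, 0)
    return {
        "magnification": max(1.0, distance / max(obj.size, 0.1)),
        "outline_px": 2 + int(distance),   # thicker outline when farther away
        "outline_color": outline,          # high contrast against object color
    }
```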