US 12,440,416 B2
Double-blinded, randomized trial of augmented reality low-vision mobility and grasp aid
Mark S. Humayun, Los Angeles, CA (US); and Anastasios Angelopoulos, Los Angeles, CA (US)
Assigned to University of Southern California, Los Angeles, CA (US)
Appl. No. 17/298,589
Filed by UNIVERSITY OF SOUTHERN CALIFORNIA, Los Angeles, CA (US)
PCT Filed Dec. 2, 2019, PCT No. PCT/US2019/063924
§ 371(c)(1), (2) Date May 30, 2021,
PCT Pub. No. WO2020/113202, PCT Pub. Date Jun. 4, 2020.
Claims priority of provisional application 62/773,979, filed on Nov. 30, 2018.
Prior Publication US 2022/0015982 A1, Jan. 20, 2022
Int. Cl. A61H 3/06 (2006.01); G02B 27/01 (2006.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01); G06T 19/00 (2011.01)
CPC A61H 3/061 (2013.01) [G02B 27/0101 (2013.01); G02B 27/017 (2013.01); G06T 7/50 (2017.01); G06T 7/90 (2017.01); G06T 19/006 (2013.01); A61H 2201/165 (2013.01); G02B 2027/0127 (2013.01); G02B 2027/0138 (2013.01); G06T 2210/41 (2013.01); G06T 2219/2012 (2013.01)] 22 Claims
OG exemplary drawing
 
1. An augmented reality system for providing depth perspective to a low-vision user, the augmented reality system comprising:
a sensor system that provides spatial data of objects in a surrounding environment of the low-vision user, wherein the sensor system includes at least one electromagnetic sensor, optical sensor, or video sensor;
a computer processor system that calculates spatial information of the objects from the spatial data received from the sensor system, the computer processor system determining a depth-to-color mapping in which distance of objects from the low-vision user is mapped to a predetermined viewable representation, wherein the depth-to-color mapping includes a colored wireframe with edge-enhancement; and
a head-mountable display that displays the depth-to-color mapping to the low-vision user, wherein distances of the objects from the low-vision user are rendered to allow at least partial viewability of the objects by the low-vision user, and wherein the depth-to-color mapping assists in identifying objects by applying a pseudocolor map, thereby facilitating navigation and grasp by the low-vision user, the pseudocolor map including discrete color changes to indicate varying distances of objects, thereby ensuring partial viewability and object detection for the low-vision user, the computer processor system further being configured to construct a triangular point mesh using a geometry shader rather than continuously rendering a surface over the real world, wherein only an object's edges are represented with a wireframe so that the wireframe does not obstruct text written on an object with a color overlay, and wherein the depth-to-color mapping is limited to objects within a maximum distance of 6 feet from the low-vision user to prevent sensory overload.
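The discrete depth-to-color mapping recited in the claim can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the band edges and RGB colors below are hypothetical choices; only the 6-foot cutoff and the use of discrete (rather than continuous) color changes come from the claim itself.

```python
MAX_DISTANCE_FT = 6.0  # claim limits the mapping to objects within 6 feet

# Hypothetical discrete bands: (upper bound in feet, RGB color).
# Nearer objects get "hotter" colors so distance reads at a glance.
BANDS = [
    (2.0, (255, 0, 0)),    # red: within grasp range
    (4.0, (255, 255, 0)),  # yellow: near obstacles
    (6.0, (0, 255, 0)),    # green: edge of the rendered range
]

def depth_to_color(distance_ft):
    """Map a depth sample (in feet) to a discrete RGB color, or None
    if the sample lies beyond the 6-foot cutoff (left unrendered to
    prevent sensory overload, per the claim)."""
    if distance_ft > MAX_DISTANCE_FT:
        return None
    for upper, color in BANDS:
        if distance_ft <= upper:
            return color
    return None
```

In a full system such a function would run per vertex of the triangular point mesh, coloring only the wireframe edges so the overlay does not occlude text on object surfaces.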