US 12,008,723 B2
Depth plane selection for multi-depth plane display systems by user categorization
Samuel A. Miller, Hollywood, FL (US); Lomesh Agarwal, Fremont, CA (US); Lionel Ernest Edwin, Hollywood, FL (US); Ivan Li Chuen Yeoh, Wesley Chapel, FL (US); Daniel Farmer, Verdi, NV (US); Sergey Fyodorovich Prokushkin, Campbell, CA (US); Yonatan Munk, Fort Lauderdale, FL (US); Edwin Joseph Selker, Palo Alto, CA (US); Erik Fonseka, Biel (CH); Paul M. Greco, Parkland, FL (US); Jeffrey Scott Sommers, Mountain View, CA (US); Bradley Vincent Stuart, Fort Lauderdale, FL (US); Shiuli Das, Sunnyvale, CA (US); and Suraj Manjunath Shanbhag, Santa Clara, CA (US)
Assigned to Magic Leap, Inc., Plantation, FL (US)
Filed by Magic Leap, Inc., Plantation, FL (US)
Filed on Oct. 7, 2022, as Appl. No. 17/962,289.
Application 17/962,289 is a continuation of application No. 16/530,904, filed on Aug. 2, 2019, granted, now Pat. No. 11,468,640.
Claims priority of provisional application 62/875,474, filed on Jul. 17, 2019.
Claims priority of provisional application 62/714,649, filed on Aug. 3, 2018.
Prior Publication US 2023/0037046 A1, Feb. 2, 2023
Int. Cl. G06F 3/01 (2006.01); F21V 8/00 (2006.01); G02B 27/01 (2006.01); G06T 19/00 (2011.01)
CPC G06T 19/006 (2013.01) [G02B 6/0076 (2013.01); G02B 27/0172 (2013.01); G06F 3/013 (2013.01); G02B 2027/014 (2013.01); G06T 2200/24 (2013.01)] 20 Claims
OG exemplary drawing
 
1. An augmented reality display system configured to present virtual image content on a plurality of depth planes, the augmented reality display system comprising:
a waveguide configured to present the virtual image content by outputting light to a wearer, the waveguide further configured to pass light from the world into an eye of the wearer;
an imaging device configured to capture images of eyes of the wearer; and
at least one processor configured to:
determine whether the wearer is a calibrated user or a guest user based at least in part on images of the eyes of the wearer from the imaging device;
based on determining that the wearer is a calibrated user:
load pre-existing user depth plane switching calibration information;
identify a fixation point of the user based upon the pre-existing user depth plane switching calibration information; and
switch the virtual image content to be presented at a depth plane that corresponds to the fixation point; and
based on determining that the wearer is a guest user:
determine the interpupillary distance of the guest user;
calculate an estimated fixation point of the eyes of the guest user based upon the determined interpupillary distance, wherein the estimated fixation point is a point in three-dimensional space on which the eyes of the guest user are focused;
determine a plurality of system-defined volumes such that a field of view of the guest user is divided into the plurality of system-defined volumes, wherein each system-defined volume spans a different three-dimensional portion of the field of view, and wherein at least one of a size or a shape of the volumes is determined based on weighted factors that include an application most recently utilized by the guest user;
determine that the estimated fixation point is within a particular one of the plurality of system-defined volumes that includes multiple depth planes of the plurality of depth planes;
responsive to determining that the system-defined volume includes at least two virtual objects of the virtual image content to be presented, switch the virtual image content, including the at least two virtual objects, to be presented at a depth plane that corresponds to the estimated fixation point; and
responsive to determining that the system-defined volume includes a single virtual object of the virtual image content to be presented, switch the virtual image content, including the single virtual object, to be presented at a depth plane that is specified by information included in the virtual image content.
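The guest-user branch of the claim can be read as a small decision procedure: triangulate a fixation point from the interpupillary distance, locate the system-defined volume containing it, and then pick a depth plane depending on how many virtual objects that volume holds. The sketch below illustrates that control flow only; all names, the depth-plane distances, the axis-aligned volume representation, and the on-axis vergence triangulation are hypothetical assumptions for illustration, not the patented implementation.

```python
import math
from dataclasses import dataclass
from typing import Optional

# Hypothetical depth planes of the display, as focal distances in meters.
DEPTH_PLANES = (0.5, 1.0, 3.0)

@dataclass
class VirtualObject:
    position: tuple                               # (x, y, z) in viewer space, meters
    content_depth_plane: Optional[float] = None   # plane specified by the content itself

@dataclass
class Volume:
    """A system-defined volume spanning a 3-D portion of the field of view.

    Modeled here as an axis-aligned box purely for illustration; the claim
    allows size/shape to vary based on weighted factors (e.g. the most
    recently used application).
    """
    min_pt: tuple
    max_pt: tuple

    def contains(self, p):
        return all(lo <= c <= hi
                   for lo, c, hi in zip(self.min_pt, p, self.max_pt))

def nearest_depth_plane(z):
    """Pick the depth plane closest in diopters (1/distance) to distance z."""
    return min(DEPTH_PLANES, key=lambda d: abs(1.0 / d - 1.0 / z))

def estimate_fixation_point(ipd_m, vergence_angle_rad):
    """Triangulate a fixation point from interpupillary distance and vergence.

    With the eyes converged by `vergence_angle_rad` (full angle between the
    two gaze rays) on the central axis, the fixation distance is roughly
    (ipd / 2) / tan(angle / 2). Assumes on-axis fixation for simplicity.
    """
    z = (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)
    return (0.0, 0.0, z)

def select_depth_plane(fixation, volumes, objects_by_volume):
    """Choose the presentation depth plane per the claimed guest-user logic.

    With two or more virtual objects in the volume containing the fixation
    point, switch to the plane matching the estimated fixation point; with a
    single object, use the depth plane specified by the content itself.
    """
    for i, vol in enumerate(volumes):
        if vol.contains(fixation):
            objs = objects_by_volume.get(i, [])
            if len(objs) >= 2:
                return nearest_depth_plane(fixation[2])
            if len(objs) == 1 and objs[0].content_depth_plane is not None:
                return objs[0].content_depth_plane
    return None  # fixation falls outside all system-defined volumes
```

For example, a fixation point estimated at 1.1 m inside a volume holding two objects would switch the content to the 1.0 m plane (closest in diopters), while the same fixation in a volume with one object defers to that object's content-specified plane.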