US 11,693,242 B2
Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
Simon Fortin-Deschênes, Cupertino, CA (US); Vincent Chapdelaine-Couture, Cupertino, CA (US); Yan Côté, Cupertino, CA (US); and Anthony Ghannoum, Cupertino, CA (US)
Assigned to APPLE INC., Cupertino, CA (US)
Filed by Apple Inc., Cupertino, CA (US)
Filed on Oct. 29, 2021, as Appl. No. 17/514,082.
Application 17/514,082 is a continuation of application No. 17/032,141, filed on Sep. 25, 2020, granted, now 11,199,706.
Application 17/032,141 is a continuation of application No. 16/063,004, granted, now 10,838,206, issued on Nov. 17, 2020, previously published as PCT/CA2017/000033, filed on Feb. 20, 2017.
Claims priority of provisional application 62/296,829, filed on Feb. 18, 2016.
Prior Publication US 2022/0050290 A1, Feb. 17, 2022
Int. Cl. G02B 27/01 (2006.01); G06T 19/00 (2011.01); G02B 27/00 (2006.01)
CPC G02B 27/017 (2013.01) [G02B 27/0093 (2013.01); G06T 19/006 (2013.01); G02B 2027/014 (2013.01); G02B 2027/0134 (2013.01); G02B 2027/0138 (2013.01); G02B 2027/0187 (2013.01)] 21 Claims
[OG exemplary drawing]
1. A method comprising:
at a head-mounted device (HMD) including non-transitory memory, one or more processors, and a communications interface for communicating with first and second RGB camera sensors, first and second mono camera sensors, and a display:
obtaining, via the first and second RGB camera sensors, pass-through stereo view images of a physical environment;
obtaining, via the first and second mono camera sensors, stereo images;
obtaining a dense depth map associated with the physical environment;
performing embedded tracking based on the pass-through stereo view images from the first and second RGB camera sensors, the stereo images from the first and second mono camera sensors, and the dense depth map;
generating rendered graphics associated with virtual content based on the embedded tracking;
generating a display image by mixing the rendered graphics with the pass-through stereo view images from the first and second RGB camera sensors based on the dense depth map; and
displaying, via the display, the display image.
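The final claim step mixes the rendered graphics with the pass-through stereo view images "based on the dense depth map." One way to read this is per-pixel, occlusion-aware compositing: a virtual pixel is shown only where its rendered depth is nearer than the dense depth map of the physical environment. The following NumPy sketch illustrates that reading only; all function and variable names are hypothetical and nothing here is taken from the patent's actual implementation.

```python
import numpy as np

def composite_passthrough(passthrough, virtual_rgb, virtual_alpha,
                          virtual_depth, scene_depth):
    """Occlusion-aware mix of rendered graphics over a pass-through frame.

    A virtual pixel contributes only where its depth is nearer than the
    dense depth map of the physical environment; elsewhere the real
    scene occludes it and the pass-through pixel is kept.
    """
    # Virtual content is visible where it lies in front of the real scene.
    visible = (virtual_depth < scene_depth)[..., None]    # (H, W, 1) mask
    alpha = virtual_alpha[..., None] * visible            # alpha forced to 0 when occluded
    mixed = alpha * virtual_rgb + (1.0 - alpha) * passthrough
    return mixed.astype(passthrough.dtype)

# Tiny 1x2 example: the left pixel's virtual content sits in front of the
# real scene, the right pixel's sits behind it.
pt = np.zeros((1, 2, 3), dtype=np.float32)                # black pass-through frame
vrgb = np.ones((1, 2, 3), dtype=np.float32)               # white virtual layer
valpha = np.array([[1.0, 1.0]], dtype=np.float32)         # fully opaque virtual pixels
vdepth = np.array([[1.0, 5.0]], dtype=np.float32)         # rendered depth (metres)
sdepth = np.array([[2.0, 2.0]], dtype=np.float32)         # dense depth map (metres)
out = composite_passthrough(pt, vrgb, valpha, vdepth, sdepth)
# left pixel -> virtual (1, 1, 1); right pixel -> pass-through (0, 0, 0)
```

The same compare-and-blend would be run once per eye against each image of the stereo pass-through pair to produce the stereoscopic display image the claim recites.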