US 12,455,620 B2
Method and system for gaze-based control of mixed reality content
Jussi Ronkainen, Oulu (FI); and Jani Mantyjarvi, Oulunsalo (FI)
Assigned to InterDigital VC Holdings, Inc., Wilmington, DE (US)
Filed by InterDigital VC Holdings, Inc., Wilmington, DE (US)
Filed on Oct. 28, 2022, as Appl. No. 17/976,669.
Application 17/976,669 is a continuation of application No. 17/048,985, granted, now Pat. No. 11,513,590, previously published as PCT/US2019/027328, filed on Apr. 12, 2019.
Claims priority of provisional application 62/660,428, filed on Apr. 20, 2018.
Prior Publication US 2023/0048185 A1, Feb. 16, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/01 (2006.01); G02B 27/00 (2006.01); G02B 27/01 (2006.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01)
CPC G06F 3/013 (2013.01) [G02B 27/0093 (2013.01); G02B 27/0172 (2013.01); G06F 3/012 (2013.01); G06F 3/017 (2013.01); G06T 19/006 (2013.01); G06T 19/20 (2013.01); G02B 2027/014 (2013.01); G06T 2219/2004 (2013.01)] 21 Claims
OG exemplary drawing
 
1. A method comprising:
determining one or more objects within a view of a user of a three-dimensional (3D) scene;
generating a first visual guideline based on respective locations of the one or more objects,
wherein the first visual guideline defines a pathway through the 3D scene, the pathway traversing a plurality of depths of the 3D scene relative to the user;
displaying the first visual guideline in the view of the user of the 3D scene;
determining that a gaze point of the user indicates a position on the first visual guideline,
wherein the position on the first visual guideline corresponds to a first location at a first depth of the plurality of depths within the 3D scene; and
placing a first virtual object at the first location within the 3D scene.
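The exemplary claim above recites an algorithmic flow: detect objects in the user's view, build a guideline that traverses several depths of the scene, map the user's gaze point to a position on that guideline, and place a virtual object at the corresponding location. The following is a minimal, hypothetical sketch of that flow, not the patented implementation; the polyline guideline, the 3D gaze point, and all names (generate_guideline, position_on_guideline, place_object, VirtualObject) are assumptions introduced for illustration, and display of the guideline is omitted.

```python
# Minimal sketch of the claimed flow (hypothetical; not the patented implementation).
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class VirtualObject:
    name: str
    position: Vec3  # location in the 3D scene, relative to the user


def generate_guideline(object_positions: List[Vec3],
                       samples_per_segment: int = 10) -> List[Vec3]:
    """Build a guideline as a polyline through the detected object locations.

    The resulting pathway traverses the depths (z values) of those objects
    relative to the user, as recited in the claim.
    """
    path: List[Vec3] = []
    for (x0, y0, z0), (x1, y1, z1) in zip(object_positions, object_positions[1:]):
        for i in range(samples_per_segment):
            t = i / samples_per_segment
            path.append((x0 + t * (x1 - x0),
                         y0 + t * (y1 - y0),
                         z0 + t * (z1 - z0)))
    path.append(object_positions[-1])
    return path


def position_on_guideline(guideline: List[Vec3], gaze_point: Vec3) -> Vec3:
    """Return the guideline point closest to the user's gaze point.

    A real system would project an eye-tracked gaze ray into the scene; here
    the gaze point is assumed to already be a 3D point for simplicity.
    """
    def dist2(a: Vec3, b: Vec3) -> float:
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(guideline, key=lambda p: dist2(p, gaze_point))


def place_object(scene: List[VirtualObject], name: str, location: Vec3) -> None:
    """Place a new virtual object at the selected location in the scene."""
    scene.append(VirtualObject(name, location))


if __name__ == "__main__":
    # Objects determined to be within the user's view, at several depths (z).
    objects_in_view = [(0.0, 0.0, 1.0), (0.5, 0.2, 2.5), (-0.3, 0.1, 4.0)]

    guideline = generate_guideline(objects_in_view)   # generate guideline (display omitted)
    gaze = (0.4, 0.15, 2.4)                           # gaze point indicating a position
    target = position_on_guideline(guideline, gaze)   # first location at a first depth

    scene: List[VirtualObject] = []
    place_object(scene, "new_object", target)         # place the first virtual object
    print(f"Placed at {target}")
```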