US 12,248,654 B1
Systems and methods for generating and enabling interaction with an extended reality user interface
Dale Alan Herigstad, London (GB); Jack Turpin, Los Angeles, CA (US); and Eric Fanghanel Santibanez, London (GB)
Assigned to ISOVIST LIMITED, London (GB)
Filed by Isovist Limited, London (GB)
Filed on Feb. 16, 2024, as Appl. No. 18/444,179.
Int. Cl. G06F 3/04815 (2022.01); G06F 3/01 (2006.01); G06F 3/0482 (2013.01); G06F 3/0485 (2022.01); G06F 3/0488 (2022.01)
CPC G06F 3/04815 (2013.01) [G06F 3/017 (2013.01); G06F 3/0482 (2013.01); G06F 3/0485 (2013.01); G06F 3/0488 (2013.01); G06F 2203/04808 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for generating and enabling interaction with an extended reality user interface, the method comprising:
detecting a launch of an extended reality application on a computing device;
generating, in a first virtual zone of a plurality of virtual zones in an extended reality user interface, a first plurality of selectable elements in the extended reality application;
identifying a surface of a physical object in a real world environment of the computing device;
mapping the plurality of virtual zones to a plurality of physical zones on the surface of the physical object such that gestures made on a respective physical zone on the surface of the physical object trigger execution of interaction actions on a respective virtual zone mapped to the respective physical zone, wherein the mapping comprises:
assigning (1) a first physical zone to the first virtual zone situated on a left side of the extended reality application, (2) a second physical zone to a second virtual zone situated on a right side of the extended reality application, (3) a third physical zone to a third virtual zone situated on a bottom side of the extended reality application, and (4) a fourth physical zone to a primary virtual zone situated on a center area of the extended reality application,
wherein relative positions of each of the plurality of physical zones on the surface of the physical object match relative positions of each of the plurality of virtual zones on the extended reality application,
wherein each virtual zone of the plurality of virtual zones is configured to execute specific commands based on input gestures received via the plurality of physical zones such that the first virtual zone and the second virtual zone execute commands for scrolling and selecting listed elements, the primary virtual zone executes commands for viewing a selected element from the listed elements, and the third virtual zone executes commands for modifying the selected element;
detecting a first gesture made in contact with the first physical zone of the plurality of physical zones on the surface of the physical object; and
in response to determining that the first physical zone corresponds to the first virtual zone, executing a first interaction action on the first plurality of selectable elements based on the first gesture.
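The claimed zone mapping and gesture dispatch could be sketched roughly as follows. This is a minimal illustration only, not the patented implementation: every identifier, zone name, gesture name, and command string below is an assumption introduced for clarity, none of them appear in the claim.

```python
# Illustrative sketch of the claim's zone mapping; all names are hypothetical.
# The claim assigns left, right, and bottom physical zones to matching virtual
# zones, and a fourth (center) physical zone to a primary virtual zone.
ZONE_MAP = {
    "physical_left": "virtual_left",
    "physical_right": "virtual_right",
    "physical_bottom": "virtual_bottom",
    "physical_center": "virtual_primary",
}

# Zone-specific commands per the claim: the left and right virtual zones
# scroll and select listed elements, the primary zone views the selected
# element, and the bottom zone modifies it.
ZONE_COMMANDS = {
    "virtual_left": {"swipe": "scroll_list", "tap": "select_element"},
    "virtual_right": {"swipe": "scroll_list", "tap": "select_element"},
    "virtual_primary": {"tap": "view_selected_element"},
    "virtual_bottom": {"swipe": "modify_selected_element"},
}

def dispatch_gesture(physical_zone: str, gesture: str):
    """Map a gesture made on a physical zone to the interaction action of
    the virtual zone mapped to it; return None if nothing is mapped."""
    virtual_zone = ZONE_MAP.get(physical_zone)
    if virtual_zone is None:
        return None
    return ZONE_COMMANDS.get(virtual_zone, {}).get(gesture)

# Example: a tap on the left physical zone selects a listed element.
print(dispatch_gesture("physical_left", "tap"))  # select_element
```

The dictionary lookup stands in for the claim's "determining that the first physical zone corresponds to the first virtual zone" step; a real system would resolve zones from surface-tracking coordinates rather than string keys.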