US 12,443,324 B2
Systems, methods, and graphical user interfaces for interacting with augmented and virtual reality environments
Mark K. Hauenstein, San Francisco, CA (US); Joseph A. Malia, Isle of Wight (GB); Julian K. Missig, Burlingame, CA (US); Matthaeus Krenn, Sunnyvale, CA (US); and Jeffrey T. Bernstein, San Francisco, CA (US)
Assigned to APPLE INC., Cupertino, CA (US)
Filed by Apple Inc., Cupertino, CA (US)
Filed on Apr. 27, 2023, as Appl. No. 18/140,557.
Application 18/140,557 is a continuation of application No. 17/488,191, filed on Sep. 28, 2021, granted, now Pat. No. 11,740,755.
Application 17/488,191 is a continuation of application No. 16/116,276, filed on Aug. 29, 2018, granted, now Pat. No. 11,163,417, issued on Nov. 2, 2021.
Claims priority of provisional application 62/564,984, filed on Sep. 28, 2017.
Claims priority of provisional application 62/553,063, filed on Aug. 31, 2017.
Prior Publication US 2023/0305674 A1, Sep. 28, 2023
Int. Cl. G06F 3/04815 (2022.01); G06F 3/01 (2006.01); G06F 3/0484 (2022.01); G06F 3/04845 (2022.01); G06F 3/04883 (2022.01); G06T 19/00 (2011.01)
CPC G06F 3/04815 (2013.01) [G06F 3/011 (2013.01); G06F 3/012 (2013.01); G06F 3/014 (2013.01); G06F 3/017 (2013.01); G06F 3/0484 (2013.01); G06F 3/04845 (2013.01); G06F 3/04883 (2013.01); G06T 19/006 (2013.01)] 31 Claims
OG exemplary drawing
 
1. A method, comprising:
at a computer system having, or in communication with, a display generation component, one or more cameras, one or more attitude sensors, and an input device:
displaying, via the display generation component, a simulated environment in a first viewing mode oriented relative to a physical environment of the computer system, wherein displaying the simulated environment in the first viewing mode includes:
displaying a live view, from the one or more cameras, of the physical environment of the computer system, including a representation of one or more physical objects in the physical environment of the computer system captured by the one or more cameras and a first virtual user interface object in a virtual model that is displayed at a first respective location in the simulated environment that is associated with the physical environment of the computer system; and
displaying the first virtual user interface object with a fixed spatial relationship between the first virtual user interface object and the physical environment of the computer system;
while displaying the simulated environment in the first viewing mode:
detecting, via the one or more attitude sensors, a first change in attitude of at least a portion of the computer system relative to the physical environment of the computer system; and
in response to detecting the first change in the attitude of the portion of the computer system, changing an appearance of the first virtual user interface object in the virtual model while maintaining the fixed spatial relationship between the first virtual user interface object and the physical environment of the computer system;
after changing the appearance of the first virtual user interface object based on the first change in the attitude of the portion of the computer system, detecting, via the input device, a first gesture that corresponds to an interaction with the simulated environment in the first viewing mode;
in response to detecting the first gesture that corresponds to the interaction with the simulated environment, performing an operation in the simulated environment that corresponds to the first gesture, the operation including:
in accordance with a determination that the first gesture met mode change criteria, wherein the mode change criteria include a requirement that the first gesture corresponds to an input that changes a spatial parameter of the simulated environment relative to the physical environment of the computer system, transitioning from displaying the simulated environment, including the virtual model, in the first viewing mode to displaying the simulated environment, including the virtual model, in a second viewing mode;
after performing the operation that corresponds to the first gesture, detecting, via the one or more attitude sensors, a second change in attitude of the portion of the computer system relative to the physical environment of the computer system; and
in response to detecting the second change in the attitude of the portion of the computer system:
in accordance with a determination that the first gesture met the mode change criteria, maintaining display of the virtual model in the simulated environment in the second viewing mode without displaying a live view, from the one or more cameras, of the physical environment of the computer system; and
in accordance with a determination that the first gesture did not meet the mode change criteria, continuing to display the virtual model in the simulated environment in the first viewing mode, wherein displaying the virtual model in the first viewing mode includes changing an appearance of the first virtual user interface object in the virtual model in response to the second change in attitude of the portion of the computer system relative to the physical environment of the computer system, so as to maintain the fixed spatial relationship between the first virtual user interface object and the physical environment of the computer system.
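The claimed method can be read as a small state machine: in the first viewing mode the virtual object is world-locked (its displayed appearance updates with device attitude so its spatial relationship to the physical environment stays fixed), and a gesture that changes a spatial parameter of the simulated environment triggers a transition to a second viewing mode in which the live camera view is no longer shown and the model no longer tracks device attitude. The sketch below is purely illustrative and is not the patented implementation; all names (`SimulatedEnvironment`, `Gesture`, `ViewingMode`, and the scalar attitude model) are hypothetical simplifications of the claim language.

```python
# Illustrative sketch (not the patented implementation) of the claimed
# viewing-mode logic. Attitude is reduced to a single rotation angle.
from dataclasses import dataclass
from enum import Enum, auto


class ViewingMode(Enum):
    FIRST = auto()   # AR-like: live camera view, object world-locked
    SECOND = auto()  # VR-like: virtual model only, no live camera view


@dataclass
class Gesture:
    # Per the claim, the mode-change criteria require an input that changes
    # a spatial parameter of the simulated environment relative to the
    # physical environment (e.g., a pinch that rescales the virtual model).
    changes_spatial_parameter: bool


class SimulatedEnvironment:
    def __init__(self) -> None:
        self.mode = ViewingMode.FIRST
        self.object_orientation = 0.0  # displayed appearance of the object

    def on_attitude_change(self, delta: float) -> None:
        # In the first viewing mode, counter-rotate the virtual object so it
        # keeps a fixed spatial relationship with the physical environment.
        # In the second viewing mode, the model is displayed independently
        # of device attitude, so nothing changes here.
        if self.mode is ViewingMode.FIRST:
            self.object_orientation -= delta

    def on_gesture(self, gesture: Gesture) -> None:
        # Mode-change criteria met: transition to the second viewing mode.
        if gesture.changes_spatial_parameter:
            self.mode = ViewingMode.SECOND


env = SimulatedEnvironment()
env.on_attitude_change(10.0)   # first mode: appearance updates to stay locked
env.on_gesture(Gesture(changes_spatial_parameter=True))
env.on_attitude_change(5.0)    # second mode: object no longer tracks attitude
```

After this sequence, `env.mode` is `ViewingMode.SECOND` and `env.object_orientation` is `-10.0`: only the attitude change detected before the mode-change gesture affected the object's appearance, mirroring the claim's branch between maintaining the second-mode display and continuing first-mode world-locking.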