US 12,422,916 B2
Method and device for dynamic sensory and input modes based on contextual state
Bryce L. Schmidtchen, San Francisco, CA (US); Brian W. Temple, Santa Clara, CA (US); and Devin W. Chalmers, Oakland, CA (US)
Assigned to APPLE INC., Cupertino, CA (US)
Appl. No. 18/291,979
Filed by Apple Inc., Cupertino, CA (US)
PCT Filed Jul. 13, 2022, PCT No. PCT/US2022/037010
§ 371(c)(1), (2) Date Jan. 25, 2024,
PCT Pub. No. WO2023/009318, PCT Pub. Date Feb. 2, 2023.
Claims priority of provisional application 63/325,148, filed on Mar. 30, 2022.
Claims priority of provisional application 63/226,981, filed on Jul. 29, 2021.
Prior Publication US 2024/0219998 A1, Jul. 4, 2024
Int. Cl. G06F 3/01 (2006.01); G06T 19/20 (2011.01)
CPC G06F 3/011 (2013.01) [G06T 19/20 (2013.01); G06T 2200/24 (2013.01); G06T 2219/2016 (2013.01)] 25 Claims
OG exemplary drawing
 
1. A method comprising:
at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices:
obtaining a first characterization vector including at least a first location, a first motion state, a first body pose, and a first gaze direction;
while in a first contextual state, presenting extended reality (XR) content, via the display device, according to a first presentation mode and enabling a first set of input modes to be directed to the XR content, wherein the first contextual state is based on the first characterization vector;
detecting a change from the first contextual state to a second contextual state; and
in response to detecting the change from the first contextual state to the second contextual state, presenting, via the display device, the XR content according to a second presentation mode different from the first presentation mode and enabling a second set of input modes, different from the first set of input modes, to be directed to the XR content.
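
Claim 1 describes a pipeline: build a characterization vector from location, motion state, body pose, and gaze direction; map that vector to a contextual state; and, when the contextual state changes, switch both the presentation mode of the XR content and the set of input modes directed to it. The following is a minimal sketch of that flow in plain Swift, not the claimed implementation; every type, name, and mapping shown (CharacterizationVector, ContextualState, the focused/on-the-go split, the mode tables) is a hypothetical placeholder chosen only to illustrate the structure of the claim.

```swift
import Foundation

// Hypothetical characterization vector combining the signals named in the claim.
struct CharacterizationVector {
    var location: String              // e.g. "office", "hallway"
    var motionState: MotionState
    var bodyPose: BodyPose
    var gazeDirection: SIMD3<Double>? // unit vector, if available
}

enum MotionState { case stationary, walking, running }
enum BodyPose { case seated, standing, lying }

// Contextual states derived from the characterization vector (illustrative only).
enum ContextualState { case focused, onTheGo }

// Presentation modes and input modes that switch with the contextual state.
enum PresentationMode { case worldLocked, headLocked }
enum InputMode: Hashable { case gaze, handGesture, voice, physicalController }

struct ModeConfiguration {
    var presentation: PresentationMode
    var inputModes: Set<InputMode>
}

// Illustrative mapping from contextual state to presentation and input modes.
func configuration(for state: ContextualState) -> ModeConfiguration {
    switch state {
    case .focused:
        return ModeConfiguration(presentation: .worldLocked,
                                 inputModes: [.gaze, .handGesture, .physicalController])
    case .onTheGo:
        return ModeConfiguration(presentation: .headLocked,
                                 inputModes: [.gaze, .voice])
    }
}

// Illustrative classifier: derive a contextual state from the vector.
func contextualState(for vector: CharacterizationVector) -> ContextualState {
    if vector.motionState == .stationary && vector.bodyPose == .seated {
        return .focused
    }
    return .onTheGo
}

// Track the current state; reconfigure presentation and input when it changes.
var currentState = contextualState(for: CharacterizationVector(
    location: "office", motionState: .stationary, bodyPose: .seated, gazeDirection: nil))
var currentConfig = configuration(for: currentState)

func update(with vector: CharacterizationVector) {
    let newState = contextualState(for: vector)
    guard newState != currentState else { return }   // detect the change
    currentState = newState
    currentConfig = configuration(for: newState)      // switch both modes together
    print("Switched to \(newState): \(currentConfig)")
}

// Example: the user stands up and starts walking, so presentation and input modes change.
update(with: CharacterizationVector(
    location: "hallway", motionState: .walking, bodyPose: .standing, gazeDirection: nil))
```

The sketch keeps the presentation mode and the enabled input modes in one configuration value so that a single contextual-state transition updates both together, mirroring the way the claim ties the second presentation mode and the second set of input modes to the same detected change.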