US 12,256,211 B2
Immersive augmented reality experiences using spatial audio
Ilteris Canberk, Marina Del Rey, CA (US); Shin Hwun Kang, Los Angeles, CA (US); and James Powderly, Venice, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on May 16, 2023, as Appl. No. 18/198,055.
Application 18/198,055 is a continuation of application No. 17/342,031, filed on Jun. 8, 2021, granted, now 11,689,877.
Application 17/342,031 is a continuation of application No. 16/836,363, filed on Mar. 31, 2020, granted, now 11,089,427, issued on Aug. 10, 2021.
Prior Publication US 2023/0292077 A1, Sep. 14, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04S 7/00 (2006.01); G02B 27/01 (2006.01); G06T 19/00 (2011.01); H04W 4/029 (2018.01)
CPC H04S 7/303 (2013.01) [G02B 27/0176 (2013.01); G06T 19/006 (2013.01); H04W 4/029 (2018.02); G02B 2027/014 (2013.01); H04R 2499/15 (2013.01); H04S 2400/11 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A method for use with a device configured to be head mounted on a user, the device comprising a processor, at least one image sensor, at least one speaker that produces at least three directional audio zones, and a wireless communication component that is operatively connected to a server system through a network, the server system storing previously obtained information, the method comprising:
capturing, using the at least one image sensor, images in an environment of the device;
identifying at least one of an object or feature within the captured images;
retrieving the previously obtained information from the server system;
storing the previously obtained information in a memory of the device;
orienting the device with respect to at least one object or feature in the environment by identifying a match between the at least one object or feature in the captured images and at least one stored object or feature in the previously obtained information including position information for location points associated with the at least one object or feature;
determining a position of the device within the environment with respect to a first matched object or feature;
determining a target location within the environment that may be associated with the first matched object or feature;
determining a current orientation of the device with respect to the target location; and
selectively emitting audio signals from the at least one speaker in respective directional audio zones responsive to the current orientation to guide the user to the target location.
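The final guidance step can be illustrated with a minimal sketch: given the device's position and heading and the target location, compute the target's bearing relative to the device and map it onto one of three directional audio zones. All function names, the zone labels, and the zone half-width are illustrative assumptions, not terms used in the patent.

```python
import math

def relative_bearing(device_pos, device_heading_deg, target_pos):
    """Angle from the device's facing direction to the target, in [-180, 180).

    Positions are (x, y) pairs in a shared environment frame; heading 0 deg
    faces +y, and angles increase clockwise (compass-style convention).
    """
    dx = target_pos[0] - device_pos[0]
    dy = target_pos[1] - device_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # bearing of target from device
    # Normalize the difference into [-180, 180) so left/right is unambiguous.
    return (bearing - device_heading_deg + 180.0) % 360.0 - 180.0

def select_zone(rel_deg, half_width_deg=30.0):
    """Map a relative bearing onto one of three directional audio zones.

    The 30-degree center band is an assumed tuning parameter; a real device
    would pick zone boundaries to match its speaker layout.
    """
    if rel_deg < -half_width_deg:
        return "left"
    if rel_deg > half_width_deg:
        return "right"
    return "center"
```

For example, a device at the origin facing +y would emit from the "left" zone for a target at (-10, 0), steering the user to turn left; once the user rotates so the target falls within the center band, emission shifts to the "center" zone.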