US 11,863,963 B2
Augmented reality spatial audio experience
Ilteris Canberk, Marina Del Rey, CA (US); and Shin Hwun Kang, Los Angeles, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Nov. 17, 2021, as Appl. No. 17/528,713.
Claims priority of provisional application 63/122,349, filed on Dec. 7, 2020.
Prior Publication US 2022/0182777 A1, Jun. 9, 2022
Int. Cl. H04S 7/00 (2006.01); G02B 27/01 (2006.01)
CPC H04S 7/303 (2013.01) [G02B 27/017 (2013.01); G02B 2027/0138 (2013.01); G02B 2027/0178 (2013.01); H04S 2400/11 (2013.01)] 16 Claims
OG exemplary drawing
 
1. An eyewear device comprising:
a speaker system;
at least one image sensor having a field of view;
a display having a viewing area corresponding to the field of view;
a support structure configured to be head-mounted on a user, the support structure supporting the speaker system and the at least one image sensor; and
a processor, a memory, and programming in said memory, wherein execution of said programming by said processor configures the eyewear device to:
capture, with the at least one image sensor, image information of an environment surrounding the eyewear device;
identify an object location within the environment;
associate a virtual object with the identified object location;
monitor position of the eyewear device with respect to the virtual object responsive to the captured image information;
determine when the object location is within the viewing area of the display;
present, on the display, video signals including the virtual object in the object location responsive to the monitored position when the identified object location is determined to be within the viewing area;
randomly select an object type for the virtual object from at least a first object type and a second object type, the first object type associated with a first set of animation states and the second object type associated with a second set of animation states;
wherein to present the virtual object the eyewear device is configured to present the first set of animation states when the first object type is selected and to present the second set of animation states when the second object type is selected;
wherein the first object type is associated with a positive score and the second object type is associated with a negative score and wherein execution of the programming by said processor further configures the eyewear device to:
maintain a tally for the user;
detect when the eyewear device is within a predefined threshold of the object location; and
increase the tally by the positive score when the virtual object has the first object type and decrease the tally by the negative score when the virtual object has the second object type; and
present audio signals, with the speaker system, responsive to the monitored position to alert the user that the identified object is in the environment.
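The scoring and proximity logic recited in the claim (random selection between a positive-score and a negative-score object type, a per-user tally, a distance threshold, and a position-responsive audio cue) can be sketched as follows. This is a minimal illustration, not the patented implementation: the type names, score values, threshold, and the simple stereo-pan heuristic are all assumptions chosen for clarity.

```python
import math
import random

# Hypothetical object types mirroring the claim: a "first" type with a
# positive score and a "second" type with a negative score, each tied to
# its own set of animation states.
OBJECT_TYPES = {
    "friendly": {"score": +10, "animations": ["idle", "wave", "celebrate"]},
    "hostile":  {"score": -10, "animations": ["idle", "lurk", "pounce"]},
}

PROXIMITY_THRESHOLD = 0.5  # meters; placeholder value


def select_object_type(rng=random):
    """Randomly select an object type, per the claim's random-selection step."""
    return rng.choice(list(OBJECT_TYPES))


def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def update_tally(tally, object_type, device_pos, object_pos,
                 threshold=PROXIMITY_THRESHOLD):
    """Apply the claim's scoring rule: when the eyewear device is within
    the predefined threshold of the object location, increase the tally
    for the positive-score type or decrease it for the negative-score type."""
    if distance(device_pos, object_pos) <= threshold:
        tally += OBJECT_TYPES[object_type]["score"]
    return tally


def stereo_pan(device_pos, device_yaw, object_pos):
    """Crude position-responsive audio cue: pan an alert sound left/right
    based on the object's bearing relative to the device heading.
    Returns a value in [-1.0, 1.0] (-1 = hard left, +1 = hard right)."""
    dx = object_pos[0] - device_pos[0]
    dz = object_pos[2] - device_pos[2]
    bearing = math.atan2(dx, dz) - device_yaw
    return max(-1.0, min(1.0, math.sin(bearing)))
```

For example, an object directly to the device's right would pan fully right (`stereo_pan((0, 0, 0), 0.0, (1, 0, 0))` returns `1.0`), while walking within the threshold of a "friendly" object adds its positive score to the tally.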