US 12,264,931 B1
Navigation assistance using spatial audio
Katherine S. Shigeoka, Santa Cruz, CA (US); Jonathan D. Sheaffer, San Jose, CA (US); and Andrew P. Bright, San Francisco, CA (US)
Assigned to Apple Inc., Cupertino, CA (US)
Filed by Apple Inc., Cupertino, CA (US)
Filed on Feb. 6, 2020, as Appl. No. 16/783,929.
Claims priority of provisional application 62/804,656, filed on Feb. 12, 2019.
Int. Cl. G01C 21/36 (2006.01); G02B 27/01 (2006.01); G06V 20/10 (2022.01); H04R 3/04 (2006.01); H04R 5/033 (2006.01); H04R 5/04 (2006.01)
CPC G01C 21/3629 (2013.01) [G02B 27/0172 (2013.01); G06V 20/10 (2022.01); H04R 3/04 (2013.01); H04R 5/033 (2013.01); H04R 5/04 (2013.01); G02B 2027/0138 (2013.01); G02B 2027/0178 (2013.01); H04R 2460/01 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
capturing, using a plurality of microphones, ambient sound of an environment in which a user wearing a head-worn device is located as a plurality of microphone audio signals;
capturing, using a camera, a scene of the environment as image data;
processing the image data to detect an object contained therein;
determining that the object is beyond a threshold distance from a future portion of a predicted travel path of the user; and
in response to determining that the object is beyond the threshold distance from the future portion of the predicted travel path of the user, selecting an audio rendering mode in which an acoustic transparency function is at least partially activated which causes at least one speaker of a plurality of speakers to reproduce at least a portion of the ambient sound.
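The decision step recited in the claim — comparing a detected object's distance from the predicted travel path against a threshold, then selecting a rendering mode — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `Point`, `distance_to_path`, and `select_rendering_mode` names, the 2-D path sampling, and the mode labels are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float

def distance_to_path(obj: Point, path: list[Point]) -> float:
    """Minimum distance from a detected object to a sampled travel path.

    The path is approximated as a list of sample points; a real system
    might instead measure distance to path segments or a spline.
    """
    return min(math.hypot(obj.x - p.x, obj.y - p.y) for p in path)

def select_rendering_mode(obj: Point, future_path: list[Point],
                          threshold_m: float) -> str:
    # Per claim 1: if the object is beyond the threshold distance from
    # the future portion of the predicted path, at least partially
    # activate acoustic transparency so ambient sound is reproduced.
    # (Mode names here are hypothetical labels, not claimed terms.)
    if distance_to_path(obj, future_path) > threshold_m:
        return "transparency_partial"
    return "default"
```

For example, an object 9 m off a short predicted path with a 2 m threshold selects the transparency mode, while an object 0.5 m from the path does not. Note the claim's logic passes ambient sound through for objects *away from* the path; objects on or near the path presumably remain handled by the default (non-transparent) rendering.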