US 12,141,902 B2
System and methods for resolving audio conflicts in extended reality environments
Susanto Sen, Karnataka (IN); and Ankur Anil Aher, Maharashtra (IN)
Assigned to Adeia Guides Inc., San Jose, CA (US)
Filed by Adeia Guides Inc., San Jose, CA (US)
Filed on Apr. 22, 2022, as Appl. No. 17/727,512.
Application 17/727,512 is a continuation of application No. 16/917,853, filed on Jun. 30, 2020, granted, now Pat. No. 11,341,697.
Prior Publication US 2023/0010548 A1, Jan. 12, 2023
Int. Cl. G06T 11/00 (2006.01); G06T 11/60 (2006.01); G10L 15/22 (2006.01); G10L 15/26 (2006.01); G10L 21/10 (2013.01)
CPC G06T 11/60 (2013.01) [G10L 15/22 (2013.01); G10L 15/26 (2013.01); G10L 21/10 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A method comprising:
receiving, using control circuitry, audio information from an audio source that corresponds to a visual user representation in an extended reality environment;
determining, based on analysis of other audio sources in the extended reality environment, to translate the audio information into translated text;
determining a first area adjacent the visual user representation, wherein the first area is determined such that the first area is free of an extended reality object of the extended reality environment and is at a first position relative to the visual user representation;
generating display, at a user interface, of the translated text in the first area adjacent to the visual user representation;
based at least in part on detecting a first movement of the visual user representation in the extended reality environment, determining a second area adjacent the visual user representation, wherein the second area is distinct from the first area and is determined such that the second area is free of the extended reality object of the extended reality environment and is at a second position relative to the visual user representation;
generating display, at the user interface, of the translated text in the second area adjacent the visual user representation;
based at least in part on detecting a second movement of the visual user representation in the extended reality environment, determining a third area adjacent the visual user representation, wherein the third area is distinct from the second area and is determined such that the third area is free of the extended reality object of the extended reality environment and is at the first position relative to the visual user representation; and
generating display, at the user interface, of the translated text in the third area adjacent the visual user representation.
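The claim above amounts to repeatedly choosing, from candidate areas adjacent the visual user representation, one that is free of extended reality objects, and re-selecting a distinct area each time the representation moves. The patent does not disclose an implementation; the following is a minimal 2-D sketch of that area-selection step, in which all names, candidate offsets, and text-box sizes are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned 2-D rectangle: lower-left corner (x, y), width w, height h."""
    x: float
    y: float
    w: float
    h: float

    def intersects(self, other: "Rect") -> bool:
        # Rectangles overlap unless one lies entirely to one side of the other.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x
                    or self.y + self.h <= other.y or other.y + other.h <= self.y)

# Candidate positions relative to the avatar, in preference order (assumed values).
OFFSETS = [("right", 1.2, 0.0), ("left", -1.2, 0.0), ("above", 0.0, 1.5)]

TEXT_W, TEXT_H = 1.0, 0.4  # assumed size of the translated-text box

def place_caption(avatar_x, avatar_y, objects, avoid=None):
    """Return (name, Rect) for the first candidate area free of XR objects.

    `avoid` lets the caller exclude the previously used position so that,
    after the avatar moves, the newly chosen area is distinct from the old
    one (cf. the claim's second area being distinct from the first).
    Returns None if every candidate area is occupied.
    """
    for name, dx, dy in OFFSETS:
        if name == avoid:
            continue
        area = Rect(avatar_x + dx, avatar_y + dy, TEXT_W, TEXT_H)
        if not any(area.intersects(obj) for obj in objects):
            return name, area
    return None
```

The claimed movement-triggered repositioning would then correspond to calling `place_caption` again with the avatar's updated coordinates, passing the prior area's name as `avoid`; returning to the first relative position after a second movement (the claim's third area) falls out naturally once the first candidate is free again.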