CPC G06T 11/60 (2013.01) [G10L 15/22 (2013.01); G10L 15/26 (2013.01); G10L 21/10 (2013.01)]
17 Claims
1. A method comprising:
receiving, using control circuitry, audio information from an audio source that corresponds to a visual user representation in an extended reality environment;
determining, based on analysis of other audio sources in the extended reality environment, to translate the audio information into translated text;
determining a first area adjacent the visual user representation, wherein the first area is determined such that the first area is free of an extended reality object of the extended reality environment and is at a first position relative to the visual user representation;
generating display, at a user interface, of the translated text in the first area adjacent the visual user representation;
based at least in part on detecting a first movement of the visual user representation in the extended reality environment, determining a second area adjacent the visual user representation, wherein the second area is distinct from the first area and is determined such that the second area is free of the extended reality object of the extended reality environment and is at a second position relative to the visual user representation;
generating display, at the user interface, of the translated text in the second area adjacent the visual user representation;
based at least in part on detecting a second movement of the visual user representation in the extended reality environment, determining a third area adjacent the visual user representation, wherein the third area is distinct from the second area and is determined such that the third area is free of the extended reality object of the extended reality environment and is at the first position relative to the visual user representation; and
generating display, at the user interface, of the translated text in the third area adjacent the visual user representation.