| CPC H04S 7/40 (2013.01) [G06F 3/167 (2013.01); H04R 1/326 (2013.01); H04R 3/005 (2013.01); H04S 2400/11 (2013.01); H04S 2400/15 (2013.01)] | 18 Claims |

1. A method, comprising:
receiving sound signals from sound sources in an environment into a microphone array;
performing sound source separation to separate the sound sources;
performing sound source localization to locate each of the sound sources;
generating sound information based on the sound source separation and the sound source localization;
presenting a layered visual representation of each of the sound sources in a user interface of a device, wherein the layered visual representation of each of the sound sources comprises a first layer of information about the sound source and a second layer of information about the sound source, the first and second layers of information being presented in response to receipt of user input;
presenting haptic guidance to a user, wherein the haptic guidance includes a haptic response, wherein the haptic response includes a vibration configured to indicate at least a location of the sound source relative to the user, and wherein different vibration patterns correspond to different locations; and
presenting visual guidance to the user in the user interface regarding each of the sound sources in the environment, the visual guidance including the sound information and providing situational awareness to the user in the environment,
wherein the sound information includes a sound pressure level (SPL)/loudness of each sound source, a direction to each sound source, a distance to each sound source, and/or a trajectory of each sound source,
wherein a size of a graphical representation of each sound source corresponds to a magnitude of sound signals therefrom,
wherein graphical representations of the sound sources are overlaid onto video frames, and
wherein the first and second layers of information about each sound source are overlaid over a respective graphical representation of each sound source.
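The claimed pipeline can be loosely illustrated in code. The sketch below is not the patented implementation; all function and field names (`sound_info`, `marker_size`, `vibration_pattern`, the dB mapping range, and the quadrant-based patterns) are assumptions chosen for illustration. It shows the claim's "sound information" (SPL, direction, distance) for one separated source, a marker size proportional to signal magnitude, and a direction-dependent vibration pattern.

```python
import math

def sound_info(samples, source_xy, listener_xy=(0.0, 0.0)):
    """Illustrative 'sound information' for one localized source:
    SPL (dB re unit amplitude), direction, and distance to the listener."""
    # RMS amplitude of the separated signal -> SPL in dB.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    spl_db = 20.0 * math.log10(max(rms, 1e-12))
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    return {
        "spl_db": spl_db,
        "direction_deg": math.degrees(math.atan2(dy, dx)) % 360.0,
        "distance_m": math.hypot(dx, dy),
    }

def marker_size(spl_db, min_px=8, max_px=64):
    """Marker size grows with loudness, per 'a size of a graphical
    representation ... corresponds to a magnitude of sound signals'.
    The -60..0 dB mapping range is an illustrative assumption."""
    t = max(0.0, min(1.0, (spl_db + 60.0) / 60.0))
    return round(min_px + t * (max_px - min_px))

def vibration_pattern(direction_deg):
    """Distinct vibration patterns for different source locations,
    per the haptic-guidance limitation. Quadrant mapping is illustrative;
    each pattern is a sequence of (on_ms, off_ms) pulses."""
    sector = int(direction_deg // 90) % 4  # front-right/front-left/rear-left/rear-right
    return [(100, 100), (50, 200), (200, 50), (50, 50)][sector]
```

For example, a source of constant amplitude 0.1 at (1, 1) yields an SPL of -20 dB, a direction of 45 degrees, and a distance of about 1.41 m, which would render as a mid-sized marker with the first vibration pattern.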