CPC G01C 21/3629 (2013.01) [G01C 21/3608 (2013.01); G01C 21/3644 (2013.01); G01C 21/3691 (2013.01); G06N 20/20 (2019.01)] — 20 Claims

1. A method for generating context-aware audio navigation instructions in a vehicle, the method comprising:
training, by one or more processors, a machine learning model for identifying a plurality of audio navigation instruction parameters for a particular context using (i) a plurality of sensor signals in the vehicle, the sensor signals being descriptive of a context in which audio navigation instructions are provided, and (ii) an indication of whether a driver correctly responded to the audio navigation instructions, wherein the sensor signals descriptive of the context include at least one of: (i) visibility data indicative of weather conditions surrounding the vehicle or a time of day, (ii) audio data indicative of noise levels at or around the vehicle, or (iii) traffic data indicative of traffic conditions surrounding the vehicle;
determining, by the one or more processors, a navigation instruction to be provided to a user;
generating, by the one or more processors, an audio navigation instruction based on the determined navigation instruction, including:
receiving one or more sensor signals, and
applying the machine learning model to the determined navigation instruction and the received one or more sensor signals to generate at least one audio navigation instruction parameter for the audio navigation instruction, wherein the audio navigation instruction is generated with at least one of: a high level of detail including a landmark as a location for a maneuver, or a low level of detail including an intersection as the location for the maneuver, wherein the low level of detail is lower than the high level of detail; and
providing the audio navigation instruction for presenting to the user via a speaker, wherein the audio navigation instruction is dynamically and automatically adapted to the context in real-time.
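The claim above can be illustrated with a minimal sketch. This is not the patented implementation; all function names, the context discretization thresholds, and the toy training data are hypothetical. It shows the general pattern the claim recites: learn from (context, instruction parameter, driver-response) records which level of detail works best per context, then apply that learned mapping at instruction-generation time.

```python
from collections import Counter, defaultdict

def train_detail_model(records):
    """Learn, per discretized context, which detail level ('high' = landmark,
    'low' = intersection) most often produced a correct driver response.
    A stand-in for the claimed machine learning model (illustrative only)."""
    outcomes = defaultdict(Counter)
    for ctx, detail, driver_responded_correctly in records:
        if driver_responded_correctly:
            outcomes[ctx][detail] += 1
    return {ctx: counts.most_common(1)[0][0] for ctx, counts in outcomes.items()}

def context_bucket(visibility, noise_db, traffic):
    """Discretize raw sensor signals (visibility, ambient noise, traffic)
    into a coarse context key. Thresholds here are arbitrary assumptions."""
    return (
        "low_vis" if visibility < 0.5 else "clear",
        "noisy" if noise_db > 70 else "quiet",
        traffic,
    )

def generate_instruction(model, maneuver, landmark, intersection, ctx):
    """Apply the learned model to pick the instruction parameter:
    high detail names a landmark, low detail names an intersection.
    Unseen contexts fall back to high detail (a design assumption)."""
    detail = model.get(ctx, "high")
    location = landmark if detail == "high" else intersection
    return f"{maneuver} at {location}"

# Toy training data: (context, detail level used, driver responded correctly).
records = [
    (("clear", "quiet", "light"), "low", True),
    (("clear", "quiet", "light"), "high", False),
    (("low_vis", "noisy", "heavy"), "high", True),
    (("low_vis", "noisy", "heavy"), "low", False),
]
model = train_detail_model(records)

# Inference: poor visibility, loud cabin, heavy traffic -> high-detail instruction.
ctx = context_bucket(visibility=0.2, noise_db=80, traffic="heavy")
print(generate_instruction(model, "Turn right", "the gas station", "5th and Main", ctx))
# → Turn right at the gas station
```

In a production system the counting model would be replaced by a trained classifier and the context buckets by continuous sensor features, but the flow matches the claim: receive sensor signals, apply the model to the determined navigation instruction, and emit an audio instruction whose detail level is adapted to the context.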