US 11,877,143 B2
Parameterized modeling of coherent and incoherent sound
Nikunj Raghuvanshi, Redmond, WA (US); Andrew Stewart Allen, San Diego, CA (US); and John Michael Snyder, Redmond, WA (US)
Assigned to Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed by Microsoft Technology Licensing, LLC, Redmond, WA (US)
Filed on Dec. 30, 2021, as Appl. No. 17/565,878.
Claims priority of provisional application 63/285,873, filed on Dec. 3, 2021.
Prior Publication US 2023/0179945 A1, Jun. 8, 2023
Int. Cl. H04S 7/00 (2006.01)
CPC H04S 7/305 (2013.01) [H04S 7/303 (2013.01); H04S 2400/01 (2013.01); H04S 2420/01 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
generating directional impulse responses for a scene, the directional impulse responses corresponding to sound departing from multiple sound source locations and arriving at multiple listener locations in the scene;
processing the directional impulse responses to obtain coherent sound signals and incoherent sound signals that at least partially overlap in time with the coherent sound signals;
encoding first perceptual acoustic parameters from the coherent sound signals and second perceptual acoustic parameters from the incoherent sound signals; and
outputting the encoded first perceptual acoustic parameters and the encoded second perceptual acoustic parameters.
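The claimed method simulates sound propagation between many source and listener locations, separates each impulse response into coherent and incoherent components that overlap in time, and encodes perceptual acoustic parameters from each component. The sketch below illustrates one plausible reading of that pipeline for a single simplified (mono, non-directional) impulse response. All function names, the 50 ms mixing-time split, the onset threshold, and the particular parameters extracted (onset delay, loudness, a T30-style decay time) are assumptions chosen for demonstration; they are not taken from the patent's actual encoder.

"""Illustrative sketch only: split a simulated impulse response into an
early (coherent) part and a late (incoherent) part, then derive a few
example perceptual parameters from each component."""

import numpy as np


def split_coherent_incoherent(ir, fs, mixing_time_ms=50.0):
    """Split an impulse response at an assumed mixing time.

    Energy before the split is treated as coherent (direct sound and early
    reflections); energy after it as incoherent (diffuse reverberation).
    Both segments are returned at the original length so they overlap in
    time when summed back together.
    """
    peak = np.max(np.abs(ir))
    onset = np.argmax(np.abs(ir) > 0.01 * peak)      # first significant arrival
    split = onset + int(mixing_time_ms * 1e-3 * fs)
    coherent = np.zeros_like(ir)
    incoherent = np.zeros_like(ir)
    coherent[:split] = ir[:split]
    incoherent[split:] = ir[split:]
    return coherent, incoherent


def coherent_parameters(coherent, fs):
    """Example perceptual parameters for the coherent component."""
    peak = np.max(np.abs(coherent))
    onset = np.argmax(np.abs(coherent) > 0.01 * peak)
    delay_s = onset / fs                              # propagation delay
    loudness_db = 10.0 * np.log10(np.sum(coherent ** 2) + 1e-12)
    return {"onset_delay_s": delay_s, "loudness_db": loudness_db}


def incoherent_parameters(incoherent, fs):
    """Example perceptual parameters for the incoherent (reverberant) component."""
    energy = np.cumsum(incoherent[::-1] ** 2)[::-1]   # Schroeder backward integral
    edc_db = 10.0 * np.log10(energy / (energy[0] + 1e-12) + 1e-12)
    # Fit a line over the -5 dB..-35 dB decay range, extrapolate to -60 dB.
    idx = np.where((edc_db <= -5.0) & (edc_db >= -35.0))[0]
    slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)
    decay_t60_s = -60.0 / slope
    loudness_db = 10.0 * np.log10(np.sum(incoherent ** 2) + 1e-12)
    return {"decay_t60_s": decay_t60_s, "loudness_db": loudness_db}


if __name__ == "__main__":
    fs = 48000
    t = np.arange(int(0.5 * fs)) / fs
    # Toy impulse response: a direct impulse, one reflection, then a noise-like decay.
    ir = np.zeros_like(t)
    ir[240] = 1.0                                     # direct sound after 5 ms
    ir[720] = 0.5                                     # one early reflection at 15 ms
    rng = np.random.default_rng(0)
    tail = (t >= 0.015) * np.exp(-(t - 0.015) / 0.12)
    ir += 0.05 * rng.standard_normal(t.size) * tail   # diffuse reverberant tail
    coh, inc = split_coherent_incoherent(ir, fs)
    print(coherent_parameters(coh, fs))
    print(incoherent_parameters(inc, fs))

In the method as claimed, an analysis of this kind would be applied to directional impulse responses for multiple source and listener locations in the scene, with the resulting first and second perceptual acoustic parameters encoded and output.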