US 11,812,254 B2
Generating scene-aware audio using a neural network-based acoustic analysis
Zhenyu Tang, Greenbelt, MD (US); Timothy Langlois, Seattle, WA (US); Nicholas Bryan, Belmont, CA (US); and Dingzeyu Li, Seattle, WA (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Nov. 1, 2021, as Appl. No. 17/515,918.
Application 17/515,918 is a continuation of application No. 16/674,924, filed on Nov. 5, 2019, granted, now Pat. No. 11,190,898.
Prior Publication US 2022/0060842 A1, Feb. 24, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. H04S 7/00 (2006.01); G06N 3/08 (2023.01); G06N 3/04 (2023.01)
CPC H04S 7/305 (2013.01) [G06N 3/04 (2013.01); G06N 3/08 (2013.01); H04S 7/307 (2013.01); H04S 2400/11 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
determining, by at least one processor and based on an audio recording within an environment, a first energy curve by utilizing an audio simulation model to estimate sound paths within the environment by simulating paths of a plurality of audio rays via ray tracing according to an environment geometry of the environment and a reverberation decay time of the environment within a three-dimensional virtual representation generated for the environment;
modifying, by the at least one processor, material parameters of materials of surfaces within the three-dimensional virtual representation of the environment based on a difference between the first energy curve and a second energy curve corresponding to the reverberation decay time of the environment; and
generating, by the at least one processor, an audio sample based on the environment geometry, the material parameters, and an environment equalization of the environment.
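The claimed method above traces audio rays through a virtual room, compares the resulting energy curve against a target curve derived from the environment's reverberation decay time, and adjusts surface material parameters to close the gap. The following sketch illustrates that loop in highly simplified form: a statistical (mean-free-path) ray model in place of full geometric ray tracing, a single scalar absorption coefficient in place of per-surface material parameters, and bisection in place of the patent's optimization. All function names and parameters are illustrative assumptions, not the patented implementation.

```python
import random

def simulate_energy_curve(room_dims, absorption, n_rays=200,
                          max_time=1.0, dt=0.01, seed=0):
    """Monte Carlo stand-in for ray tracing: each ray flies a random
    free path between wall bounces and loses energy at every bounce."""
    rng = random.Random(seed)
    c = 343.0  # speed of sound, m/s
    bins = [0.0] * int(max_time / dt)
    # Mean free path of a box-shaped room: 4 * volume / surface area.
    lx, ly, lz = room_dims
    volume = lx * ly * lz
    surface = 2.0 * (lx * ly + ly * lz + lx * lz)
    mfp = 4.0 * volume / surface
    for _ in range(n_rays):
        t, e = 0.0, 1.0
        while t < max_time and e > 1e-6:
            t += rng.expovariate(1.0 / mfp) / c  # time to next bounce
            e *= (1.0 - absorption)              # energy lost at the wall
            idx = int(t / dt)
            if idx < len(bins):
                bins[idx] += e
    return bins

def rt60_from_curve(curve, dt=0.01):
    """Decay time from the Schroeder backward integral: time for the
    integrated energy to fall 60 dB below its initial value."""
    edc, acc = [], 0.0
    for e in reversed(curve):
        acc += e
        edc.append(acc)
    edc.reverse()
    if edc[0] == 0.0:
        return float("inf")
    for i, v in enumerate(edc):
        if v <= edc[0] * 1e-6:  # -60 dB
            return i * dt
    return len(edc) * dt

def fit_absorption(room_dims, target_rt60, iters=30):
    """Adjust the material parameter (here one absorption coefficient)
    until the simulated energy curve matches the target decay time."""
    lo, hi = 0.01, 0.99
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        rt = rt60_from_curve(simulate_energy_curve(room_dims, mid))
        if rt > target_rt60:
            lo = mid  # room too reverberant: increase absorption
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A more absorptive room decays faster, so the fit is monotone and bisection suffices for this one-parameter toy; the patent instead adjusts per-surface material parameters based on the difference between the simulated and target energy curves.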