US 12,328,565 B2
Methods and apparatus for rendering audio objects
Antonio Mateos Sole, Barcelona (ES); and Nicolas R. Tsingos, San Francisco, CA (US)
Assigned to Dolby Laboratories Licensing Corporation, San Francisco, CA (US); and Dolby International AB, Dublin (IE)
Filed by Dolby Laboratories Licensing Corporation, San Francisco, CA (US); and Dolby International AB, Dublin (IE)
Filed on Apr. 1, 2024, as Appl. No. 18/623,762.
Application 18/623,762 is a continuation of application No. 18/099,658, filed on Jan. 20, 2023, granted, now 11,979,733.
Application 18/099,658 is a continuation of application No. 17/329,094, filed on May 24, 2021, granted, now 11,564,051, issued on Jan. 24, 2023.
Application 17/329,094 is a continuation of application No. 16/868,861, filed on May 7, 2020, granted, now 11,019,447, issued on May 25, 2021.
Application 16/868,861 is a continuation of application No. 15/894,626, filed on Feb. 12, 2018, granted, now 10,652,684, issued on May 12, 2020.
Application 15/894,626 is a continuation of application No. 15/585,935, filed on May 3, 2017, granted, now 9,992,600, issued on Jun. 5, 2018.
Application 15/585,935 is a continuation of application No. 14/770,709, granted, now 9,674,630, issued on Jun. 6, 2017, filed as application No. PCT/US2014/022793 on Mar. 10, 2014.
Claims priority of provisional application 61/833,581, filed on Jun. 11, 2013.
Claims priority of application No. ES201330461 (ES), filed on Mar. 28, 2013.
Prior Publication US 2024/0334145 A1, Oct. 3, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04R 5/00 (2006.01); H04S 3/00 (2006.01); H04S 5/00 (2006.01); H04S 7/00 (2006.01); H04R 5/02 (2006.01)
CPC H04S 7/30 (2013.01) [H04S 3/008 (2013.01); H04S 5/005 (2013.01); H04S 2400/01 (2013.01); H04S 2400/11 (2013.01); H04S 2400/13 (2013.01); H04S 2400/15 (2013.01)] 3 Claims
OG exemplary drawing
 
1. A method for rendering input audio including an audio object and metadata, wherein the metadata includes audio object size metadata and audio object position metadata corresponding to the audio object, the method comprising:
receiving the audio object size metadata and the audio object position metadata;
receiving content type metadata associated with the audio object, wherein the content type metadata indicates dialog associated with the audio object;
determining at least a virtual audio object based on the input audio, the audio object size metadata and the audio object position metadata;
determining a location of the virtual audio object based on at least one of the audio object size metadata and the audio object position metadata; and
rendering the audio object to one or more speaker feeds based on the content type metadata, wherein the rendering also comprises rendering the virtual audio object based on at least the location of the virtual audio object.
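
The exemplary claim above outlines object-based rendering: the audio object size and position metadata yield one or more virtual audio objects whose locations drive the speaker-feed gains, and content type metadata (dialog) also informs the rendering. The Python sketch below illustrates only that general flow and is not the claimed or disclosed implementation; the five-speaker layout, the ring placement of virtual objects, the inverse-distance gain law, and the dialog gain adjustment are assumptions made solely for illustration.

# Illustrative sketch only (not the patented method): spread an audio object
# across virtual objects according to size metadata and pan each to speaker
# feeds with a simple inverse-distance gain law. All parameters are hypothetical.
import numpy as np

# Hypothetical 5-speaker layout: normalized room positions (x, y, z)
SPEAKERS = np.array([
    [-1.0,  1.0, 0.0],   # L
    [ 1.0,  1.0, 0.0],   # R
    [ 0.0,  1.0, 0.0],   # C
    [-1.0, -1.0, 0.0],   # Ls
    [ 1.0, -1.0, 0.0],   # Rs
])

def virtual_object_positions(position, size, n_virtual=8):
    """Place virtual audio objects on a ring around the object position,
    with the ring radius scaled by the object size metadata (0..1)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_virtual, endpoint=False)
    offsets = np.stack([np.cos(angles), np.sin(angles), np.zeros(n_virtual)], axis=1)
    return np.clip(position + size * offsets, -1.0, 1.0)

def pan_gains(position, rolloff=2.0):
    """Toy inverse-distance panner: one gain per speaker, normalized to unit power."""
    d = np.linalg.norm(SPEAKERS - position, axis=1)
    g = 1.0 / (d + 1e-3) ** rolloff
    return g / np.sqrt(np.sum(g ** 2))

def render_object(samples, position, size, content_type="other"):
    """Return speaker feeds (n_speakers, n_samples) for one audio object."""
    position = np.asarray(position, dtype=float)
    # The object itself plus virtual objects derived from its size metadata.
    points = np.vstack([position[None, :], virtual_object_positions(position, size)])
    gains = np.mean([pan_gains(p) for p in points], axis=0)
    if content_type == "dialog":           # hypothetical content-type handling
        gains = gains * 1.2                # e.g. a small dialog emphasis
    return gains[:, None] * samples[None, :]

if __name__ == "__main__":
    fs = 48000
    tone = 0.1 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
    feeds = render_object(tone, position=[0.2, 0.5, 0.0], size=0.3,
                          content_type="dialog")
    print(feeds.shape)  # (5, 48000): one feed per speaker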