US 12,149,762 B2
Fragment-aligned audio coding
Bernd Czelhan, Happurg (DE); Harald Fuchs, Roettenbach (DE); Ingo Hofmann, Nuremberg (DE); Herbert Thoma, Erlangen (DE); and Stephan Schreiner, Birgland (DE)
Assigned to Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V., Munich (DE)
Filed by Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V., Munich (DE)
Filed on Aug. 9, 2023, as Appl. No. 18/447,279.
Application 18/447,279 is a division of application No. 17/541,188, filed on Dec. 2, 2021, granted, now 11,765,415.
Application 17/541,188 is a continuation of application No. 16/784,763, filed on Feb. 7, 2020, granted, now 11,218,754, issued on Jan. 4, 2022.
Application 16/784,763 is a division of application No. 15/697,215, filed on Sep. 6, 2017, granted, now 10,595,066, issued on Mar. 17, 2020.
Application 15/697,215 is a continuation of application No. PCT/EP2016/054916, filed on Mar. 8, 2016.
Claims priority of application No. 15158317 (EP), filed on Mar. 9, 2015.
Prior Publication US 2023/0388565 A1, Nov. 30, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 21/242 (2011.01); G10L 19/16 (2013.01); G10L 25/57 (2013.01); H04N 19/40 (2014.01); H04N 21/233 (2011.01); H04N 21/234 (2011.01); H04N 21/2343 (2011.01); H04N 21/845 (2011.01); G10L 21/055 (2013.01)
CPC H04N 21/242 (2013.01) [G10L 19/167 (2013.01); G10L 25/57 (2013.01); H04N 19/40 (2014.11); H04N 21/2335 (2013.01); H04N 21/23424 (2013.01); H04N 21/23439 (2013.01); H04N 21/8456 (2013.01); G10L 21/055 (2013.01)] 7 Claims
OG exemplary drawing
 
1. A method for decoding audio content from an encoded data stream,
wherein the encoded data stream comprises encoded representations of temporal fragments of the audio content, each of which has encoded thereinto a respective temporal fragment of the audio content in units of audio frames temporally aligned to a beginning of the respective temporal fragment so that the beginning of the respective temporal fragment coincides with a beginning of a first audio frame of the audio frames,
wherein the method comprises
decoding reconstructed versions of the temporal fragments of the audio content from the encoded representations of the temporal fragments; and
joining, for playout, the reconstructed versions of the temporal fragments of the audio content together by
truncating the reconstructed version of a predetermined temporal fragment at a portion of a trailing audio frame of the audio frames in units of which the predetermined temporal fragment is coded into the encoded representation of the predetermined temporal fragment, which temporally exceeds a trailing end of the predetermined temporal fragment,
determining the portion of the trailing audio frame on the basis of truncation information in the encoded data stream, wherein the truncation information comprises
a frame length value indicating a temporal length of the audio frames in units of which the predetermined temporal fragment is coded into the encoded representation of the predetermined temporal fragment, and a fragment length value indicating a temporal length of the predetermined temporal fragment from the beginning of the reconstructed version of the predetermined fragment to the fragment boundary with which the beginning of the reconstructed version of the succeeding temporal fragment coincides, and/or
a truncation length value indicating a temporal length of the portion of the trailing audio frame or the difference between the temporal length of the portion of the trailing audio frame and the temporal length of the trailing audio frame.
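The truncation step recited in the claim can be illustrated with a short sketch: a fragment is coded in whole audio frames aligned to its beginning, so the trailing frame may overshoot the fragment boundary, and the overshoot is derived from the signaled frame-length and fragment-length values. All function and variable names below are hypothetical, and lengths are assumed to be expressed in samples; the patent does not prescribe this particular API.

```python
import math

def frames_per_fragment(frame_len: int, fragment_len: int) -> int:
    """Number of whole audio frames needed to cover the fragment
    (both lengths in samples)."""
    return math.ceil(fragment_len / frame_len)

def truncation_from_lengths(frame_len: int, fragment_len: int) -> int:
    """Portion of the trailing frame that temporally exceeds the fragment
    boundary, computed from the frame length value and fragment length
    value of the truncation information."""
    return frames_per_fragment(frame_len, fragment_len) * frame_len - fragment_len

def join_fragments(decoded_fragments, fragment_lens):
    """Join reconstructed fragments for playout, truncating each one at
    its trailing frame so that the next fragment's beginning coincides
    with the fragment boundary."""
    out = []
    for samples, frag_len in zip(decoded_fragments, fragment_lens):
        out.extend(samples[:frag_len])  # drop the overshooting trailing portion
    return out

# Example: 1024-sample frames and a fragment of exactly 48000 samples.
# 47 frames (48128 samples) are needed, so 128 samples are truncated.
print(truncation_from_lengths(1024, 48000))  # -> 128
```

Equivalently, the encoder may signal a truncation length value directly (the second alternative in the claim), in which case the decoder skips the computation above and simply discards that many samples from the trailing frame.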