CPC H04N 21/23106 (2013.01) [H04N 21/2393 (2013.01); H04N 21/251 (2013.01); G06F 12/0871 (2013.01); G06F 2212/602 (2013.01); G06F 2212/604 (2013.01); G06F 2212/6028 (2013.01); H04L 67/568 (2022.05); H04L 67/62 (2022.05)] | 20 Claims |
1. A method comprising:
obtaining, by a processing system including at least one processor, a request for a first chunk of a first video;
determining, by the processing system, that the first chunk is not stored in a cache;
applying, by the processing system in response to the determining that the first chunk is not stored in the cache, a machine learning classifier to predict whether the first chunk will be re-requested within a time horizon, wherein the machine learning classifier is trained in accordance with a set of features associated with a plurality of chunks of a plurality of videos;
storing, by the processing system, the first chunk in the cache, when it is predicted via the machine learning classifier that the first chunk will be re-requested within the time horizon; and
evicting, by the processing system, at least a second chunk from the cache in accordance with an eviction process, wherein the eviction process includes:
identifying the second chunk as having a longest next request estimate as compared to next request estimates of a plurality of chunks in the cache, wherein for a given chunk of the plurality of chunks in the cache, the eviction process weights a next request estimate for the given chunk in accordance with a relative popularity of a track of the given chunk as compared to other tracks for a same video to which the given chunk belongs; and
evicting the second chunk from the cache when it is determined that the second chunk has the longest next request estimate.
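The admission and eviction steps recited in claim 1 can be sketched in code. This is an illustrative reading only, not the patented implementation: `predict_rerequest` is a hypothetical stand-in for the trained machine learning classifier, and the per-chunk `next_request_estimate` and `track_popularity` values are assumed to be supplied by the caller. Dividing the next-request estimate by relative track popularity is one plausible way to "weight" the estimate so that chunks from less popular tracks look farther away and are evicted first.

```python
class ChunkCache:
    """Sketch of the claimed cache policy: ML-gated admission on a miss,
    eviction of the chunk with the longest popularity-weighted
    next-request estimate. Illustrative only."""

    def __init__(self, capacity, predict_rerequest):
        self.capacity = capacity
        # Hypothetical classifier: features -> True if the chunk is
        # predicted to be re-requested within the time horizon.
        self.predict_rerequest = predict_rerequest
        # chunk_id -> (next_request_estimate, relative_track_popularity)
        self.entries = {}

    def request(self, chunk_id, features, next_request_estimate,
                track_popularity):
        if chunk_id in self.entries:
            return "hit"
        # Cache miss: apply the classifier to decide admission.
        if not self.predict_rerequest(features):
            return "miss-not-admitted"
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[chunk_id] = (next_request_estimate, track_popularity)
        return "miss-admitted"

    def _evict(self):
        # Weight each chunk's next-request estimate by the relative
        # popularity of its track within the same video; a less popular
        # track lengthens the effective estimate (assumed weighting).
        def weighted_estimate(item):
            _, (estimate, popularity) = item
            return estimate / max(popularity, 1e-9)

        victim, _ = max(self.entries.items(), key=weighted_estimate)
        self.entries.pop(victim)
```

For example, with capacity 2 and chunks whose weighted estimates are 10 and 5, admitting a third chunk evicts the chunk with the estimate of 10, since it has the longest weighted next-request estimate.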