US 12,464,034 B2
Content completion detection for media content
Jonathan Bennett-James, Wales (GB); Bineet Kumar Singh, Karnataka (IN); and Nishant Kumar, Karnataka (IN)
Assigned to NAGRAVISION SARL, Cheseaux-sur-Lausanne (CH)
Filed by NAGRAVISION SARL, Cheseaux-sur-Lausanne (CH)
Filed on Jan. 26, 2024, as Appl. No. 18/423,488.
Application 18/423,488 is a continuation of application No. 17/543,377, filed on Dec. 6, 2021, granted, now 11,930,063.
Claims priority of provisional application 63/123,259, filed on Dec. 9, 2020.
Prior Publication US 2024/0244098 A1, Jul. 18, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 65/613 (2022.01)
CPC H04L 65/613 (2022.05) 20 Claims
OG exemplary drawing
 
1. A method of processing media content, the method comprising:
monitoring a first channel, while a device is tuned to a second channel, to obtain a first media frame and a second media frame associated with the first channel, the second media frame occurring after the first media frame;
segmenting the first media frame to generate a first background region and one or more first foreground regions, wherein the first background region represents a background of a scene represented by the first media frame and wherein the one or more first foreground regions represent a foreground of the scene represented by the first media frame;
segmenting the second media frame to generate a second background region and one or more second foreground regions, wherein the second background region represents a background of a scene represented by the second media frame and wherein the one or more second foreground regions represent a foreground of the scene represented by the second media frame;
comparing at least one of: the first background region and the second background region, or the one or more first foreground regions and the one or more second foreground regions, to generate a first tag indicating that a change above a threshold has occurred in the second media frame relative to the first media frame;
processing the second media frame, using a machine-learning model, to generate a second tag indicating that media content of the second media frame is associated with a particular type of media content, wherein the machine-learning model is trained to determine a number of probabilities that a frame is associated with a number of respective classes; and
determining, based on a combined value based on the first tag and the second tag, to keep the device tuned to the second channel.
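
The claim recites a two-tag pipeline: a scene-change tag derived from segmented background/foreground regions of consecutive frames on the monitored channel, a content-type tag from a trained classifier, and a tuning decision driven by a value combining both. The Python sketch below is only an illustrative reading of that pipeline, not the patented implementation; it assumes OpenCV's MOG2 background subtractor as a stand-in for the segmentation step, a placeholder in place of the trained machine-learning model, and hypothetical thresholds (CHANGE_THRESHOLD, COMBINED_THRESHOLD) and class labels.

    # Hypothetical sketch of the claimed pipeline (assumptions noted above).
    import cv2
    import numpy as np

    CHANGE_THRESHOLD = 0.25   # assumed fraction of foreground pixels that must change
    COMBINED_THRESHOLD = 1.0  # assumed decision threshold on the combined value

    subtractor = cv2.createBackgroundSubtractorMOG2()

    def segment(frame):
        """Split a frame into a foreground mask and a background estimate."""
        fg_mask = subtractor.apply(frame)     # foreground region(s)
        bg = subtractor.getBackgroundImage()  # background region
        return fg_mask, bg

    def change_tag(fg_prev, fg_curr):
        """First tag: 1 if the foreground changed above a threshold, else 0."""
        diff = np.count_nonzero(fg_prev != fg_curr) / fg_prev.size
        return 1 if diff > CHANGE_THRESHOLD else 0

    def content_tag(frame):
        """Second tag: probability that the frame shows a particular content type.

        Placeholder for a trained classifier that outputs per-class probabilities.
        """
        probs = {"program": 0.2, "advertisement": 0.8}  # stand-in output
        return probs["advertisement"]

    def keep_tuned_to_second_channel(frame_1, frame_2):
        """Combine the two tags and decide whether to stay on the second channel."""
        fg_1, _bg_1 = segment(frame_1)
        fg_2, _bg_2 = segment(frame_2)
        combined = change_tag(fg_1, fg_2) + content_tag(frame_2)
        # Stay tuned while the monitored channel still appears to show the
        # particular content type (e.g., an advertisement break).
        return combined < COMBINED_THRESHOLD

In this reading, the comparison of segmented regions and the classifier output are fused by simple addition; the claim itself only requires some combined value of the two tags, so the fusion rule shown here is an assumption.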