US 12,439,111 B2
Automated video cropping
Apurvakumar Dilipkumar Kansara, San Jose, CA (US); Sanford Holsapple, Sherman Oaks, CA (US); Arica Westadt, Los Angeles, CA (US); Kunal Bisla, Pleasanton, CA (US); and Sameer Shah, Fremont, CA (US)
Assigned to Netflix, Inc., Los Gatos, CA (US)
Filed by Netflix, Inc., Los Gatos, CA (US)
Filed on Apr. 14, 2023, as Appl. No. 18/301,199.
Application 18/301,199 is a continuation of application No. 18/045,790, filed on Oct. 11, 2022, granted, now 11,700,404.
Application 18/045,790 is a continuation of application No. 17/063,445, filed on Oct. 5, 2020, granted, now 11,477,533.
Application 17/063,445 is a continuation of application No. 16/457,586, filed on Jun. 28, 2019, granted, now 10,834,465, issued on Nov. 10, 2020.
Prior Publication US 2023/0300392 A1, Sep. 21, 2023
Int. Cl. H04N 21/25 (2011.01); G06V 10/25 (2022.01); G06V 20/40 (2022.01); H04N 21/258 (2011.01); H04N 21/431 (2011.01); H04N 21/4402 (2011.01); H04N 21/4728 (2011.01); H04N 21/485 (2011.01)
CPC H04N 21/25825 (2013.01) [G06V 10/25 (2022.01); G06V 20/40 (2022.01); G06V 20/46 (2022.01); G06V 20/49 (2022.01); H04N 21/4318 (2013.01); H04N 21/440272 (2013.01); H04N 21/4728 (2013.01); H04N 21/4854 (2013.01); H04N 21/4858 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
identifying one or more objects within a video scene;
determining a semantic context for at least one of the identified objects in the video scene;
generating a video crop that will include at least one specified object that is defined according to the determined semantic context, wherein the generated video crop includes a maximum number of objects that are part of the determined semantic context;
applying the generated video crop to the video scene;
tracking which video crops were generated and applied to one or more of the video scenes; and
comparing at least one cropped version of the video scene to a user-cropped version of the same video scene to identify one or more differences in cropping.
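The claim recites an algorithmic sequence: detect objects, infer a semantic context, generate a crop that keeps as many context objects as possible, apply and log the crop, and compare against a user-authored crop. The Python sketch below illustrates one possible reading of those steps; the majority-label context heuristic, the union-box crop generation, the crop log, and the IoU-based comparison are illustrative assumptions, not the patented implementation.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class DetectedObject:
    label: str                       # e.g. "face", "person", "text"
    box: Tuple[int, int, int, int]   # (x, y, width, height) in pixels


@dataclass
class CropRecord:
    scene_id: str
    crop: Tuple[int, int, int, int]
    context: str


def determine_semantic_context(objects: List[DetectedObject]) -> str:
    # Assumed heuristic: the most frequent object label stands in for the
    # scene's semantic context (e.g. a dialogue scene dominated by faces).
    counts: Dict[str, int] = {}
    for obj in objects:
        counts[obj.label] = counts.get(obj.label, 0) + 1
    return max(counts, key=counts.get) if counts else "unknown"


def generate_crop(objects: List[DetectedObject], context: str,
                  aspect: float, frame_w: int, frame_h: int) -> Tuple[int, int, int, int]:
    # Cover as many context objects as possible: take the union of their
    # boxes, expand to the target aspect ratio, and clamp to the frame.
    relevant = [o for o in objects if o.label == context] or objects
    x0 = min(o.box[0] for o in relevant)
    y0 = min(o.box[1] for o in relevant)
    x1 = max(o.box[0] + o.box[2] for o in relevant)
    y1 = max(o.box[1] + o.box[3] for o in relevant)
    w, h = x1 - x0, y1 - y0
    if w / max(h, 1) < aspect:
        w = int(h * aspect)
    else:
        h = int(w / aspect)
    x = max(0, min(x0, frame_w - w))
    y = max(0, min(y0, frame_h - h))
    return (x, y, min(w, frame_w), min(h, frame_h))


def iou(a: Tuple[int, int, int, int], b: Tuple[int, int, int, int]) -> float:
    # Intersection-over-union between two (x, y, w, h) crops, used here to
    # quantify differences from a user-authored crop of the same scene.
    ax0, ay0, aw, ah = a
    bx0, by0, bw, bh = b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax0 + aw, bx0 + bw), min(ay0 + ah, by0 + bh)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0


crop_log: List[CropRecord] = []   # tracks which crops were generated and applied


def crop_scene(scene_id: str, objects: List[DetectedObject],
               frame_w: int, frame_h: int, aspect: float,
               user_crop: Optional[Tuple[int, int, int, int]] = None) -> Tuple[int, int, int, int]:
    context = determine_semantic_context(objects)
    crop = generate_crop(objects, context, aspect, frame_w, frame_h)
    crop_log.append(CropRecord(scene_id, crop, context))
    if user_crop is not None:
        print(f"{scene_id}: {1 - iou(crop, user_crop):.0%} difference from user crop")
    return crop


if __name__ == "__main__":
    scene = [DetectedObject("face", (100, 80, 60, 60)),
             DetectedObject("face", (400, 90, 60, 60)),
             DetectedObject("text", (10, 400, 200, 30))]
    print(crop_scene("scene-001", scene, frame_w=1920, frame_h=1080,
                     aspect=9 / 16, user_crop=(80, 0, 600, 1066)))

Run as a script, the example crops a hypothetical 1920x1080 scene to a vertical 9:16 window around the two detected faces and reports how far that window departs from a supplied user crop.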