US 11,863,869 B1
Event detection using motion extracted image comparison
Lorenzo Sorgi, Seattle, WA (US); and Eliezer Rosengaus, Seattle, WA (US)
Assigned to Amazon Technologies, Inc., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on Apr. 29, 2021, as Appl. No. 17/244,655.
Application 17/244,655 is a continuation of application No. 16/698,547, filed on Nov. 27, 2019, granted, now Pat. No. 10,999,506.
Application 16/698,547 is a continuation of application No. 15/831,253, filed on Dec. 4, 2017, granted, now Pat. No. 10,498,963, issued on Dec. 3, 2019.
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 23/68 (2023.01); G06T 5/50 (2006.01); G06T 5/00 (2006.01); G06T 7/13 (2017.01); G06T 7/215 (2017.01); H04N 23/741 (2023.01); H04N 25/58 (2023.01)
CPC H04N 23/683 (2023.01) [G06T 5/009 (2013.01); G06T 5/50 (2013.01); G06T 7/13 (2017.01); G06T 7/215 (2017.01); H04N 23/741 (2023.01); H04N 25/58 (2023.01); G06T 2207/20208 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method, comprising:
generating a first motion extracted image by at least:
comparing at least a first portion of a first image data generated by an imaging component at a first time with at least a second portion of a second image data generated by the imaging component at a second time to determine a first pixel of the first image data corresponding to an object that has moved in position between the first time and the second time;
extracting the first pixel from the first image data; and
including at least a first portion of a baseline image to fill the extracted first pixel;
generating a second motion extracted image by at least:
comparing at least a third portion of a third image data generated by the imaging component at a third time with at least a fourth portion of a fourth image data generated by the imaging component at a fourth time to determine a second pixel of the third image data corresponding to an object that has moved in position between the third time and the fourth time;
extracting the second pixel from the third image data; and
including at least a second portion of the baseline image to fill the extracted second pixel;
comparing the first motion extracted image and the second motion extracted image to determine a difference between the first motion extracted image and the second motion extracted image, wherein the difference is determined based at least in part on a depth value difference between one or more pixels of the first motion extracted image and one or more pixels of the second motion extracted image; and
in response to determining the depth value difference, generating an event notification indicative of an event.
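To illustrate the pipeline recited in claim 1, the following is a minimal sketch, not the patented implementation: it assumes each frame from the imaging component is a per-pixel depth map held in a 2-D NumPy array, and the function names, array shapes, and thresholds (MOTION_THRESHOLD, EVENT_DEPTH_THRESHOLD) are illustrative assumptions rather than values taken from the patent. Pixels that change between two capture times are treated as a moving object, extracted, and filled from a baseline image; the two resulting motion extracted images are then compared by their depth values, and a sufficiently large depth value difference produces an event notification.

# Sketch only; frames are assumed to be 2-D NumPy depth maps (values in meters).
# Thresholds and names are hypothetical, not taken from the patent.
import numpy as np

MOTION_THRESHOLD = 0.10        # per-pixel depth change treated as object motion
EVENT_DEPTH_THRESHOLD = 0.25   # depth difference between motion extracted images that signals an event


def generate_motion_extracted_image(frame_a, frame_b, baseline):
    """Compare two frames captured at different times, extract the pixels of any
    object that moved between the two times, and fill those pixels from the
    baseline image of the scene."""
    moved = np.abs(frame_a - frame_b) > MOTION_THRESHOLD
    mei = frame_a.copy()
    mei[moved] = baseline[moved]   # fill the extracted pixels with baseline image data
    return mei


def detect_event(first_mei, second_mei):
    """Compare two motion extracted images; if the depth value difference between
    their pixels is large enough, return an event notification (here, a dict)."""
    depth_difference = np.abs(first_mei - second_mei)
    if depth_difference.max() > EVENT_DEPTH_THRESHOLD:
        return {"event": "scene_change",
                "max_depth_difference": float(depth_difference.max())}
    return None


if __name__ == "__main__":
    # Illustrative depth frames: a person moving through the scene and a newly placed item.
    baseline = np.full((480, 640), 3.0)   # empty scene, background about 3 m away

    frame_1 = baseline.copy()
    frame_1[100:150, 200:260] = 1.0       # person at the first time
    frame_2 = baseline.copy()
    frame_2[100:150, 300:360] = 1.0       # person has moved by the second time

    frame_3 = baseline.copy()
    frame_3[100:150, 300:360] = 1.0       # person at the third time
    frame_3[400:420, 500:520] = 2.5       # item now present in the scene
    frame_4 = baseline.copy()
    frame_4[100:150, 380:440] = 1.0       # person has moved again by the fourth time
    frame_4[400:420, 500:520] = 2.5       # item has not moved

    first_mei = generate_motion_extracted_image(frame_1, frame_2, baseline)
    second_mei = generate_motion_extracted_image(frame_3, frame_4, baseline)
    print(detect_event(first_mei, second_mei))

In this worked example the moving person is removed from both motion extracted images and replaced with baseline data, so the only remaining depth value difference between the two images comes from the newly placed item, which is what triggers the event notification.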