US 11,886,489 B2
System and method of identifying visual objects
David Petrou, Brooklyn, NY (US); Matthew Bridges, New Providence, NJ (US); Shailesh Nalawadi, Morgan Hill, CA (US); Hartwig Adam, Marina del Rey, CA (US); Matthew R. Casey, San Francisco, CA (US); Hartmut Neven, Malibu, CA (US); and Andrew Harp, New York, NY (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Mar. 24, 2023, as Appl. No. 18/189,776.
Application 18/189,776 is a continuation of application No. 17/157,022, filed on Jan. 25, 2021, granted, now 11,615,136.
Application 17/157,022 is a continuation of application No. 16/744,998, filed on Jan. 16, 2020, granted, now 10,902,055, issued on Jan. 26, 2021.
Application 16/744,998 is a continuation of application No. 16/563,375, filed on Sep. 6, 2019, granted, now 10,552,476, issued on Feb. 4, 2020.
Application 16/563,375 is a continuation of application No. 16/243,660, filed on Jan. 9, 2019, granted, now 10,409,855, issued on Sep. 10, 2019.
Application 16/243,660 is a continuation of application No. 15/247,542, filed on Aug. 25, 2016, granted, now 10,198,457, issued on Feb. 5, 2019.
Application 15/247,542 is a continuation of application No. 14/541,437, filed on Nov. 14, 2014, granted, now 9,442,957, issued on Sep. 13, 2016.
Application 14/541,437 is a continuation of application No. 13/693,665, filed on Dec. 4, 2012, granted, now 8,891,907, issued on Nov. 18, 2014.
Claims priority of provisional application 61/567,611, filed on Dec. 6, 2011.
Prior Publication US 2023/0237090 A1, Jul. 27, 2023
Int. Cl. G06F 16/583 (2019.01); G06F 16/535 (2019.01); G06F 16/50 (2019.01); G06F 16/9535 (2019.01); G06V 10/10 (2022.01); G06V 10/56 (2022.01); G06V 10/96 (2022.01); G06V 20/20 (2022.01); G06V 20/62 (2022.01); G06V 30/142 (2022.01); G06F 18/2413 (2023.01); H04N 23/00 (2023.01); G06V 30/19 (2022.01); G06F 16/9538 (2019.01); G06F 3/048 (2013.01)
CPC G06F 16/535 (2019.01) [G06F 3/048 (2013.01); G06F 16/50 (2019.01); G06F 16/5838 (2019.01); G06F 16/5846 (2019.01); G06F 16/9535 (2019.01); G06F 16/9538 (2019.01); G06F 18/24133 (2023.01); G06V 10/10 (2022.01); G06V 10/56 (2022.01); G06V 10/96 (2022.01); G06V 20/20 (2022.01); G06V 20/63 (2022.01); G06V 30/142 (2022.01); G06V 30/19173 (2022.01); H04N 23/00 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
displaying, by a user computing device comprising one or more processors, a video stream from a camera device associated with the user computing device, wherein the video stream comprises a plurality of image frames, and wherein the plurality of image frames collectively depict a plurality of objects;
providing, by the user computing device, one or more image frames of the plurality of image frames to an object recognition system;
in response to providing the one or more image frames, obtaining, by the user computing device from the object recognition system, annotation data descriptive of an object of the plurality of objects;
concurrently displaying, by the user computing device, at least one image frame of the plurality of image frames and the annotation data descriptive of the object;
receiving, by the user computing device from a user of the user computing device, a touch input associated with display of the annotation data descriptive of the object; and
responsive to receiving the touch input, displaying, by the user computing device, additional annotation data descriptive of the object.
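The claimed flow (stream frames, send them to a recognizer, overlay the returned annotation, and expand it on touch) can be sketched as a minimal client-side loop. This is an illustrative sketch only: the class and method names (`ObjectRecognitionSystem`, `Viewfinder`, `show_frame`, `on_touch`) and the sample annotation are hypothetical, not taken from the patent or any Google API.

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    label: str    # annotation data shown alongside the live frame
    details: str  # additional annotation data revealed after a touch


class ObjectRecognitionSystem:
    """Stand-in for the server-side recognizer the claim references."""

    def recognize(self, frame: str) -> Annotation:
        # A real system would run detection/OCR on the frame; this stub
        # returns a fixed result purely for illustration.
        return Annotation(label="Eiffel Tower",
                          details="324 m tall, completed in 1889")


class Viewfinder:
    """Client-side loop: overlay annotations on frames, expand on touch."""

    def __init__(self, recognizer: ObjectRecognitionSystem):
        self.recognizer = recognizer
        self.annotation = None
        self.display = []  # record of what was "shown", for illustration

    def show_frame(self, frame: str) -> None:
        # Claim steps 1-4: provide the frame to the recognition system,
        # then concurrently display the frame and the returned annotation.
        self.annotation = self.recognizer.recognize(frame)
        self.display.append((frame, self.annotation.label))

    def on_touch(self) -> None:
        # Claim steps 5-6: a touch associated with the displayed
        # annotation triggers display of additional annotation data.
        if self.annotation is not None:
            self.display.append(("details", self.annotation.details))


viewfinder = Viewfinder(ObjectRecognitionSystem())
viewfinder.show_frame("frame_0")
viewfinder.on_touch()
print(viewfinder.display)
```

In practice the recognition call would be asynchronous so the viewfinder keeps rendering new frames while results are pending; the synchronous call above is kept only to make the claim's ordering of steps easy to follow.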