US 12,190,285 B2
Inventory tracking system and method that identifies gestures of subjects holding inventory items
Jordan E. Fisher, San Francisco, CA (US); Nicholas J. Locascio, San Francisco, CA (US); and Michael S. Suswal, San Francisco, CA (US)
Assigned to Standard Cognition, Corp., San Francisco, CA (US)
Filed by STANDARD COGNITION, CORP., San Francisco, CA (US)
Filed on Jan. 19, 2022, as Appl. No. 17/579,465.
Application 17/579,465 is a continuation of application No. 16/519,660, filed on Jul. 23, 2019, granted, now 11,250,376.
Application 16/519,660 is a continuation-in-part of application No. 15/945,473, filed on Apr. 4, 2018, granted, now 10,474,988, issued on Nov. 12, 2019.
Application 15/945,473 is a continuation-in-part of application No. 15/907,112, filed on Feb. 27, 2018, granted, now 10,133,933, issued on Nov. 20, 2018.
Application 15/907,112 is a continuation-in-part of application No. 15/847,796, filed on Dec. 19, 2017, granted, now 10,055,853, issued on Aug. 21, 2018.
Claims priority of provisional application 62/703,785, filed on Jul. 26, 2018.
Claims priority of provisional application 62/542,077, filed on Aug. 7, 2017.
Prior Publication US 2022/0147913 A1, May 12, 2022
Int. Cl. G06Q 10/087 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06T 7/292 (2017.01); G06T 7/70 (2017.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); H04N 23/90 (2023.01)
CPC G06Q 10/087 (2013.01) [G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06T 7/292 (2017.01); G06T 7/70 (2017.01); G06V 10/764 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/28 (2022.01); H04N 23/90 (2023.01); G06T 2207/10016 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30196 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for identifying gestures in an area of real space, the method including:
using a plurality of sensors to produce respective sequences of frames of corresponding fields of view in the area of real space;
detecting subjects in the area of real space;
during a first production phase that implements a first inference engine that is pre-trained to operate in a first production mode, switching the first inference engine from a first training mode to the first production mode to identify inventory items carried by the detected subjects in the sequences of frames, wherein the first production phase includes using a sequence of frames produced by a corresponding sensor in the plurality of sensors to identify the inventory items carried by the detected subjects;
during a second production phase that implements a second inference engine that is pre-trained to operate in a second production mode, switching the second inference engine from a second training mode to the second production mode to identify gestures of the detected subjects carrying the inventory items, wherein the second production phase includes using outputs of the first inference engine over a period of time to identify gestures of the detected subjects carrying the inventory items; and
storing the identified gestures in a database.
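For illustration only, the following is a minimal Python sketch of the two-phase pipeline recited in claim 1: two pre-trained inference engines are each switched from a training mode to a production mode, the first identifies inventory items per frame, the second classifies gestures from the first engine's outputs accumulated over a period of time, and the identified gestures are stored in a database. All names (InferenceEngine, ItemEngine, GestureEngine, run_pipeline) and the placeholder per-frame logic are assumptions made for exposition, not the patented implementation; a real system would run trained neural networks over camera frames from the plurality of sensors.

    # Hypothetical sketch of the claim-1 pipeline; names and logic are
    # illustrative assumptions, not the patent's implementation.
    import sqlite3
    from collections import deque
    from typing import Deque, List, Sequence

    class InferenceEngine:
        """Toy stand-in for a pre-trained model with a train/production switch."""
        def __init__(self) -> None:
            self.production_mode = False  # starts in training mode

        def switch_to_production(self) -> None:
            # Claim language: a pre-trained engine is switched from its
            # training mode to its production mode before use.
            self.production_mode = True

    class ItemEngine(InferenceEngine):
        """First inference engine: per-frame inventory-item identification."""
        def identify_items(self, frame: Sequence[int]) -> List[str]:
            assert self.production_mode, "engine must be in production mode"
            # Placeholder logic: a real system would run a trained network
            # on the image frame instead of thresholding pixel values.
            return ["item_%d" % pixel for pixel in frame if pixel > 128]

    class GestureEngine(InferenceEngine):
        """Second inference engine: classifies gestures from the first
        engine's outputs accumulated over a period of time."""
        def identify_gesture(self, item_history: Deque[List[str]]) -> str:
            assert self.production_mode, "engine must be in production mode"
            # Placeholder heuristic: an item appearing over the window
            # reads as a "take", one disappearing reads as a "put".
            if item_history[-1] and not item_history[0]:
                return "take"
            if item_history[0] and not item_history[-1]:
                return "put"
            return "hold"

    def run_pipeline(frames: Sequence[Sequence[int]], window: int = 5) -> None:
        item_engine, gesture_engine = ItemEngine(), GestureEngine()
        item_engine.switch_to_production()      # first production phase
        gesture_engine.switch_to_production()   # second production phase

        db = sqlite3.connect(":memory:")        # stands in for the claimed database
        db.execute("CREATE TABLE gestures (frame INTEGER, gesture TEXT)")

        history: Deque[List[str]] = deque(maxlen=window)
        for i, frame in enumerate(frames):
            history.append(item_engine.identify_items(frame))
            if len(history) == window:
                # The second engine consumes the first engine's outputs
                # over a period of time; the result is then persisted.
                db.execute("INSERT INTO gestures VALUES (?, ?)",
                           (i, gesture_engine.identify_gesture(history)))
        db.commit()
        print(db.execute("SELECT * FROM gestures").fetchall())

    if __name__ == "__main__":
        # Synthetic "frames": lists of pixel intensities standing in for images.
        run_pipeline([[0] * 4, [0] * 4, [200, 0, 0, 0], [200, 0, 0, 0],
                      [200, 0, 0, 0], [200, 0, 0, 0]])

In this sketch the deque models the claim's "outputs of the first inference engine over a period of time", and the in-memory SQLite table models "storing the identified gestures in a database".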