US 12,093,022 B2
Systems and methods for automatic job assignment
Prasad Narasimha Akella, Palo Alto, CA (US); Ananth Uggirala, Mountain View, CA (US); Krishnendu Chaudhury, Saratoga, CA (US); Sameer Gupta, Palo Alto, CA (US); and Sujay Venkata Krishna Narumanchi, Bangalore (IN)
Assigned to R4N63R CAPITAL LLC, Wilmington, DE (US)
Filed by R4N63R Capital LLC, Wilmington, DE (US)
Filed on Nov. 5, 2018, as Appl. No. 16/181,191.
Claims priority of provisional application 62/581,541, filed on Nov. 3, 2017.
Prior Publication US 2019/0138973 A1, May 9, 2019
Int. Cl. G05B 19/418 (2006.01); G06F 9/448 (2018.01); G06F 9/48 (2006.01); G06F 11/07 (2006.01); G06F 11/34 (2006.01); G06F 16/22 (2019.01); G06F 16/23 (2019.01); G06F 16/2455 (2019.01); G06F 16/901 (2019.01); G06F 16/9035 (2019.01); G06F 16/904 (2019.01); G06F 30/20 (2020.01); G06F 30/23 (2020.01); G06F 30/27 (2020.01); G06N 3/008 (2023.01); G06N 3/04 (2023.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06N 3/084 (2023.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 10/06 (2023.01); G06Q 10/0631 (2023.01); G06Q 10/0639 (2023.01); G06T 19/00 (2011.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G09B 19/00 (2006.01); B25J 9/16 (2006.01); G01M 99/00 (2011.01); G05B 19/423 (2006.01); G05B 23/02 (2006.01); G06F 18/21 (2023.01); G06F 111/10 (2020.01); G06F 111/20 (2020.01); G06N 3/006 (2023.01); G06Q 10/083 (2023.01); G06Q 50/26 (2012.01); G16H 10/60 (2018.01)
CPC G05B 19/4183 (2013.01) [G05B 19/41835 (2013.01); G06F 9/4498 (2018.02); G06F 9/4881 (2013.01); G06F 11/0721 (2013.01); G06F 11/079 (2013.01); G06F 11/3452 (2013.01); G06F 16/2228 (2019.01); G06F 16/2365 (2019.01); G06F 16/24568 (2019.01); G06F 16/9024 (2019.01); G06F 16/9035 (2019.01); G06F 16/904 (2019.01); G06F 30/20 (2020.01); G06F 30/23 (2020.01); G06F 30/27 (2020.01); G06N 3/008 (2013.01); G06N 3/04 (2013.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 3/084 (2013.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 10/06 (2013.01); G06Q 10/063112 (2013.01); G06Q 10/06316 (2013.01); G06Q 10/06393 (2013.01); G06Q 10/06395 (2013.01); G06Q 10/06398 (2013.01); G06T 19/006 (2013.01); G06V 10/25 (2022.01); G06V 10/454 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G09B 19/00 (2013.01); B25J 9/1664 (2013.01); B25J 9/1697 (2013.01); G01M 99/005 (2013.01); G05B 19/41865 (2013.01); G05B 19/423 (2013.01); G05B 23/0224 (2013.01); G05B 2219/32056 (2013.01); G05B 2219/36442 (2013.01); G06F 18/217 (2023.01); G06F 2111/10 (2020.01); G06F 2111/20 (2020.01); G06N 3/006 (2013.01); G06Q 10/083 (2013.01); G06Q 50/26 (2013.01); G16H 10/60 (2018.01)] 22 Claims
OG exemplary drawing
 
1. A computer-implemented method of automatically determining work task assignments, comprising:
using a computing device executing a machine learning engine, determining one or more cycles, one or more processes, one or more actions, one or more sequences, one or more objects and one or more parameters by convolutional neural network deep learning from one or more video frame sensor streams at a plurality of manufacturing stations across an assembly line, the determining comprising:
performing, with a frame feature extractor, a two-dimensional convolution operation on the one or more video frame sensor streams to generate a two-dimensional array of feature vectors;
determining, with a region of interest detector unit, a dynamic region of interest in the one or more video frame sensor streams, wherein the region of interest detector unit and the frame feature extractor share layers of the convolutional neural network;
processing, with a long short-term memory, an area of the one or more video frame sensor streams within the dynamic region of interest without processing an area of the one or more video frame sensor streams outside the dynamic region of interest;
using the computing device executing the machine learning engine, identifying a plurality of work tasks performed by a plurality of human actors at the plurality of manufacturing stations based on the determined one or more cycles, one or more processes, one or more actions, one or more sequences, one or more objects and one or more parameters;
using the computing device, determining work task assignments of the plurality of work tasks identified from the one or more video frame sensor streams for the plurality of human actors;
outputting the work task assignments of the plurality of work tasks to the plurality of human actors for the performance of further cycles of the plurality of work tasks at the plurality of manufacturing stations across the assembly line;
using the computing device executing the machine learning engine, identifying changes of one or more of the plurality of human actors, the plurality of work tasks, and performance of the plurality of work tasks by the plurality of human actors, in real time;
using the computing device executing the machine learning engine, updating the work task assignments based on the identified changes of one or more of the plurality of human actors, the plurality of work tasks, and performance of the plurality of work tasks by the plurality of human actors; and
outputting the updated work task assignments to the plurality of human actors.
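The perception front end recited in claim 1 (a 2-D convolutional frame feature extractor whose layers are shared with a dynamic region-of-interest detector, and an LSTM restricted to the ROI) can be sketched as below. This is a minimal editorial illustration assuming PyTorch; the class, layer sizes, and names (PerceptionPipeline, roi_head, and so on) are hypothetical and are not taken from the patent's specification.

```python
import torch
import torch.nn as nn

class PerceptionPipeline(nn.Module):
    """Sketch of the claimed front end: a 2-D convolutional frame feature
    extractor whose layers are shared with a dynamic region-of-interest
    head, followed by an LSTM that processes only the ROI features."""

    def __init__(self, channels=64, hidden=128, num_actions=10):
        super().__init__()
        # Frame feature extractor: 2-D convolutions producing a grid of
        # feature vectors for each video frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # ROI detector head; it consumes the shared backbone features and
        # predicts one normalized box (x1, y1, x2, y2) per frame.
        self.roi_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 4), nn.Sigmoid(),
        )
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, num_actions)

    def forward(self, clip):                     # clip: (T, 3, H, W) frames
        roi_vectors = []
        for frame in clip:
            feats = self.backbone(frame.unsqueeze(0))   # (1, C, h, w)
            x1, y1, x2, y2 = self.roi_head(feats)[0]    # dynamic ROI, normalized
            _, _, h, w = feats.shape
            c0, r0 = int(x1 * (w - 1)), int(y1 * (h - 1))
            c1 = max(int(x2 * w), c0 + 1)               # keep the box non-empty
            r1 = max(int(y2 * h), r0 + 1)
            crop = feats[:, :, r0:r1, c0:c1]            # features inside the ROI only
            roi_vectors.append(crop.mean(dim=(2, 3)))   # (1, C) pooled ROI feature
        seq = torch.stack(roi_vectors, dim=1)           # (1, T, C) temporal sequence
        out, _ = self.lstm(seq)                         # LSTM over ROI features only
        return self.action_head(out[:, -1])             # action logits for the clip
```

For example, PerceptionPipeline()(torch.randn(16, 3, 224, 224)) returns action logits for a 16-frame clip; in the claim, such per-clip outputs would feed the identification of cycles, processes, actions, sequences, objects, and parameters.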
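Downstream, determining work task assignments for the plurality of human actors reduces to a matching problem. The claim does not specify an optimization method; the sketch below assumes SciPy's Hungarian-method solver and a hypothetical cost matrix of predicted per-actor cycle times.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks(cost):
    """Optimally match human actors to identified work tasks by minimizing
    total cost; cost[i, j] is actor i's cost (e.g., predicted cycle time in
    seconds) for task j.  Hungarian-method matching via SciPy."""
    actor_idx, task_idx = linear_sum_assignment(cost)
    return dict(zip(task_idx.tolist(), actor_idx.tolist()))  # task -> actor

# Hypothetical 3-actor x 3-task cost matrix of predicted cycle times.
cost = np.array([[12.0, 30.0, 22.0],
                 [15.0, 21.0, 18.0],
                 [20.0, 25.0, 14.0]])
print(assign_tasks(cost))  # {0: 0, 1: 1, 2: 2}: each task goes to its fastest fit
```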
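The claim's final steps, identifying changes in actors, tasks, or task performance in real time and updating the assignments, can be read as a monitoring loop that recomputes the matching when observed costs drift. The drift threshold, the cost stream, and the reuse of assign_tasks from the previous sketch are all illustrative assumptions.

```python
import numpy as np

def monitor_and_reassign(cost_stream, assign_fn, tol=0.15):
    """Yield updated work task assignments whenever the observed costs
    (e.g., per-actor cycle times measured from the video streams) drift
    more than `tol` from the costs behind the current assignment."""
    baseline = None
    for cost in cost_stream:          # one cost matrix per monitoring interval
        drifted = baseline is None or np.max(
            np.abs(cost - baseline) / np.maximum(baseline, 1e-9)) > tol
        if drifted:                   # actors, tasks, or performance changed
            baseline = cost
            yield assign_fn(cost)     # updated task -> actor mapping
```

Each yielded mapping corresponds to the claim's final outputting step; the default 15% drift threshold is purely illustrative.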