US 12,287,623 B2
Methods and systems for automatically creating statistically accurate ergonomics data
Prasad Narasimha Akella, Palo Alto, CA (US); Ananya Honnedevasthana Ashok, Bangalore (IN); Zakaria Ibrahim Assoul, Oakland, CA (US); Krishnendu Chaudhury, Saratoga, CA (US); Sameer Gupta, Palo Alto, CA (US); and Ananth Uggirala, Mountain View, CA (US)
Filed by R4N63R Capital LLC, Wilmington, DE (US)
Filed on Nov. 5, 2018, as Appl. No. 16/181,168.
Claims priority of provisional application 62/581,541, filed on Nov. 3, 2017.
Prior Publication US 2019/0138676 A1, May 9, 2019
Int. Cl. G05B 19/418 (2006.01); G06F 9/448 (2018.01); G06F 9/48 (2006.01); G06F 11/07 (2006.01); G06F 11/34 (2006.01); G06F 16/22 (2019.01); G06F 16/23 (2019.01); G06F 16/2455 (2019.01); G06F 16/901 (2019.01); G06F 16/9035 (2019.01); G06F 16/904 (2019.01); G06F 30/20 (2020.01); G06F 30/23 (2020.01); G06F 30/27 (2020.01); G06N 3/008 (2023.01); G06N 3/04 (2023.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06N 3/084 (2023.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 10/06 (2023.01); G06Q 10/0631 (2023.01); G06Q 10/0639 (2023.01); G06T 19/00 (2011.01); G06V 10/25 (2022.01); G06V 10/44 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G09B 19/00 (2006.01); B25J 9/16 (2006.01); G01M 99/00 (2011.01); G05B 19/423 (2006.01); G05B 23/02 (2006.01); G06F 18/21 (2023.01); G06F 111/10 (2020.01); G06F 111/20 (2020.01); G06N 3/006 (2023.01); G06Q 10/083 (2023.01); G06Q 50/26 (2012.01); G16H 10/60 (2018.01)
CPC G05B 19/4183 (2013.01) [G05B 19/41835 (2013.01); G06F 9/4498 (2018.02); G06F 9/4881 (2013.01); G06F 11/0721 (2013.01); G06F 11/079 (2013.01); G06F 11/3452 (2013.01); G06F 16/2228 (2019.01); G06F 16/2365 (2019.01); G06F 16/24568 (2019.01); G06F 16/9024 (2019.01); G06F 16/9035 (2019.01); G06F 16/904 (2019.01); G06F 30/20 (2020.01); G06F 30/23 (2020.01); G06F 30/27 (2020.01); G06N 3/008 (2013.01); G06N 3/04 (2013.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 3/084 (2013.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 10/06 (2013.01); G06Q 10/063112 (2013.01); G06Q 10/06316 (2013.01); G06Q 10/06393 (2013.01); G06Q 10/06395 (2013.01); G06Q 10/06398 (2013.01); G06T 19/006 (2013.01); G06V 10/25 (2022.01); G06V 10/454 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G09B 19/00 (2013.01); B25J 9/1664 (2013.01); B25J 9/1697 (2013.01); G01M 99/005 (2013.01); G05B 19/41865 (2013.01); G05B 19/423 (2013.01); G05B 23/0224 (2013.01); G05B 2219/32056 (2013.01); G05B 2219/36442 (2013.01); G06F 18/217 (2023.01); G06F 2111/10 (2020.01); G06F 2111/20 (2020.01); G06N 3/006 (2013.01); G06Q 10/083 (2013.01); G06Q 50/26 (2013.01); G16H 10/60 (2018.01)] 15 Claims
OG exemplary drawing
 
1. A machine learning based ergonomics method comprising:
determining sensed activity information associated with a first actor and an activity space, wherein the sensed activity information includes at least one of one or more cycles, one or more processes, one or more actions, one or more sequences, one or more objects, and one or more parameters of a manufacturing operation and spatio-temporal data of the first actor, the spatio-temporal data comprising a location of the first actor and moments of work of the first actor, the moments of work comprising one or more of a weight, a torque, or a distance, the determining the sensed activity information comprising:
performing, with a frame feature extractor, a two-dimensional convolution operation on a video frame sensor stream to generate a two-dimensional array of feature vectors;
determining, with a region of interest detector, a dynamic region of interest in the video frame sensor stream, wherein the region of interest detector and the frame feature extractor share layers of a convolution neural network; and
processing an area of each video frame of the video frame sensor stream within the dynamic region of interest while discarding areas of the respective video frames outside the dynamic region of interest to determine the sensed activity information;
analyzing, by artificial intelligence, the determined sensed activity information for the first actor with respect to one or more ergonomic factors including work limit, work zone and hazard score; and
forwarding feedback based on the analyzing the determined activity information with respect to the one or more ergonomic factors, wherein:
the sensed activity information is received from sensors monitoring the activity space in real time, the sensors comprising a video sensor that produces the video frame sensor stream;
the determined activity information is analyzed in real time;
the feedback is forwarded in real time; and
the convolution neural network is applied to a plurality of sliding windows to determine the feedback with no computations repeated.
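Claim 1 recites a frame feature extractor and a region of interest detector that share layers of a convolution neural network, with only the area of each frame inside the dynamic region of interest processed further. The following is a minimal illustrative sketch of that arrangement in PyTorch; the layer sizes, module names, and the normalized-box representation of the region of interest are assumptions for illustration, not the patented implementation.

```python
import torch
import torch.nn as nn


class SharedBackbonePipeline(nn.Module):
    """Frame feature extractor and ROI detector sharing convolutional layers."""

    def __init__(self):
        super().__init__()
        # Shared convolution layers: one forward pass serves both heads.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Frame-feature head: a further two-dimensional convolution whose
        # output is a two-dimensional (H' x W') array of feature vectors.
        self.feature_head = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        # ROI head: regresses one normalized box (x1, y1, x2, y2) per frame.
        self.roi_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 4), nn.Sigmoid()
        )

    def forward(self, frame):
        shared = self.backbone(frame)         # computed once, used by both heads
        features = self.feature_head(shared)  # 2-D array of feature vectors
        roi = self.roi_head(shared)           # dynamic region of interest
        return features, roi


def crop_to_roi(frame, roi):
    """Keep only the frame area inside the dynamic ROI; discard the rest.

    frame: (C, H, W) tensor; roi: normalized (x1, y1, x2, y2).
    """
    _, h, w = frame.shape
    x1, y1, x2, y2 = [float(v) for v in roi]
    return frame[:, int(y1 * h):int(y2 * h), int(x1 * w):int(x2 * w)]
```

Under these assumed layer sizes, a batch of frames of shape (N, 3, H, W) yields a feature map of shape (N, 128, H/4, W/4) plus one normalized box per frame, and both outputs reuse the same backbone activations.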
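Claim 1 also requires the convolution neural network to be applied to a plurality of sliding windows "with no computations repeated." One common way to realize that property, offered here only as a hedged sketch rather than the claimed method, is to run the per-frame network once per incoming frame and let every overlapping window reuse the cached result.

```python
from collections import deque

import torch


class SlidingWindowAnalyzer:
    """Per-frame features are computed exactly once and shared by all
    overlapping sliding windows, so no convolution is repeated."""

    def __init__(self, model, window_size=16):
        self.model = model
        self.window = deque(maxlen=window_size)  # rolling cache of features

    @torch.no_grad()
    def push_frame(self, frame):
        # One forward pass per frame; the result is cached, never recomputed.
        features, roi = self.model(frame.unsqueeze(0))
        self.window.append(features)
        if len(self.window) == self.window.maxlen:
            return self.analyze(list(self.window))
        return None  # not enough frames buffered for a full window yet

    def analyze(self, cached_features):
        # Placeholder for the per-window analysis (e.g. action recognition
        # feeding the ergonomic factors); here it simply pools the cache.
        return torch.stack(cached_features).mean().item()
```

Because consecutive windows overlap in all but one frame, caching the per-frame output is what keeps the real-time constraint in the claim satisfiable without redundant work.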
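Finally, the analyzing step scores the determined activity information against ergonomic factors including a work limit, a work zone, and a hazard score, drawing on spatio-temporal data such as the actor's location and the moments of work (weight, torque, distance). The sketch below shows one way such a score and the forwarded real-time feedback could be assembled; the thresholds, the moment-of-work formula, and all field names are illustrative assumptions, not values recited in the patent.

```python
from dataclasses import dataclass


@dataclass
class ErgonomicLimits:
    # Illustrative thresholds only; the patent does not recite numeric values.
    max_moment_nm: float = 60.0              # work limit on the moment of work
    work_zone: tuple = (0.0, 0.0, 3.0, 3.0)  # (x_min, y_min, x_max, y_max), metres


def ergonomic_feedback(location, weight_kg, distance_m,
                       limits: ErgonomicLimits = ErgonomicLimits()):
    """Score one observation of the first actor and return feedback
    suitable for forwarding in real time."""
    moment_nm = weight_kg * 9.81 * distance_m        # simple moment of work (N*m)
    x, y = location
    in_zone = (limits.work_zone[0] <= x <= limits.work_zone[2]
               and limits.work_zone[1] <= y <= limits.work_zone[3])
    hazard_score = moment_nm / limits.max_moment_nm  # > 1.0 exceeds the work limit
    return {
        "hazard_score": round(hazard_score, 2),
        "within_work_limit": hazard_score <= 1.0,
        "within_work_zone": in_zone,
    }
```

For example, ergonomic_feedback((1.2, 0.8), weight_kg=10.0, distance_m=0.5) returns a hazard score of about 0.82 and reports that both the work limit and the work zone are respected under the assumed thresholds.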