CPC G05B 19/4183 (2013.01) [G05B 19/41835 (2013.01); G06F 9/4498 (2018.02); G06F 9/4881 (2013.01); G06F 11/0721 (2013.01); G06F 11/079 (2013.01); G06F 11/3452 (2013.01); G06F 16/2228 (2019.01); G06F 16/2365 (2019.01); G06F 16/24568 (2019.01); G06F 16/9024 (2019.01); G06F 16/9035 (2019.01); G06F 16/904 (2019.01); G06F 30/20 (2020.01); G06F 30/23 (2020.01); G06F 30/27 (2020.01); G06N 3/008 (2013.01); G06N 3/04 (2013.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 3/084 (2013.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 10/06 (2013.01); G06Q 10/063112 (2013.01); G06Q 10/06316 (2013.01); G06Q 10/06393 (2013.01); G06Q 10/06395 (2013.01); G06Q 10/06398 (2013.01); G06T 19/006 (2013.01); G06V 10/25 (2022.01); G06V 10/454 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G09B 19/00 (2013.01); B25J 9/1664 (2013.01); B25J 9/1697 (2013.01); G01M 99/005 (2013.01); G05B 19/41865 (2013.01); G05B 19/423 (2013.01); G05B 23/0224 (2013.01); G05B 2219/32056 (2013.01); G05B 2219/36442 (2013.01); G06F 18/217 (2023.01); G06F 2111/10 (2020.01); G06F 2111/20 (2020.01); G06N 3/006 (2013.01); G06Q 10/083 (2013.01); G06Q 50/26 (2013.01); G16H 10/60 (2018.01)] | 29 Claims |
1. A method comprising:
accessing respective information associated with monitoring a physical movement of a physical object by one or both of a first actor and a second actor performing an activity, the information including sensed activity information corresponding to a condition/situation associated with movement by the first actor or the second actor, wherein the physical movement of the physical object is associated with manufacture of a product, and wherein the sensed activity information is determined in real time by artificial intelligence and includes one or more cycles, one or more processes, one or more tasks, one or more sequences, one or more objects, and one or more parameters identified in one or more video frame streams;
creating a multi-dimensional virtual activity space comprising at least three dimensions based on the information, the virtual activity space comprising a virtual representation of the activity;
comparing the activity performed by one or both of the first actor and the second actor with the virtual representation of the activity in real time by overlaying the activity performed by the one or both of the first actor and the second actor on the virtual representation of the activity;
analyzing the information in a computer, the analyzing including automated artificial intelligence analysis of the activity associated with the condition/situation, wherein the automated artificial intelligence analysis includes utilizing machine learning back-end unit processes in the analysis to identify a modification of one or more spacings for the first actor and the second actor performing the activity based on reach, motion, action, sequence, and safety zone for the first actor and the second actor; and
forwarding, if the condition/situation is detrimental to the manufacture of the product, respective feedback based on results of the analysis to one or both of the first actor and the second actor to avoid the condition/situation, the feedback including an adjustment of the movement of the first actor or the second actor, based on the modification of the one or more spacings for the first actor and the second actor performing the activity, so that the first actor and the second actor avoid interfering with the movement of one another.
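Outside the claim language, the core of the spacing analysis recited above (comparing two actors' positions in the virtual activity space against their reach and safety zones, then forwarding feedback only when the condition is detrimental) can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation: the `Actor` fields, the sum-of-radii interference rule, and the feedback wording are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Illustrative stand-in for an actor tracked in the virtual activity space."""
    name: str
    position: tuple     # (x, y, z) coordinates in the 3D virtual activity space
    reach: float        # assumed maximum reach radius of the actor
    safety_zone: float  # assumed required clearance beyond the actor's reach

def spacing_modification(a: Actor, b: Actor) -> float:
    """Return the extra separation (if any) needed so the two actors'
    reach-plus-safety-zone envelopes do not overlap; 0.0 means no change."""
    dx, dy, dz = (pa - pb for pa, pb in zip(a.position, b.position))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    required = a.reach + a.safety_zone + b.reach + b.safety_zone
    return max(0.0, required - distance)

def feedback(a: Actor, b: Actor) -> str:
    """Forward feedback only when the condition/situation is detrimental,
    i.e. when the actors would interfere with one another's movement."""
    delta = spacing_modification(a, b)
    if delta > 0.0:
        return f"increase spacing between {a.name} and {b.name} by {delta:.2f} m"
    return "no adjustment needed"
```

For example, two actors 1.5 m apart, each with an assumed 0.8 m reach and 0.2 m safety zone, would need 0.5 m of additional spacing, while the same pair 3.0 m apart would need none.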