US 12,293,602 B1
Detection and mitigation of unsafe behaviors using computer vision
Kedar Shriram Prabhudesai, Apex, NC (US); Hardi Desai, Dunellen, NJ (US); Jonathan James McElhinney, Glasgow (GB); Jonathan Lee Walker, Raleigh, NC (US); Sanjeev Shyam Heda, Kennesaw, GA (US); Andrey Matveenko, Phuket (TH); Varunraj Valsaraj, Cary, NC (US); and Rik Peter de Ruiter, Amersfoort (NL)
Assigned to SAS INSTITUTE INC., Cary, NC (US)
Filed by SAS Institute, Inc., Cary, NC (US)
Filed on Oct. 23, 2024, as Appl. No. 18/924,261.
Claims priority of provisional application 63/642,860, filed on May 5, 2024.
Int. Cl. G06K 9/00 (2022.01); F16P 3/14 (2006.01); G06Q 50/26 (2012.01); G06T 7/00 (2017.01); G06T 7/20 (2017.01); G06T 7/254 (2017.01); G06T 7/70 (2017.01); G06V 10/26 (2022.01); G06V 10/56 (2022.01); G06V 10/70 (2022.01); G06V 10/75 (2022.01); G06V 10/764 (2022.01); G06V 20/40 (2022.01); G06V 20/52 (2022.01); G06V 40/10 (2022.01); G06V 40/20 (2022.01); G08B 21/02 (2006.01)
CPC G06V 40/10 (2022.01) [F16P 3/142 (2013.01); G06Q 50/265 (2013.01); G06T 7/0004 (2013.01); G06T 7/20 (2013.01); G06T 7/254 (2017.01); G06T 7/70 (2017.01); G06V 10/26 (2022.01); G06V 10/56 (2022.01); G06V 10/70 (2022.01); G06V 10/759 (2022.01); G06V 10/764 (2022.01); G06V 10/768 (2022.01); G06V 20/41 (2022.01); G06V 20/44 (2022.01); G06V 20/52 (2022.01); G06V 40/20 (2022.01); G08B 21/02 (2013.01); G06T 2207/30196 (2013.01); G06T 2207/30232 (2013.01)] 31 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
accessing, by one or more processors, video data collected from one or more image sensors, the video data showing a region of interest proximate to a machine;
executing, by the one or more processors, an object detection model to detect that a person is within the region of interest proximate to the machine based on the video data;
detecting, by the one or more processors, a motion status of a component of the machine by:
sampling a first video frame and a second video frame from the video data associated with the component at a predetermined frame rate;
comparing the first video frame and the second video frame to obtain a pixel difference within a pre-defined polygon region of the first video frame and the second video frame; and
in response to determining that the pixel difference is greater than a predetermined pixel change threshold, flagging the component as moving;
executing, by the one or more processors, a pose estimation model on the video data to estimate a pose of the person with respect to the machine;
identifying, by the one or more processors, a personnel type of the person in the region of interest by:
defining a polygon based on multiple pixel locations corresponding to left and right shoulders and left and right hips in a video frame of the video data;
comparing a width-to-height ratio of the polygon to a predetermined ratio threshold;
in response to determining that the width-to-height ratio is less than the predetermined ratio threshold, extending the polygon in a direction to obtain an extended polygon;
extracting an image from the video frame based on the extended polygon;
converting the image into a color space;
detecting, in the color space, a color type corresponding to the personnel type; and
determining the personnel type of the person based on one or more properties of the color type satisfying one or more predetermined thresholds;
detecting, by the one or more processors, a safety rule violation based on the pose of the person with respect to the machine, the motion status of the component of the machine, and the personnel type of the person; and
in response to detecting the safety rule violation, transmitting, by the one or more processors, a signal to a controller of the machine.
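The motion-status step of the claim (sampling two frames, differencing them within a pre-defined polygon region, and flagging the component as moving when the difference exceeds a threshold) can be sketched as follows. This is a minimal illustration, not the patented implementation: frames are assumed to be grayscale NumPy arrays, the polygon region is represented as a boolean mask, and aggregating the difference as a mean is an assumption (the claim does not specify how the pixel difference is aggregated).

```python
import numpy as np

def component_is_moving(frame_a, frame_b, polygon_mask, pixel_change_threshold):
    """Flag a component as moving when the pixel difference between two
    sampled frames, restricted to a pre-defined polygon region, exceeds
    a predetermined threshold (illustrative sketch only)."""
    # Absolute per-pixel difference between the two sampled frames.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    # Restrict the comparison to pixels inside the polygon region.
    region_diff = diff[polygon_mask]
    # Aggregate as a mean change inside the polygon (an assumption).
    return float(region_diff.mean()) > pixel_change_threshold
```

In practice the boolean mask would typically be rasterized once from the polygon's vertices (e.g., with a polygon-fill routine) and reused across all sampled frame pairs.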
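The personnel-type step (building a polygon from shoulder and hip keypoints, extending it when its width-to-height ratio falls below a threshold, then classifying the extracted image by color) can be sketched as below. Everything here beyond the claim's wording is an assumption: the polygon is approximated by its bounding box, the extension direction (symmetric widening), the 0.8 ratio threshold, the HSV hue range for a hi-vis vest, and the "operator"/"other" labels are all illustrative choices.

```python
import colorsys
import numpy as np

def torso_box(shoulder_l, shoulder_r, hip_l, hip_r, ratio_threshold=0.8):
    """Bounding box of the shoulder/hip polygon; if its width-to-height
    ratio is below the threshold, extend it (direction is an assumption)."""
    pts = np.array([shoulder_l, shoulder_r, hip_r, hip_l], dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    w, h = x1 - x0, max(y1 - y0, 1e-6)
    if w / h < ratio_threshold:
        # Widen symmetrically until the ratio meets the threshold.
        new_w = ratio_threshold * h
        cx = (x0 + x1) / 2.0
        x0, x1 = cx - new_w / 2.0, cx + new_w / 2.0
    return int(x0), int(y0), int(np.ceil(x1)), int(np.ceil(y1))

def personnel_type(frame_rgb, box, hue_range=(0.08, 0.2), min_fraction=0.3):
    """Classify by dominant clothing color in the extracted torso crop,
    e.g. a hi-vis orange/yellow vest (hue range and labels assumed)."""
    x0, y0, x1, y1 = box
    crop = frame_rgb[y0:y1, x0:x1].reshape(-1, 3) / 255.0
    # Convert each pixel to HSV and keep only the hue channel.
    hues = np.array([colorsys.rgb_to_hsv(*px)[0] for px in crop])
    # Property of the detected color type: fraction of in-range pixels.
    frac = np.mean((hues >= hue_range[0]) & (hues <= hue_range[1]))
    return "operator" if frac >= min_fraction else "other"
```

A vectorized HSV conversion (e.g., via an image-processing library) would replace the per-pixel loop in production code; `colorsys` is used here only to keep the sketch self-contained.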
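Finally, the overall gate (detecting a safety rule violation from the person's presence in the region of interest, the component's motion status, and the personnel type, then signaling the machine's controller) can be sketched with a standard ray-casting point-in-polygon test. The specific rule combination and the "operator" label are illustrative assumptions; the claim does not define the rule logic, and the controller signal itself is omitted here.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside `polygon`
    (a list of (x, y) vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count polygon edges crossed by a horizontal ray from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def safety_rule_violated(person_xy, roi_polygon, component_moving, personnel):
    """One possible rule (an assumption): flag a violation when a person
    who is not authorized personnel is inside the region of interest
    while the machine component is moving."""
    return (point_in_polygon(person_xy, roi_polygon)
            and component_moving
            and personnel != "operator")
```

On a violation, the claimed method transmits a signal to the machine's controller; in a deployment that would typically be an interlock or stop command issued over the plant's control interface.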