US 12,340,435 B2
System and method for three-dimensional scan of moving objects longer than the field of view
Ben R. Carey, Cambridge, MA (US); Andrew Parrett, Boston, MA (US); Yukang Liu, Natick, MA (US); and Gilbert Chiang, West Linn, OR (US)
Assigned to Cognex Corporation, Natick, MA (US)
Filed by Cognex Corporation, Natick, MA (US)
Filed on Sep. 5, 2023, as Appl. No. 18/242,192.
Application 18/242,192 is a continuation of application No. 17/179,294, filed on Feb. 18, 2021, granted, now Pat. No. 11,748,838.
Claims priority of provisional application 62/978,269, filed on Feb. 18, 2020.
Prior Publication US 2024/0177260 A1, May 30, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/292 (2017.01); G06T 1/00 (2006.01); G06T 7/00 (2017.01); G06T 7/246 (2017.01); G06T 7/80 (2017.01); G06V 20/64 (2022.01)
CPC G06T 1/0014 (2013.01) [G06T 7/0004 (2013.01); G06T 7/251 (2017.01); G06T 7/292 (2017.01); G06T 7/85 (2017.01); G06V 20/64 (2022.01); G06V 2201/12 (2022.01)] 13 Claims
OG exemplary drawing
 
1. A 3D camera assembly having a Field of View (FOV) defining a usable region of interest (ROI), and comprising:
an area scan sensor, configured to acquire 3D images of an object moving through the FOV, wherein the object defines an overall length between opposing edges of the object longer than the FOV; and
at least one processor in communication with the area scan sensor and configured to:
receive a plurality of 3D images from the area scan sensor, the plurality of 3D images having a known amount of object movement between image acquisitions;
determine the overall length of the object based upon motion tracking information derived from the movement of the object through the FOV in combination with the plurality of 3D images;
receive a presence signal indicative of the object being located adjacent to the FOV, and, in response to the presence signal, determine if the object appears in more than one image as the object moves through the FOV;
in response to information related to features on the object, determine if the object is longer than the FOV as the object moves through the FOV;
determine a length LR of the usable ROI based upon the motion tracking information and, when the length LR of the usable ROI is greater than the known amount of object movement between the 3D images, determine an overlapping of the 3D images so that the opposing edges are included in the overlapping; and
combine the information related to the features on the object from the plurality of 3D images to generate aggregate feature data, so as to provide a dimension of the object in a manner free of combining discrete, individual images into an overall image.
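The following is a minimal, hypothetical sketch of the length-measurement approach recited in claim 1, not the patented implementation. It assumes the 3D images are reduced to conveyor-aligned height maps, that an encoder supplies the known amount of object movement between acquisitions (transport_pos_mm), and that a simple height threshold marks object pixels; the Acquisition class, the pixel_size_mm and height_threshold_mm parameters, and all function names are illustrative assumptions rather than claim elements. Edge positions detected in each image are shifted into a common object-fixed frame using the motion-tracking data and aggregated directly, so an overall length is obtained without stitching the discrete images into one overall image.

```python
"""Illustrative sketch only: measure an object longer than the FOV by combining
per-image edge features with motion-tracking data instead of stitching images."""
from dataclasses import dataclass
from typing import List, Optional, Tuple

import numpy as np


@dataclass
class Acquisition:
    """One 3D image reduced to a height map, plus the encoder reading at trigger time."""
    height_map: np.ndarray    # rows indexed along the transport direction, heights in mm
    transport_pos_mm: float   # accumulated conveyor travel when this image was acquired


def acquisitions_overlap(roi_length_mm: float, step_mm: float) -> bool:
    """True when the usable ROI is longer than the object movement between
    acquisitions, so consecutive images overlap and each opposing edge of the
    object falls inside the usable ROI of at least one image."""
    return roi_length_mm > step_mm


def edge_rows_mm(height_map: np.ndarray,
                 pixel_size_mm: float,
                 height_threshold_mm: float) -> Optional[Tuple[float, float]]:
    """Return (leading_mm, trailing_mm): the farthest and nearest object rows
    along the transport direction within this image, or None if no object pixels
    are present.  Rows at the image border may be FOV clipping, not true edges."""
    object_rows = np.where((height_map > height_threshold_mm).any(axis=1))[0]
    if object_rows.size == 0:
        return None
    return object_rows.max() * pixel_size_mm, object_rows.min() * pixel_size_mm


def overall_length_mm(acquisitions: List[Acquisition],
                      pixel_size_mm: float,
                      height_threshold_mm: float = 2.0) -> float:
    """Aggregate per-image edge features into a single length measurement
    without merging the individual 3D images into one overall image."""
    leading, trailing = [], []
    for acq in acquisitions:
        edges = edge_rows_mm(acq.height_map, pixel_size_mm, height_threshold_mm)
        if edges is None:
            continue
        lead_mm, trail_mm = edges
        # Map FOV-relative coordinates into an object-fixed frame by subtracting
        # the conveyor travel at acquisition time (object assumed to move toward
        # increasing row index).
        leading.append(lead_mm - acq.transport_pos_mm)
        trailing.append(trail_mm - acq.transport_pos_mm)
    if not leading:
        raise ValueError("object never appeared in the field of view")
    # Observations clipped at the FOV border never exceed the true leading-edge
    # coordinate and never fall below the true trailing-edge coordinate in the
    # object-fixed frame, so the max/min over all acquisitions recovers the
    # actual edges provided each edge was imaged at least once.
    return max(leading) - min(trailing)
```

The acquisitions_overlap() check mirrors the claim condition that the usable ROI length LR exceed the object movement between images, which guarantees that consecutive images overlap and that each opposing edge of the object lands inside the usable ROI of at least one acquisition.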