US 12,243,117 B2
Post-detection refinement based on edges and multi-dimensional corners
Jinze Yu, Tokyo (JP); and Jose Jeronimo Moreira Rodrigues, Tokyo (JP)
Assigned to Mujin, Inc., Tokyo (JP)
Filed by MUJIN, Inc., Tokyo (JP)
Filed on Nov. 15, 2023, as Appl. No. 18/510,521.
Application 18/510,521 is a continuation of application No. 17/806,379, filed on Jun. 10, 2022, granted, now 11,850,760.
Application 17/806,379 is a continuation of application No. 16/824,680, filed on Mar. 19, 2020, granted, now 11,389,965, issued on Jul. 19, 2022.
Claims priority of provisional application 62/879,359, filed on Jul. 26, 2019.
Prior Publication US 2024/0157566 A1, May 16, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 1/00 (2006.01); B25J 9/16 (2006.01); B25J 19/02 (2006.01); B65G 61/00 (2006.01); G06F 18/23 (2023.01); G06T 7/13 (2017.01); G06T 7/174 (2017.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06V 10/44 (2022.01); G06V 20/52 (2022.01); G06V 20/64 (2022.01)
CPC G06T 1/0014 (2013.01) [B25J 9/1697 (2013.01); B25J 19/023 (2013.01); B65G 61/00 (2013.01); G06F 18/23 (2023.01); G06T 7/13 (2017.01); G06T 7/174 (2017.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06V 10/443 (2022.01); G06V 10/457 (2022.01); G06V 20/52 (2022.01); G06V 20/647 (2022.01); B65G 2201/025 (2013.01); G06T 2207/20212 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A tangible, non-transient computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform a method, the method comprising:
obtaining a first image and a second image representative of an environment having an object located therein, wherein the first and second images use different imaging characteristics to depict the object;
based on the first image, computing an estimated location of the object;
identifying one or more object edges based on analyzing the first image, wherein the identified object edges correspond to the object;
identifying a feature location based on comparing the one or more object edges and the second image, the feature location representing a location of a feature in the second image that corresponds to the one or more object edges; and
generating an object detection result based on the estimated location and the feature location.
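Illustrative sketch (not part of the claims): the Python code below outlines one possible reading of the steps recited in claim 1, assuming the first image is a 2D grayscale image, the second image is a registered depth map of the same scene, and an object template is available from an initial detection stage. The function names, the use of OpenCV template matching, Canny edges, Sobel depth gradients, and the blending weight alpha are hypothetical stand-ins chosen for this sketch, not the patented implementation.

# Hypothetical sketch of the claim-1 pipeline; not the patented implementation.
# Assumes: first image = 2D grayscale image, second image = registered depth map.
import numpy as np
import cv2  # OpenCV


def estimate_location(visual_img, template):
    """Initial detection: template matching yields an estimated object location."""
    scores = cv2.matchTemplate(visual_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    return best  # (x, y) of the best-scoring match


def identify_object_edges(visual_img):
    """Identify object edges in the first image (Canny edges as a stand-in)."""
    return cv2.Canny(visual_img, 50, 150)


def identify_feature_location(edges, depth_img):
    """Locate where the 2D edges coincide with depth discontinuities in the second image."""
    gx = cv2.Sobel(depth_img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth_img, cv2.CV_32F, 0, 1)
    depth_grad = np.abs(gx) + np.abs(gy)
    corroborated = (edges > 0) & (depth_grad > depth_grad.mean() + depth_grad.std())
    ys, xs = np.nonzero(corroborated)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())  # centroid of edge pixels confirmed by depth


def generate_detection(visual_img, depth_img, template, alpha=0.5):
    """Fuse the estimated location with the edge-derived feature location."""
    est = estimate_location(visual_img, template)
    edges = identify_object_edges(visual_img)
    feat = identify_feature_location(edges, depth_img)
    if feat is None:
        return est
    # Weighted blend of the two location hypotheses; alpha is an arbitrary illustrative choice.
    return tuple(int(round(alpha * e + (1 - alpha) * f)) for e, f in zip(est, feat))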
 
11. A method of operating a robotic system, the method comprising:
obtaining a first image and a second image representative of an environment having an object located therein, wherein the first and second images use different imaging characteristics to depict the object;
based on the first image, computing an estimated location of the object;
identifying one or more object edges based on analyzing the first image, wherein the identified object edges correspond to the object;
identifying a feature location based on comparing the one or more object edges and the second image, the feature location representing a location of a feature in the second image that corresponds to the one or more object edges; and
generating an object detection result based on the estimated location and the feature location.
 
16. A robotic system comprising:
at least one processor; and
at least one memory device coupled to the at least one processor, the at least one memory device having instructions stored thereon that, when executed by the processor, cause the processor to:
obtain a first image and a second image representative of an environment having an object located therein, wherein the first and second images use different imaging characteristics to depict the object;
based on the first image, compute an estimated location of the object;
identify one or more object edges based on analyzing the first image, wherein the identified object edges correspond to the object;
identify a feature location based on comparing the one or more object edges and the second image, the feature location representing a location of a feature in the second image that corresponds to the one or more object edges; and
generate an object detection result based on the estimated location and the feature location.
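Illustrative usage (not part of the claims): a short sketch of how the robotic system of claim 16, a processor executing instructions held in memory, might invoke the pipeline outlined after claim 1 on synthetic test data; the scene, template window, and depth values below are fabricated for illustration only.

# Continues the sketch given after claim 1; assumes numpy as np, cv2, and
# generate_detection() from that sketch are available.
import numpy as np

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = (rng.random((120, 160)) * 255).astype(np.uint8)  # noisy background
    scene[40:80, 60:110] = 200                                # flat bright patch stands in for the object
    template = scene[35:85, 55:115].copy()                    # template window around the object
    depth = np.full((120, 160), 1.5, dtype=np.float32)        # background depth (arbitrary units)
    depth[40:80, 60:110] = 1.0                                # object surface closer to the camera
    print(generate_detection(scene, depth, template))         # refined (x, y) detection result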