US 11,850,760 B2
Post-detection refinement based on edges and multi-dimensional corners
Jinze Yu, Tokyo (JP); and Jose Jeronimo Moreira Rodrigues, Tokyo (JP)
Assigned to MUJIN, Inc., Tokyo (JP)
Filed by MUJIN, Inc., Tokyo (JP)
Filed on Jun. 10, 2022, as Appl. No. 17/806,379.
Application 17/806,379 is a continuation of application No. 16/824,680, filed on Mar. 19, 2020, granted, now 11,389,965.
Claims priority of provisional application 62/879,359, filed on Jul. 26, 2019.
Prior Publication US 2022/0297305 A1, Sep. 22, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/13 (2017.01); G06T 7/33 (2017.01); G06T 7/174 (2017.01); G06T 7/73 (2017.01); G06T 1/00 (2006.01); B25J 9/16 (2006.01); B25J 9/02 (2006.01); B65G 61/00 (2006.01); G06V 10/44 (2022.01); G06V 20/64 (2022.01); B25J 19/02 (2006.01); G06F 18/23 (2023.01)
CPC B25J 9/1697 (2013.01) [B25J 19/023 (2013.01); B65G 61/00 (2013.01); G06F 18/23 (2023.01); G06T 1/0014 (2013.01); G06T 7/13 (2017.01); G06T 7/174 (2017.01); G06T 7/33 (2017.01); G06T 7/73 (2017.01); G06V 10/443 (2022.01); G06V 20/647 (2022.01); B65G 2201/025 (2013.01); G06T 2207/20212 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for operating a robotic system, the method comprising:
obtaining a two-dimensional (2D) image and a three-dimensional (3D) image representative of an environment having an object located therein;
based on the 2D image, computing an estimated location of the object;
detecting edges based on analyzing the 2D image;
identifying an object edge set within the detected edges, wherein the object edge set corresponds to the object;
identifying a 3D feature location based on comparing the object edge set and the 3D image, the 3D feature location representing a location of a 3D feature in the 3D image that corresponds to the object edge set; and
generating an object detection result based on the estimated location and the 3D feature location.
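
The method of claim 1 is, at its core, a 2D-to-3D refinement pipeline. The following is a minimal sketch of that pipeline in Python, assuming OpenCV template matching stands in for the claimed estimation step, Canny edge detection stands in for the claimed edge analysis, and a depth image registered to the 2D image with pinhole intrinsics fx, fy, cx, cy stands in for the claimed 3D image; none of these tools or parameter names come from the patent itself.

import cv2
import numpy as np

def detect_object(gray_2d, depth_3d, template, fx, fy, cx, cy):
    # Compute an estimated location of the object from the 2D image
    # (template matching is an illustrative stand-in for the claimed detector).
    scores = cv2.matchTemplate(gray_2d, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (est_x, est_y) = cv2.minMaxLoc(scores)
    th, tw = template.shape[:2]

    # Detect edges by analyzing the 2D image.
    edges = cv2.Canny(gray_2d, 50, 150)

    # Identify the object edge set: edge pixels that fall inside the estimated region.
    ys, xs = np.nonzero(edges)
    in_box = (xs >= est_x) & (xs < est_x + tw) & (ys >= est_y) & (ys < est_y + th)
    edge_xs, edge_ys = xs[in_box], ys[in_box]

    # Identify a 3D feature location by comparing the object edge set with the
    # registered depth image (back-projection through a pinhole camera model).
    z = depth_3d[edge_ys, edge_xs].astype(np.float32)
    valid = z > 0
    x3 = (edge_xs[valid] - cx) * z[valid] / fx
    y3 = (edge_ys[valid] - cy) * z[valid] / fy
    feature_3d = np.stack([x3, y3, z[valid]], axis=1).mean(axis=0)

    # Generate an object detection result from the estimated location and the
    # 3D feature location.
    return {"estimated_location_2d": (est_x, est_y), "feature_location_3d": feature_3d}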
 
15. A tangible, non-transient computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform a method, the method comprising:
based on an image depicting an environment, computing an estimated location of an object located in the environment;
detecting edges in the image;
identifying a subset of the detected edges that corresponds to the estimated location of the object;
identifying a three-dimensional (3D) location corresponding to a feature that is associated with the edge subset based on comparing the edge subset to a 3D representation of the environment; and
generating an object detection result based on comparing the estimated location and the 3D location.
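
Claim 15 states the 3D representation more generally than a registered depth image. With an unorganized point cloud, the comparison between the edge subset and the 3D representation can be sketched as follows; the pinhole intrinsic matrix K, the 1-pixel association threshold, and the centroid feature are illustrative assumptions, not claim limitations.

import numpy as np

def edge_subset_to_3d(edge_pixels, cloud_xyz, K):
    """edge_pixels: (M, 2) array of (u, v) pixels; cloud_xyz: (N, 3) points in the
    camera frame; K: 3x3 intrinsic matrix. Returns a 3D location for the edge subset."""
    # Project every cloud point with positive depth into the image plane.
    front = cloud_xyz[:, 2] > 0
    pts = cloud_xyz[front]
    u = K[0, 0] * pts[:, 0] / pts[:, 2] + K[0, 2]
    v = K[1, 1] * pts[:, 1] / pts[:, 2] + K[1, 2]
    proj = np.stack([u, v], axis=1)

    # Keep cloud points whose projections fall within one pixel of any edge pixel.
    hits = []
    for eu, ev in edge_pixels:
        near = np.hypot(proj[:, 0] - eu, proj[:, 1] - ev) < 1.0
        if near.any():
            hits.append(pts[near])
    if not hits:
        return None
    edge_points_3d = np.concatenate(hits, axis=0)

    # A simple 3D feature associated with the edge subset: the centroid of the
    # matched edge points.
    return edge_points_3d.mean(axis=0)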
 
20. A robotic system comprising:
at least one processor; and
at least one memory device coupled to the at least one processor, the at least one memory device having instructions stored thereon that, when executed by the processor, cause the processor to:
estimate a location of an object located in an environment based on processing an image depicting the environment;
detect edges in the image;
identify a three-dimensional (3D) location of a 3D feature by projecting at least a portion of a detected edge to a 3D representation of the environment;
generate an object detection result corresponding to the object, wherein the object detection result is generated based on the estimated location and an offset; and
derive and implement a plan according to the object detection result for operating one or more robotic units to manipulate the object located in the environment.
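
Claim 20 ties the detection result to an offset and to a plan for one or more robotic units. A deliberately simplified sketch of those two steps follows; the offset arithmetic, the waypoint names, and the grasp sequence are illustrative assumptions and not the patented planner.

import numpy as np

def refine_and_plan(initial_center_3d, expected_corner_3d, measured_corner_3d,
                    approach_height=0.20, lift_height=0.30):
    # Offset between where a corner was expected (from the estimated location)
    # and where the 3D feature was actually found.
    offset = np.asarray(measured_corner_3d) - np.asarray(expected_corner_3d)

    # Object detection result: the estimated location shifted by the offset.
    refined_center = np.asarray(initial_center_3d) + offset

    # A minimal plan for one robotic unit: approach above the object, descend,
    # grasp, then lift. A real system would add orientation, collision checks,
    # and unit-specific motion commands.
    above = refined_center + np.array([0.0, 0.0, approach_height])
    lifted = refined_center + np.array([0.0, 0.0, lift_height])
    plan = [("move_to", above), ("move_to", refined_center),
            ("grasp", refined_center), ("move_to", lifted)]
    return refined_center, plan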