US 12,293,555 B2
Method and device of inputting annotation of object boundary information
Nam Gil Kim, Bucheon-si (KR); and Barom Kang, Seoul (KR)
Assigned to SELECT STAR, INC., Daejeon (KR)
Filed by SELECT STAR, INC., Daejeon (KR)
Filed on Feb. 21, 2022, as Appl. No. 17/676,583.
Claims priority of application No. 10-2021-0004653 (KR), filed on Jan. 13, 2021; and application No. 10-2021-0022841 (KR), filed on Feb. 19, 2021.
Prior Publication US 2022/0270341 A1, Aug. 25, 2022
Int. Cl. G06V 10/22 (2022.01); G06V 10/26 (2022.01); G06V 10/82 (2022.01); G06V 20/40 (2022.01)
CPC G06V 10/235 (2022.01) [G06V 10/26 (2022.01); G06V 10/82 (2022.01); G06V 20/41 (2022.01)] 7 Claims
OG exemplary drawing
 
1. A method, which is executed in a computing system including at least one processor and at least one memory, of inputting annotation of object boundary information, the method comprising:
a bounding information input step of receiving, from a user, information on a bounding box inside a general image;
a first prediction control point extraction step of extracting a plurality of control points related to a predicted object boundary from a target image inside the bounding box by using a learned artificial neural network model;
a predicted control point display step of overlaying and displaying the predicted control points on the target image inside the bounding box in a form having reciprocal connection sequences; and
a change input reception step of receiving, from the user, a position change input for at least one of the control points, wherein when the target image is partially hidden by another image inside the bounding box, the method further comprises:
adjusting the control points to outermost points contouring a single boundary surrounding a specific shape containing the target image and the partially hidden region of the target image;
extracting feature map information corresponding to each of the outermost points of the specific shape;
determining dependencies between the outermost points based on the extracted feature map information using a pre-trained artificial neural network model;
identifying the partially hidden region by comparing a value representing a degree of dependency between the outermost points to a predefined threshold, wherein the value between the outermost points surrounding the hidden region of the target image is less than the threshold;
removing the partially hidden region from the specific shape by disconnecting connections between the outermost points surrounding the hidden region; and
inferring a boundary of the target image based on boundaries of identified regions of the specific shape segmented by removing the partially hidden region.
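The interactive flow recited in claim 1 can be illustrated in Python. The sketch below is a minimal, hypothetical stand-in, not the patented implementation: predict_control_points() samples an ellipse inscribed in the bounding box in place of the learned artificial neural network model, and apply_position_change() models the change input reception step; none of these names come from the patent.

```python
import numpy as np

def predict_control_points(bbox, n_points=16):
    """Stand-in for the learned model: returns n_points (x, y) control
    points whose reciprocal connection sequence is i -> (i + 1) % n_points."""
    x, y, w, h = bbox
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    cx, cy = x + w / 2.0, y + h / 2.0
    return np.stack([cx + (w / 2.0) * np.cos(t),
                     cy + (h / 2.0) * np.sin(t)], axis=1)

def apply_position_change(points, index, new_xy):
    """Change input reception step: move one control point to the
    position supplied by the user."""
    points = points.copy()
    points[index] = new_xy
    return points

# Bounding information input step: (x, y, width, height) from the user.
bbox = (40, 30, 200, 120)
points = predict_control_points(bbox)   # first prediction control point step
# The display step would overlay `points`, with edges (i, (i + 1) % n),
# on the target image inside the bounding box.
points = apply_position_change(points, 3, (95.0, 31.5))
print(points[:4])
```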
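The occlusion sub-steps first extract feature map information at each outermost point and then score dependencies between consecutive points. The sketch below is a minimal illustration under stated assumptions: a random H x W x C array stands in for the extracted feature map, and cosine similarity stands in for the pre-trained model's dependency value; extract_point_features() and dependency_scores() are hypothetical helpers.

```python
import numpy as np

def extract_point_features(feature_map, points):
    """Feature map information for each outermost point: sample the map
    at the nearest cell to each (x, y) point."""
    h, w = feature_map.shape[:2]
    return np.stack([feature_map[min(int(y), h - 1), min(int(x), w - 1)]
                     for x, y in points])

def dependency_scores(features):
    """Dependency between consecutive outermost points: cosine similarity
    of their feature vectors (stand-in for the pre-trained model)."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = len(unit)
    return np.array([unit[i] @ unit[(i + 1) % n] for i in range(n)])

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(64, 64, 16))      # H x W x C stand-in
points = [(8, 8), (32, 6), (56, 8), (58, 32), (56, 56),
          (32, 58), (8, 56), (6, 32)]            # outermost points
scores = dependency_scores(extract_point_features(feature_map, points))
threshold = 0.1
hidden_edges = np.where(scores < threshold)[0]   # edges to disconnect
print(hidden_edges)
```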
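The final sub-steps disconnect the connections whose dependency falls below the threshold and infer the target boundary from the remaining runs of points. In the minimal sketch below, the keep mask stands in for the thresholded dependency values of the previous sketch, and bridging each gap with a straight edge is one simple stand-in for the claimed boundary inference; split_boundary() and infer_boundary() are hypothetical names.

```python
def split_boundary(points, keep):
    """Disconnect edge i -> i+1 wherever keep[i] is False and return the
    remaining connected runs of outermost points (cyclic order)."""
    segments, run = [], []
    n = len(points)
    for i in range(n):
        run.append(points[i])
        if not keep[i]:              # hidden region: cut this connection
            segments.append(run)
            run = []
    if run:                          # last edge kept: run wraps onto the head
        if segments:
            segments[0] = run + segments[0]
        else:
            segments.append(run)     # no cuts at all: one closed boundary
    return segments

def infer_boundary(segments):
    """Infer the target boundary: keep each visible run and bridge every
    gap left by a removed hidden region with a straight edge."""
    return [p for seg in segments for p in seg]

points = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
keep = [True, True, False, True, True, False, True, True]  # two cut edges
print(infer_boundary(split_boundary(points, keep)))
```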