US 11,809,523 B2
Annotating unlabeled images using convolutional neural networks
Dimitris Kastaniotis, Athens (GR); Christos Theocharatos, Patras (GR); and Vassilis Tsagaris, Logos Egio (GR)
Assigned to IRIDA LABS S.A., Patras (GR)
Filed by IRIDA LABS S.A., Patras (GR)
Filed on Feb. 18, 2021, as Appl. No. 17/178,717.
Prior Publication US 2022/0261599 A1, Aug. 18, 2022
Int. Cl. G06T 7/00 (2017.01); G06F 18/214 (2023.01); G06T 7/11 (2017.01); G06N 3/04 (2023.01); G06N 3/088 (2023.01); G06F 18/24 (2023.01)
CPC G06F 18/2155 (2023.01) [G06F 18/24 (2023.01); G06N 3/04 (2013.01); G06N 3/088 (2013.01); G06T 7/11 (2017.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01)] 6 Claims
OG exemplary drawing
 
1. A method for learning to generate bounding boxes and segmentation masks from categorically labeled images comprising:
collecting two or more images from two or more categories that are fed, as input, into a process pipeline, and for each of said images:
localizing boundaries of objects within the images in an unsupervised manner by utilizing a deep Convolutional Neural Network (CNN) classification model with a global average pooling layer, configured to generate soft object proposals and configured to generate weak binary masks around the objects by applying a threshold on activation maps of the classification CNN;
using the threshold to define a segmentation mask and assign pixels to object/non-object categories;
modeling a distribution of object/non-object pixels represented as vectors learnt from the classification CNN;
using the modeled distribution and a threshold to assign pixels to object/non-object categories and extract segmentation masks;
training a segmentation CNN model on extracted coarse segmentation masks, thereby determining finer object boundaries;
generating novel annotated images by arbitrarily blending segmented objects with other background images;
generating bounding boxes by fitting a rectangle on the fine segmentation masks; and
outputting the annotated images.
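
Illustrative sketches of the claimed steps follow. The localizing step of the claim matches the class-activation-map (CAM) reading of a classifier with global average pooling: weight the last convolutional activation maps by the classifier weights of the predicted category, normalize, and threshold to obtain a weak binary mask. The sketch below is a minimal, non-authoritative rendering of that idea; the ResNet-18 backbone, the helper names, and the 0.4 threshold are assumptions, not details of the patented model.

```python
# Illustrative sketch only: approximates the "localizing" and "weak binary mask"
# steps with a standard CAM computation. The ResNet-18 backbone, helper names,
# and threshold value are assumptions, not the patented model.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def hook(module, inp, out):
    features["maps"] = out  # last conv activations, shape (1, 512, H/32, W/32)

model.layer4.register_forward_hook(hook)

def weak_binary_mask(image, threshold=0.4):
    """Return a coarse object/non-object mask from the classifier's activation maps."""
    with torch.no_grad():
        logits = model(image)                      # image: (1, 3, H, W), normalized
        cls = logits.argmax(dim=1).item()          # predicted category
        fmap = features["maps"]
        weights = model.fc.weight[cls]             # (512,) weights after global average pooling
        cam = F.relu(torch.einsum("c,bchw->bhw", weights, fmap))
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        cam = F.interpolate(cam[None], size=image.shape[-2:], mode="bilinear",
                            align_corners=False)[0, 0]
    return cam > threshold                         # weak binary mask from the soft proposal
```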
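The two distribution-modeling steps can be read as fitting density models over per-pixel feature vectors taken from the classification CNN and re-assigning pixels by likelihood. A minimal sketch with scikit-learn Gaussian mixtures is given below; the two-component mixtures, the feature source, and the likelihood margin are illustrative assumptions.

```python
# Illustrative sketch: model object / non-object pixel distributions with two
# Gaussian mixtures fitted on per-pixel CNN feature vectors, then re-assign
# pixels by comparing likelihoods against a margin. All specifics are assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_mask(pixel_vectors, weak_mask, margin=0.0):
    """pixel_vectors: (H*W, D) features from the classification CNN.
    weak_mask: (H*W,) boolean mask from the thresholded activation maps."""
    obj = pixel_vectors[weak_mask]
    bg = pixel_vectors[~weak_mask]

    gmm_obj = GaussianMixture(n_components=2, covariance_type="diag").fit(obj)
    gmm_bg = GaussianMixture(n_components=2, covariance_type="diag").fit(bg)

    # Log-likelihood of every pixel under each modeled distribution.
    ll_obj = gmm_obj.score_samples(pixel_vectors)
    ll_bg = gmm_bg.score_samples(pixel_vectors)

    # Thresholding the likelihood ratio assigns pixels to object / non-object.
    return (ll_obj - ll_bg) > margin
```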
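A segmentation CNN trained on the extracted coarse masks can then recover finer object boundaries. Below is a hedged sketch of such a training loop, using a torchvision FCN head and binary cross-entropy; the architecture, optimizer, and loss are placeholders, since the claim does not fix them.

```python
# Illustrative training loop: fit a segmentation CNN to the coarse masks so it
# learns finer object boundaries. Architecture, loss, and hyper-parameters are
# assumptions; the claim leaves them unspecified.
import torch
from torchvision.models.segmentation import fcn_resnet50

seg_model = fcn_resnet50(num_classes=1)            # single object/non-object channel
optimizer = torch.optim.Adam(seg_model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()

def train_on_coarse_masks(loader, epochs=10):
    """loader yields (images, coarse_masks) with masks shaped (B, 1, H, W)."""
    seg_model.train()
    for _ in range(epochs):
        for images, coarse_masks in loader:
            logits = seg_model(images)["out"]      # (B, 1, H, W) segmentation logits
            loss = criterion(logits, coarse_masks.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```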
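Finally, novel annotated images and bounding boxes can be produced by compositing segmented objects onto arbitrary backgrounds and taking the tight rectangle around each fine mask. The short NumPy sketch below is one straightforward reading of these last steps, not an exact reproduction of the patented pipeline.

```python
# Illustrative sketch of the final steps: paste a segmented object onto an
# arbitrary background to create a new annotated image, and fit a bounding box
# as the tight rectangle enclosing the fine segmentation mask.
import numpy as np

def blend_object(object_img, fine_mask, background_img):
    """object_img, background_img: (H, W, 3) uint8 arrays; fine_mask: (H, W) bool."""
    composite = background_img.copy()
    composite[fine_mask] = object_img[fine_mask]   # copy object pixels onto the background
    return composite

def fit_bounding_box(fine_mask):
    """Return (x_min, y_min, x_max, y_max) of the rectangle enclosing the mask."""
    ys, xs = np.nonzero(fine_mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```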