US 12,340,306 B2
Method for training neural network for drone-based object detection
Gang Seok Son, Seoul (KR); and Seung On Bang, Seoul (KR)
Assigned to GYNETWORKS CO., LTD., Incheon (KR)
Filed by GYNETWORKS CO., LTD., Incheon (KR)
Filed on Dec. 29, 2021, as Appl. No. 17/564,407.
Claims priority of application No. 10-2020-0186340 (KR), filed on Dec. 29, 2020.
Prior Publication US 2022/0207363 A1, Jun. 30, 2022
Int. Cl. G06N 3/08 (2023.01); G06V 10/77 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)
CPC G06N 3/08 (2013.01) [G06V 10/7715 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)] 4 Claims
OG exemplary drawing
 
1. A method for training a neural network for object detection based on deep-learning, the method comprising:
receiving a detection target image;
splitting the detection target image into unit images having a predetermined size;
generating a first deformed image by deforming the unit images according to a first rule;
generating a second deformed image by deforming the unit images according to a second rule;
defining an output of the neural network for the second deformed image as a first label value; and
training the neural network by using a loss calculated between an output of the neural network for the first deformed image and the first label value,
wherein the unit images include pixels that influence tracking of the object, each pixel being an output of a heat map generated by multiplying a feature map extracted from each channel of a convolution layer of the neural network by a corresponding weight obtained from an output of a fully connected layer of the neural network,
wherein a deformation degree of the unit images, quantitatively defined by each rule, is higher for the first rule than for the second rule, and
wherein the first label value is defined as a binary value based on a threshold.
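The claim describes a consistency-style training loop: the output for a weakly deformed view (second rule) is thresholded into a binary label, which then supervises the prediction on a strongly deformed view (first rule), with unit images selected via a CAM-style heat map (channel feature maps weighted by fully-connected-layer weights). The patent gives no implementation; the sketch below is an illustrative, hypothetical reading of the claim. All function names, the noise-based deformation, and the stand-in model are assumptions, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_into_units(image, unit=8):
    """Split the detection target image into unit images of a predetermined size."""
    h, w = image.shape
    return [image[i:i + unit, j:j + unit]
            for i in range(0, h, unit) for j in range(0, w, unit)]

def deform(unit_img, degree, rng):
    """Deform a unit image; `degree` quantitatively defines the deformation
    strength (here modeled, as an assumption, by additive Gaussian noise)."""
    noise = rng.normal(0.0, degree, unit_img.shape)
    return np.clip(unit_img + noise, 0.0, 1.0)

def cam_heat_map(feature_maps, fc_weights):
    """CAM-style heat map: each channel's feature map (C, H, W) is multiplied
    by the corresponding fully-connected-layer weight (C,) and summed."""
    return np.tensordot(fc_weights, feature_maps, axes=([0], [0]))

def model_output(unit_img):
    """Stand-in for the neural network: mean activation as a detection score."""
    return float(unit_img.mean())

image = rng.random((32, 32))
units = split_into_units(image)

threshold = 0.5
losses = []
for u in units:
    strong = deform(u, degree=0.5, rng=rng)  # first rule: higher deformation degree
    weak = deform(u, degree=0.1, rng=rng)    # second rule: lower deformation degree
    # The network output for the weakly deformed view, thresholded to a
    # binary value, defines the first label value.
    label = 1.0 if model_output(weak) >= threshold else 0.0
    # Loss between the output for the strongly deformed view and that label.
    losses.append((model_output(strong) - label) ** 2)

mean_loss = sum(losses) / len(losses)
```

In this reading, the second (weak) rule acts as a pseudo-label generator and the first (strong) rule as the training input, so the network is pushed toward deformation-invariant detections; the squared-error loss is a placeholder for whatever loss the actual training uses.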