US 12,449,819 B2
Relative position determination method for multiple unmanned aerial, marine and land vehicles
Guven Cetinkaya, Ankara (TR); and Yakup Genc, Kocaeli (TR)
Assigned to ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI, Kocaeli (TR)
Filed by ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI, Ankara (TR); and GEBZE TEKNIK UNIVERSITESI, Kocaeli (TR)
Filed on Apr. 4, 2024, as Appl. No. 18/626,493.
Claims priority of application No. 2023/003707 (TR), filed on Apr. 4, 2023.
Prior Publication US 2024/0338031 A1, Oct. 10, 2024
Int. Cl. G05D 1/24 (2024.01); G05D 1/686 (2024.01); G06T 7/20 (2017.01); G06T 7/73 (2017.01); G06V 10/82 (2022.01); G06V 20/56 (2022.01); G05D 101/15 (2024.01); G05D 111/10 (2024.01)
CPC G05D 1/24 (2024.01) [G05D 1/686 (2024.01); G06T 7/20 (2013.01); G06T 7/73 (2017.01); G06V 10/82 (2022.01); G06V 20/56 (2022.01); G05D 2101/15 (2024.01); G05D 2111/10 (2024.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01); G06V 2201/08 (2022.01)] 1 Claim
OG exemplary drawing
 
1. A relative position determination method performed by a processor for multiple unmanned aerial vehicles, based on a camera and direct observation in multiple unmanned aerial vehicle systems, the method comprising the steps of:
a. performing, by a target unmanned aerial vehicle (UAVT), a predefined movement at different relative poses;
b. collecting data, by an observer unmanned aerial vehicle (UAVO), by watching the target unmanned aerial vehicle (UAVT) through a camera mounted on the observer unmanned aerial vehicle (UAVO) and recording the trajectory on the image plane corresponding to the predefined movement of the target unmanned aerial vehicle (UAVT) at different relative poses, so that trajectory and ground-truth data are recorded for many different relative pose values;
c. performing, by the observer unmanned aerial vehicle (UAVO), deep learning model training using the collected data;
d. performing a predefined movement by the target (collaborator/friend) unmanned aerial vehicle (UAVT);
e. capturing an image, by the observer unmanned aerial vehicle (UAVO), by means of an image capture unit, and checking whether a target unmanned aerial vehicle (UAVT) is present in the captured image;
f. determining, by the observer unmanned aerial vehicle (UAVO), bounding box information, namely corner point, width and height information, if the target unmanned aerial vehicle (UAVT) is detected in the captured image;
g. returning to step e if the target unmanned aerial vehicle (UAVT) is not detected in the captured image;
h. tracking the target, by the observer unmanned aerial vehicle (UAVO), using image tracking algorithms and the bounding box information;
i. extracting features, by the observer unmanned aerial vehicle (UAVO), between consecutive frames using the bounding box information, preferably taking into account the relationship between the positions of the center point of the bounding box in the current and previous images;
j. calculating, by the observer unmanned aerial vehicle (UAVO), the 6-DOF relative position between the observer unmanned aerial vehicle (UAVO) and the target unmanned aerial vehicle (UAVT) by providing the extracted features as input to the deep learning model; and
k. detecting the relative position between the observer unmanned aerial vehicle (UAVO) and the target unmanned aerial vehicle (UAVT).
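
Steps e through g describe an acquire-and-detect loop: capture an image, check it for the target UAV, and loop back when nothing is found. A minimal sketch of that loop follows, in Python, assuming an OpenCV video source; the `detect_uav` callable is hypothetical and stands in for whatever trained detector the observer vehicle actually carries, which the claim does not name.

```python
import cv2  # assumes opencv-python; camera index and detector are illustrative

def acquire_target(capture, detect_uav):
    """Steps e-g: capture an image, check it for the target UAV, and
    loop back to capture again whenever no target is detected.
    `detect_uav(frame)` is a hypothetical callable returning the target's
    bounding box (x, y, w, h), or None when no target is present."""
    while True:
        ok, frame = capture.read()
        if not ok:
            continue            # frame grab failed; take another image
        bbox = detect_uav(frame)
        if bbox is not None:
            return frame, bbox  # corner point (x, y) plus width and height (step f)

# Example wiring (camera index 0 is an assumption):
#   frame, bbox = acquire_target(cv2.VideoCapture(0), detect_uav=my_detector)
```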
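Step h leaves the choice of image tracking algorithm open. As an illustration only, the sketch below seeds OpenCV's CSRT tracker with the detector's bounding box; `cv2.TrackerCSRT_create` is exposed by the opencv-contrib-python build (in some builds it lives under `cv2.legacy`), and any other image tracker could be substituted.

```python
import cv2  # CSRT requires the opencv-contrib-python build

def track_target(capture, first_frame, first_bbox):
    """Step h: follow the target from frame to frame starting from the
    detector's bounding box, yielding one (frame, bbox) pair per image."""
    tracker = cv2.TrackerCSRT_create()            # illustrative choice of tracker
    tracker.init(first_frame, tuple(first_bbox))  # bbox given as (x, y, w, h)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        ok, bbox = tracker.update(frame)
        if not ok:
            break  # track lost; a real system would fall back to re-detection
        yield frame, bbox
```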
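Step i builds features from the bounding boxes in consecutive images, in particular from the position of the box center. One possible feature set, sketched here with NumPy, is the current center, its displacement since the previous frame, and the change in box width and height; this exact choice is an assumption, not taken from the claim.

```python
import numpy as np

def bbox_center(bbox):
    """Center point of a bounding box given as (x, y, w, h)."""
    x, y, w, h = bbox
    return np.array([x + w / 2.0, y + h / 2.0])

def extract_features(prev_bbox, curr_bbox):
    """Step i: features relating consecutive bounding boxes: the current
    center, its displacement from the previous image, and the change in
    box size (one possible choice of feature vector)."""
    c_prev, c_curr = bbox_center(prev_bbox), bbox_center(curr_bbox)
    dw = curr_bbox[2] - prev_bbox[2]
    dh = curr_bbox[3] - prev_bbox[3]
    return np.concatenate([c_curr, c_curr - c_prev, [dw, dh]]).astype(np.float32)
```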
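Steps c and j require a deep learning model that maps the extracted image-plane features to a 6-DOF relative pose, trained on the trajectory and ground-truth data collected in step b. The claim does not specify an architecture; the sketch below uses a small PyTorch multilayer perceptron with illustrative layer sizes and a 6-element input matching the feature-extraction sketch above.

```python
import torch
import torch.nn as nn

# Small regression network: feature vector in, 6-DOF relative pose out
# (three translation and three rotation components). Sizes are illustrative.
model = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 6),
)

def train(model, features, poses, epochs=100, lr=1e-3):
    """Step c: supervised training on the step-b data, where `features`
    holds image-plane trajectory features (N x 6 tensor) and `poses`
    the corresponding ground-truth relative poses (N x 6 tensor)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), poses)
        loss.backward()
        opt.step()
    return model

def relative_pose(model, feature_vector):
    """Step j: 6-DOF relative pose predicted from one feature vector."""
    with torch.no_grad():
        return model(torch.as_tensor(feature_vector).float().unsqueeze(0)).squeeze(0)
```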