US 11,915,477 B2
Video processing system and video processing method using split learning
Joongheon Kim, Seoul (KR); Yoo Jeong Ha, Seoul (KR); Minjae Yoo, Seongnam-si (KR); and SooHyun Park, Incheon (KR)
Assigned to Korea University Research and Business Foundation, Seoul (KR)
Filed by Korea University Research and Business Foundation, Seoul (KR)
Filed on Apr. 14, 2022, as Appl. No. 17/720,735.
Claims priority of application No. 10-2021-0151177 (KR), filed on Nov. 5, 2021.
Prior Publication US 2023/0146260 A1, May 11, 2023
Int. Cl. G06V 20/17 (2022.01); B64C 39/02 (2023.01); G06V 20/52 (2022.01); G06N 3/045 (2023.01); B64U 101/30 (2023.01)
CPC G06V 20/17 (2022.01) [B64C 39/024 (2013.01); G06N 3/045 (2023.01); G06V 20/52 (2022.01); B64U 2101/30 (2023.01); B64U 2201/20 (2023.01)] 7 Claims
OG exemplary drawing
 
1. A video processing system comprising:
multiple unmanned aerial vehicles (UAVs) configured to capture a video of a fire site,
wherein each UAV has a control unit including an input layer and a first hidden layer; and
a central server connected to the multiple UAVs by wireless communication,
wherein the control unit of each UAV separately comprises the input layer and the first hidden layer of a deep neural network, and a central control unit of the central server separately comprises multiple hidden layers and an output layer of the deep neural network,
wherein the deep neural network is composed of the input layer, the first hidden layer, the multiple hidden layers, and the output layer, and
wherein the control unit of each UAV is configured to extract a feature map obtained by distorting the video of the fire site by inputting pixel information of the video to the input layer and executing a convolution operation using a filter via the first hidden layer.
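The split described in the claim can be sketched in plain NumPy: the UAV-side client holds only the input layer and the first (convolutional) hidden layer and transmits the resulting feature map, while the server holds the remaining hidden layers and the output layer. This is a minimal illustrative sketch, not the patented implementation: the filter size, hidden width, class count, and frame size below are assumptions chosen for brevity, and the class names `UAVClient` and `CentralServer` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class UAVClient:
    """UAV-side portion of the split network: the input layer plus the
    first hidden layer (a single 3x3 convolution filter, an assumption
    made for illustration)."""
    def __init__(self):
        self.filt = rng.standard_normal((3, 3))

    def extract_feature_map(self, frame):
        """Apply a valid 2-D convolution to the frame's pixel values.
        Only this distorted feature map, never the raw video, would be
        sent over the wireless link to the central server."""
        h, w = frame.shape
        fh, fw = self.filt.shape
        out = np.empty((h - fh + 1, w - fw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(frame[i:i + fh, j:j + fw] * self.filt)
        return relu(out)

class CentralServer:
    """Server-side portion of the split network: the remaining hidden
    layers (here, one fully connected layer) plus the output layer."""
    def __init__(self, in_dim, hidden=16, classes=2):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, classes)) * 0.1

    def forward(self, fmap):
        x = fmap.ravel()            # flatten the received feature map
        h = relu(x @ self.w1)       # remaining hidden layer(s)
        return h @ self.w2          # output layer logits

frame = rng.random((8, 8))                 # stand-in for one frame's pixels
uav = UAVClient()
fmap = uav.extract_feature_map(frame)      # computed on the UAV: 6x6 map
server = CentralServer(in_dim=fmap.size)
logits = server.forward(fmap)              # server-side classification scores
```

Because each UAV transmits only the convolved feature map, the raw footage of the fire site never leaves the vehicle, which is the privacy rationale usually given for split learning with multiple clients sharing one server-side model.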