US 12,118,777 B2
Method and device for situation awareness
Byeol Teo Park, Daejeon (KR); Han Keun Kim, Hwaseong-si (KR); and Dong Hoon Kim, Daejeon (KR)
Assigned to Seadronix Corp., Ulsan (KR)
Filed by Seadronix Corp., Ulsan (KR)
Filed on Aug. 2, 2023, as Appl. No. 18/364,281.
Application 18/364,281 is a continuation of application No. 17/976,296, filed on Oct. 28, 2022, granted, now 11,776,250.
Application 17/976,296 is a continuation of application No. 17/010,177, filed on Sep. 2, 2020, granted, now 11,514,668, issued on Nov. 29, 2022.
Application 17/010,177 is a continuation-in-part of application No. 16/557,859, filed on Aug. 30, 2019, granted, now 10,803,360, issued on Oct. 13, 2020.
Claims priority of provisional application 62/741,394, filed on Oct. 4, 2018.
Claims priority of provisional application 62/726,913, filed on Sep. 4, 2018.
Claims priority of application No. 10-2018-0165857 (KR), filed on Dec. 20, 2018; application No. 10-2018-0165858 (KR), filed on Dec. 20, 2018; and application No. 10-2018-0165859 (KR), filed on Dec. 20, 2018.
Prior Publication US 2023/0377326 A1, Nov. 23, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/00 (2022.01); B63B 79/15 (2020.01); G06F 18/21 (2023.01); G06F 18/2431 (2023.01); G06N 3/08 (2023.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G08G 3/02 (2006.01); G06T 7/11 (2017.01)
CPC G06V 20/00 (2022.01) [B63B 79/15 (2020.01); G06F 18/21 (2023.01); G06F 18/2431 (2023.01); G06N 3/08 (2013.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G08G 3/02 (2013.01); G06T 7/11 (2017.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30252 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for generating a segmentation image, the method comprising:
obtaining a target maritime image generated from a camera, wherein the camera is installed on a port or a vessel, and the target maritime image represents a target object; and
generating a target segmentation image using the target maritime image and a neural network,
wherein the neural network is trained to output a segmentation image in response to an input image,
wherein, in the segmentation image, pixels in an area corresponding to an object represented in the input image are assigned a value corresponding to the object,
wherein the value is selected from identification values including at least a first value, a second value, and a third value,
wherein the first value corresponds to a water surface, the second value corresponds to a first object at a first distance range, and the third value corresponds to the first object at a second distance range different from the first distance range,
wherein the first distance range is closer than the second distance range,
wherein, in the segmentation image, different identifiers are assigned to each of the objects corresponding to the second value.
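
The claim describes a pixel-labeling scheme in which a single segmentation image encodes object type, distance range, and per-object instance identifiers. The sketch below is a minimal illustration of that scheme, not the patented implementation: it assumes the network's output has already been converted into a per-pixel semantic class map and a per-pixel distance estimate, and the numeric label values, semantic class codes, and the 10 m near/far threshold are hypothetical choices introduced here for illustration only.

```python
# Illustrative sketch only; NOT the patented implementation.
# Assumes an upstream neural network has already produced a per-pixel
# semantic class map and a per-pixel distance estimate (both hypothetical).
import numpy as np
from scipy import ndimage

# Identification values following the claim's scheme (values are arbitrary)
WATER = 1        # first value: water surface
VESSEL_NEAR = 2  # second value: first object (e.g., a vessel) in the closer distance range
VESSEL_FAR = 3   # third value: same object type in the farther distance range

# Assumed semantic codes in the upstream class map (hypothetical)
SEM_WATER, SEM_VESSEL = 1, 2

def build_segmentation(class_map, distance_map, near_limit_m=10.0):
    """Compose a segmentation image whose pixel values encode both object
    type and distance range, then give each near-range object its own
    instance identifier via connected-component labeling."""
    seg = np.zeros(class_map.shape, dtype=np.int32)
    seg[class_map == SEM_WATER] = WATER

    is_vessel = class_map == SEM_VESSEL
    seg[is_vessel & (distance_map <= near_limit_m)] = VESSEL_NEAR
    seg[is_vessel & (distance_map > near_limit_m)] = VESSEL_FAR

    # Distinct identifiers for each object carrying the second value
    instance_ids, num_objects = ndimage.label(seg == VESSEL_NEAR)
    return seg, instance_ids, num_objects

if __name__ == "__main__":
    # Toy 4x6 example: water everywhere, one near vessel and one far vessel
    class_map = np.full((4, 6), SEM_WATER)
    class_map[1:3, 0:2] = SEM_VESSEL   # near vessel
    class_map[0:1, 4:6] = SEM_VESSEL   # far vessel
    distance_map = np.full((4, 6), 5.0)
    distance_map[0:1, 4:6] = 50.0
    seg, ids, n = build_segmentation(class_map, distance_map)
    print(seg)
    print(ids, "objects with the second value:", n)
```

Running the toy example prints a label image using the values 1 through 3 and a separate instance map that distinguishes each near-range object, mirroring the claim's assignment of different identifiers to objects corresponding to the second value.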