US 12,482,106 B2
Method and electronic device for segmenting objects in scene
Biplab Chandra Das, Bangalore (IN); Kiran Nanjunda Iyer, Bangalore (IN); Shouvik Das, Kolkata (IN); and Himadri Sekhar Bandyopadhyay, Kolkata (IN)
Assigned to SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
Filed by SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
Filed on Nov. 8, 2022, as Appl. No. 17/983,119.
Application 17/983,119 is a continuation of application No. PCT/KR2022/017489, filed on Nov. 8, 2022.
Claims priority of application No. 202141051046 (IN), filed on Oct. 21, 2021; and application No. 202141051046 (IN), filed on Nov. 8, 2021.
Prior Publication US 2023/0131589 A1, Apr. 27, 2023
Int. Cl. G06T 7/11 (2017.01); G06T 7/73 (2017.01); G06V 10/77 (2022.01); G06V 10/80 (2022.01); G06V 10/82 (2022.01); G06V 20/70 (2022.01)
CPC G06T 7/11 (2017.01) [G06T 7/73 (2017.01); G06V 10/7715 (2022.01); G06V 10/806 (2022.01); G06V 10/82 (2022.01); G06V 20/70 (2022.01); G06T 2207/20016 (2013.01); G06T 2207/20084 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method for segmenting objects in a scene by an electronic device, the method comprising:
inputting at least one input frame of the scene into a pre-trained neural network model, the scene comprising a plurality of objects;
determining a position and a shape of each object of the plurality of objects in the scene using the pre-trained neural network model;
determining an array of coefficients for pixels associated with each object of the plurality of objects in the scene using the pre-trained neural network model; and
generating a segment mask for each object of the plurality of objects based on the position, the shape, and the array of coefficients for each object of the plurality of objects in the scene,
wherein the generating the segment mask for each object of the plurality of objects comprises:
obtaining semantically aware center maps and shape aware prototype masks associated with each object of the plurality of objects in the scene,
determining a linear combination of the semantically aware center maps and the shape aware prototype masks weighted by corresponding coefficients of the array of coefficients at each center location, and
generating the segment mask for each object of the plurality of objects based on the linear combination of the semantically aware center maps and the shape aware prototype masks.
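
The mask-assembly step recited in the final limitation corresponds to a prototype-based instance segmentation scheme: per-object coefficients read at each predicted center weight a shared bank of shape aware prototype masks, modulated by that object's semantically aware center map. Below is a minimal NumPy sketch of one plausible reading of that step; the function name, array layouts, sigmoid squashing, and 0.5 binarization threshold are illustrative assumptions and are not specified in the claim.

    import numpy as np

    def assemble_segment_masks(center_maps, prototype_masks, coeff_maps, centers):
        """
        center_maps:     (N, H, W) semantically aware center map per detected object (assumed layout)
        prototype_masks: (K, H, W) shape aware prototype masks shared across the scene
        coeff_maps:      (K, H, W) dense array of coefficients predicted per pixel
        centers:         (N, 2)   (row, col) center location of each detected object
        Returns (N, H, W) binary segment masks, one per object.
        """
        n, h, w = center_maps.shape
        masks = np.empty((n, h, w), dtype=np.uint8)
        for i, (cy, cx) in enumerate(centers):
            # Read the K coefficients of the array at this object's center location
            coeffs = coeff_maps[:, cy, cx]                       # (K,)
            # Linear combination of the prototype masks weighted by those coefficients
            combined = np.tensordot(coeffs, prototype_masks, 1)  # (H, W)
            # Modulate with the object's center map and squash to [0, 1]
            logits = combined * center_maps[i]
            probs = 1.0 / (1.0 + np.exp(-logits))
            # Binarize to obtain the per-object segment mask (threshold is an assumption)
            masks[i] = (probs > 0.5).astype(np.uint8)
        return masks

In this reading, evaluating the coefficient array only at each object's center location separates where an instance is (the center map) from what shape it takes (the weighted combination of prototypes), which is presumably what allows a single set of prototypes to serve every object in the scene.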