US 11,734,827 B2
User guided iterative frame and scene segmentation via network overtraining
Gary Bradski, Palo Alto, CA (US)
Assigned to Matterport, Inc., Sunnyvale, CA (US)
Filed by Matterport, Inc., Sunnyvale, CA (US)
Filed on May 11, 2021, as Appl. No. 17/317,755.
Application 17/317,755 is a continuation of application No. 16/411,739, filed on May 14, 2019, granted, now 11,004,203.
Prior Publication US 2021/0264609 A1, Aug. 26, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/11 (2017.01); G06T 11/20 (2006.01); G06T 7/187 (2017.01); G06N 3/08 (2023.01); G06F 3/0488 (2022.01); G06F 3/04883 (2022.01)
CPC G06T 7/11 (2017.01) [G06F 3/04883 (2013.01); G06N 3/08 (2013.01); G06T 7/187 (2017.01); G06T 11/203 (2013.01)] 22 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
selecting a frame from a scene;
generating a first frame segmentation using: (i) the frame; and (ii) a segmentation network;
displaying on a device: (i) the frame; and (ii) the first frame segmentation overlaid on the frame;
receiving a correction input directed to the frame;
training the segmentation network using the correction input;
iterating through the generating segmentation step, displaying segmentation step, receiving correction input step, and training the segmentation network step until a number of iterations reaches a predetermined threshold, the predetermined threshold being determined based on a statistical variation within the scene;
generating, after training the segmentation network using the correction input, a revised frame segmentation using: (i) the frame; and (ii) the segmentation network; and
displaying on the device: (i) the frame; and (ii) the revised frame segmentation overlaid on the frame.
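
The claim recites an iterative loop: segment a frame, display the overlay, accept a user correction, train (overtrain) the segmentation network on that correction, and repeat up to a predetermined iteration threshold before producing a revised segmentation. The following is a minimal sketch of that loop, not the patented implementation: the tiny PyTorch network, the synthetic frame, the overlay blend, and the simulated correction input are all hypothetical stand-ins introduced for illustration, and the fixed iteration budget merely stands in for the claimed threshold derived from statistical variation within the scene.

    # Minimal sketch of the claimed loop: generate a segmentation, display the
    # overlay, receive a correction, overtrain on it, and iterate to a threshold.
    # TinySegNet, overlay(), and simulate_user_correction() are illustrative
    # stand-ins, not the network or user interface described in the patent.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySegNet(nn.Module):
        """Stand-in segmentation network: 3-channel frame -> 1-channel mask logits."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),
            )

        def forward(self, frame):
            return self.body(frame)  # logits with the same spatial size as the frame

    def overlay(frame, mask_logits):
        """Blend the predicted mask over the frame (stand-in for on-device display)."""
        mask = torch.sigmoid(mask_logits)
        return 0.6 * frame + 0.4 * mask  # mask broadcasts over the RGB channels

    def simulate_user_correction(frame):
        """Hypothetical correction input: the user marks the bright region of the frame."""
        return (frame.mean(dim=1, keepdim=True) > 0.5).float()

    def run_session(frame, num_iterations=20):
        """num_iterations plays the role of the predetermined threshold, which per
        the claim could instead be set from statistical variation within the scene."""
        net = TinySegNet()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        for _ in range(num_iterations):
            logits = net(frame)                            # generate frame segmentation
            _ = overlay(frame, logits.detach())            # display overlay (not rendered here)
            correction = simulate_user_correction(frame)   # receive correction input

            # Train (overtrain) the network on the user's correction for this frame.
            loss = F.binary_cross_entropy_with_logits(logits, correction)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # After training, generate and "display" the revised frame segmentation.
        revised = net(frame)
        return overlay(frame, revised.detach())

    if __name__ == "__main__":
        frame = torch.rand(1, 3, 64, 64)  # placeholder frame selected from a scene
        final_overlay = run_session(frame)
        print("revised overlay shape:", tuple(final_overlay.shape))

Because the loop deliberately fits the network to corrections on a single frame, the resulting segmentation generalizes only within the scene being edited, which is consistent with the "overtraining" framing in the patent's title.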