CPC G06V 10/82 (2022.01) [E21B 44/00 (2013.01); G06N 3/08 (2013.01); G06T 7/0004 (2013.01); G06T 7/74 (2017.01); G06V 10/7715 (2022.01); G06V 10/774 (2022.01); G06V 20/10 (2022.01); E21B 2200/22 (2020.05); G01N 15/1433 (2024.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/30181 (2013.01)]; 18 Claims
1. A method, comprising:
training, via an analysis and control system, a neural network model using a first set of photographs, wherein each photograph of the first set of photographs depicts a first set of objects and includes one or more annotations relating to each object of the first set of objects;
manually arranging a plurality of cuttings in a relatively sparse configuration on a tray having a relatively vivid background color;
generating a second set of photographs that depict the plurality of cuttings arranged in the relatively sparse configuration on the tray having the relatively vivid background color;
automatically creating, via the analysis and control system, mask images corresponding to the plurality of cuttings depicted by the second set of photographs;
enabling, via the analysis and control system, manual fine tuning of the mask images corresponding to the plurality of cuttings depicted by the second set of photographs;
re-training, via the analysis and control system, the neural network model using the second set of photographs, wherein the re-training is based at least in part on the manual fine tuning of the mask images corresponding to the plurality of cuttings depicted by the second set of photographs; and
identifying, via the analysis and control system, one or more individual cuttings in a third set of photographs using the re-trained neural network model.
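The claim recites automatically creating mask images from photographs of cuttings sparsely arranged on a tray with a vivid background color, but does not fix the mechanism. The sketch below assumes one plausible realization: the tray color is removed by simple per-channel thresholding, and, because the cuttings are sparse and non-touching, each remaining connected component yields one candidate mask. The function name, the numpy/scipy dependencies, and the threshold values are illustrative assumptions, not part of the claim.

```python
# Hypothetical illustration of the "automatically creating mask images" step.
# Assumes the vivid tray color can be separated by simple color thresholding,
# after which each sparse, non-touching cutting becomes one connected component.
import numpy as np
from scipy import ndimage


def create_cutting_masks(photo_rgb: np.ndarray, tray_color, tolerance=40):
    """Return one binary mask per cutting found in a photograph.

    photo_rgb : HxWx3 uint8 image of cuttings on a vividly colored tray.
    tray_color: (r, g, b) of the tray background (assumed, user-supplied value).
    tolerance : per-channel distance below which a pixel counts as background.
    """
    diff = np.abs(photo_rgb.astype(int) - np.asarray(tray_color, dtype=int))
    background = np.all(diff < tolerance, axis=-1)
    foreground = ~background                      # candidate cutting pixels
    labeled, num = ndimage.label(foreground)      # sparse layout -> one component per cutting
    masks = [(labeled == i).astype(np.uint8) for i in range(1, num + 1)]
    # Drop tiny specks that are likely sensor noise rather than cuttings.
    return [m for m in masks if m.sum() > 50]
```

Masks produced this way would then be presented for the manual fine tuning recited in the claim before the model is re-trained.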
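The overall train / re-train / identify workflow of claim 1 can likewise be sketched, here assuming an off-the-shelf instance-segmentation model (torchvision's Mask R-CNN) stands in for the claimed neural network model. The dataset class, class count, hyperparameters, and synthetic placeholder data are illustrative assumptions only, not the patented implementation.

```python
# Minimal sketch of the two-stage train / re-train / identify workflow, under
# the assumption that torchvision's Mask R-CNN plays the role of the claimed
# neural network model. All names and values are illustrative.
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.models.detection import maskrcnn_resnet50_fpn


class CuttingsDataset(Dataset):
    """Hypothetical dataset of (photograph, annotations) pairs, where the
    annotations hold per-object boxes, labels, and binary masks."""

    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]


def collate(batch):
    # Detection models take lists of images/targets rather than stacked tensors.
    return tuple(zip(*batch))


def train_stage(model, dataset, epochs=5, lr=1e-4):
    """One training (or re-training) stage over a set of photographs."""
    loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss = sum(model(list(images), list(targets)).values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model


def dummy_sample():
    # Tiny synthetic placeholder so the sketch runs end to end; real use would
    # load the annotated first set or the fine-tuned-mask second set instead.
    image = torch.rand(3, 128, 128)
    target = {
        "boxes": torch.tensor([[20.0, 20.0, 60.0, 60.0]]),
        "labels": torch.tensor([1], dtype=torch.int64),
        "masks": torch.zeros(1, 128, 128, dtype=torch.uint8),
    }
    target["masks"][0, 20:60, 20:60] = 1
    return image, target


model = maskrcnn_resnet50_fpn(num_classes=2)                  # background + "cutting"
model = train_stage(model, CuttingsDataset([dummy_sample()]))           # first set
model = train_stage(model, CuttingsDataset([dummy_sample()]), lr=1e-5)  # re-train on second set

model.eval()                                                  # identify cuttings in a third set
with torch.no_grad():
    detections = model([torch.rand(3, 128, 128)])
```

In this reading, the first call to train_stage corresponds to the initial training on the annotated first set of photographs, the second call to the re-training that incorporates the manually fine-tuned mask images, and the final inference pass to identifying individual cuttings in the third set of photographs.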