CPC G06V 20/695 (2022.01) [G06F 18/214 (2023.01); G06N 3/08 (2013.01); G06T 3/4053 (2013.01); G06V 10/235 (2022.01); G06V 20/693 (2022.01); G01N 23/04 (2013.01)]
15 Claims

1. A charged particle microscope support apparatus, comprising:
first logic to cause a charged particle microscope to generate a single image of a first portion of a specimen;
second logic to generate a first mask based on one or more regions-of-interest provided by user annotation of the single image; and
third logic to train a machine-learning computational model using the single image and the one or more regions-of-interest;
wherein:
the first logic is to cause the charged particle microscope to generate a plurality of images of a corresponding plurality of additional portions of the specimen; and
the second logic is to, after the machine-learning computational model is trained using the single image and the one or more regions-of-interest, generate a plurality of masks based on the corresponding images of the additional portions of the specimen using the machine-learning computational model without retraining.
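The following is a minimal sketch, not the claimed apparatus, of the workflow recited in claim 1: a pixel-wise classifier is trained from a single annotated image and its region-of-interest mask, then reused without retraining to generate masks for additional image portions. The choice of scikit-learn's RandomForestClassifier, the intensity/local-mean features, and all function names below are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image: np.ndarray, window: int = 5) -> np.ndarray:
    # Per-pixel intensity plus a local mean as a very simple feature vector.
    local_mean = uniform_filter(image.astype(np.float32), size=window)
    return np.stack([image.ravel(), local_mean.ravel()], axis=1)

def train_from_single_image(image: np.ndarray, roi_mask: np.ndarray) -> RandomForestClassifier:
    # Train on one image; roi_mask marks user-annotated regions-of-interest (1) vs background (0).
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(pixel_features(image), roi_mask.ravel())
    return model

def generate_mask(model: RandomForestClassifier, image: np.ndarray) -> np.ndarray:
    # Apply the already-trained model to a new image portion; no retraining occurs.
    return model.predict(pixel_features(image)).reshape(image.shape)

# Usage: one annotated image trains the model; masks for additional portions follow.
first_image = np.random.rand(64, 64)
annotation = (first_image > 0.7).astype(np.uint8)   # stand-in for the user annotation
model = train_from_single_image(first_image, annotation)
additional_portions = [np.random.rand(64, 64) for _ in range(3)]
masks = [generate_mask(model, img) for img in additional_portions]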
9. A charged particle microscope support apparatus, comprising:
first logic to cause a charged particle microscope to generate a single image of a first portion of a specimen;
second logic to generate a first mask based on one or more regions-of-interest indicated by user annotation of the single image, wherein the regions-of-interest include a feature-of-interest in the specimen; and
third logic to train a machine-learning computational model using the single image and the one or more regions-of-interest;
wherein:
the first logic is to cause the charged particle microscope to generate an image of a second portion of the specimen, wherein the second portion of the specimen is proximate to the first portion of the specimen; and
the second logic is to, after the machine-learning computational model is trained using the single image and the one or more regions-of-interest, generate a second mask based on the image of the second portion of the specimen using the machine-learning computational model, wherein the second mask indicates, for imaging, regions of the second portion of the specimen that include the feature-of-interest and regions of the second portion of the specimen that do not include the feature-of-interest.
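As a companion to claim 9, the sketch below illustrates one way a predicted mask for a proximate second portion could drive acquisition: tiles whose mask coverage exceeds a threshold are flagged as containing the feature-of-interest and selected for imaging, while the remaining tiles are skipped. The tile size, threshold, and function name are illustrative assumptions, not elements of the claims.

import numpy as np

def tiles_to_image(mask: np.ndarray, tile: int = 16, min_fraction: float = 0.05):
    # Return (row, col) tile indices whose feature-of-interest coverage exceeds min_fraction.
    rows, cols = mask.shape
    selected = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            patch = mask[r:r + tile, c:c + tile]
            if patch.mean() >= min_fraction:      # fraction of feature-of-interest pixels in the tile
                selected.append((r // tile, c // tile))
    return selected

# Usage: the second mask comes from applying the trained model to the second portion.
second_mask = np.zeros((64, 64), dtype=np.uint8)
second_mask[10:30, 20:40] = 1                     # stand-in for predicted feature-of-interest pixels
print(tiles_to_image(second_mask))                # tiles containing the feature are imaged; others skipped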