CPC G06V 10/764 (2022.01) [G06V 10/32 (2022.01); G06V 10/7715 (2022.01); G06V 10/774 (2022.01); G06V 10/776 (2022.01); G06V 20/194 (2022.01)] | 7 Claims |
1. A hyperspectral image classification method based on context-rich networks, comprising the following steps:
step 1, pre-processing a hyperspectral image;
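The claim does not specify the pre-processing in step 1; a common choice for hyperspectral cubes is band-wise min-max normalization, sketched below as an assumption (the function name and the (H, W, B) layout are illustrative, not from the claim).

```python
import numpy as np

def preprocess_hsi(cube):
    """Band-wise min-max normalization of an HSI cube of shape (H, W, B)
    to the range [0, 1]. Assumed pre-processing; the claim leaves step 1
    unspecified."""
    cube = cube.astype(np.float32)
    flat = cube.reshape(-1, cube.shape[-1])       # (H*W, B)
    mins = flat.min(axis=0)                       # per-band minimum
    maxs = flat.max(axis=0)                       # per-band maximum
    return (cube - mins) / (maxs - mins + 1e-8)   # eps guards constant bands
```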
step 2, selecting a training set, and generating a ground truth label map with the same width and height as the image in step 1, whose pixel values are category IDs, and setting pixels at positions not selected in the label map to a background value that is ignored in a subsequent calculation of a loss function;
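Step 2 can be sketched as follows; the ignore value 255 and the sample format are assumed conventions, not specified by the claim.

```python
import numpy as np

IGNORE = 255  # assumed background value; skipped later by the loss

def make_label_map(shape, samples):
    """Build an (H, W) ground-truth label map for step 2.

    `samples` is an iterable of (row, col, class_id) training picks;
    every unselected pixel keeps the background value IGNORE.
    """
    label_map = np.full(shape, IGNORE, dtype=np.int64)
    for r, c, cls in samples:
        label_map[r, c] = cls
    return label_map
```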
step 3, constructing a context-rich network, wherein an overall structure of the network is divided into three parts: a feature extraction module, a context-rich information capturing module, and a classification module;
wherein the feature extraction module is used to process the input pre-processed hyperspectral image to obtain a feature X;
a specific process of the context-rich information capturing module is: the feature X obtained by the feature extraction module is meshed into grids of different sizes in different paths in parallel; in each path, intra-feature relationship extraction is performed within each grid to complete a spatial context information aggregation, wherein a PAM module (parallel attention module) is used to realize the spatial context information extraction within the grids; after the spatial context information extraction is completed in each path, a feature set is obtained; then, a scale context-aware module is used to extract a contextual relationship between the features across scales, obtaining features that have both spatial and scale contexts; finally, these features are input into the classification module;
the classification module is used to predict a classification map;
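The per-grid attention step above can be illustrated with a toy stand-in: the feature map is partitioned into g x g cells, and pixel-wise self-attention is applied inside each cell. The real PAM and the scale context-aware module are not specified in detail by the claim, so this is an illustrative sketch only (function names and the (C, H, W) layout are assumptions).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grid_attention(feat, g):
    """Toy stand-in for one path of the context-rich module: self-attention
    among the pixels of each g x g grid cell of a (C, H, W) feature map.
    Assumes H and W are divisible by g."""
    C, H, W = feat.shape
    gh, gw = H // g, W // g
    out = np.empty_like(feat)
    for i in range(g):
        for j in range(g):
            cell = feat[:, i*gh:(i+1)*gh, j*gw:(j+1)*gw].reshape(C, -1)  # (C, N)
            attn = softmax(cell.T @ cell, axis=-1)   # (N, N) pixel affinities
            agg = cell @ attn.T                      # aggregate context per pixel
            out[:, i*gh:(i+1)*gh, j*gw:(j+1)*gw] = agg.reshape(C, gh, gw)
    return out
```

Running several such paths with different g and combining their outputs yields the feature set on which the scale context-aware module would then operate.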
step 4, training the context-rich network with the training set to achieve convergence of the loss function;
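The loss function in step 4 must skip the background value set in step 2. A minimal sketch of such a masked pixel-wise cross-entropy is below, assuming an ignore value of 255 and a (K, H, W) score layout (both illustrative, not from the claim).

```python
import numpy as np

IGNORE = 255  # assumed background value from step 2

def masked_cross_entropy(logits, labels):
    """Pixel-wise cross-entropy that ignores background pixels.

    logits: (K, H, W) class scores; labels: (H, W) integer class IDs,
    with IGNORE marking pixels excluded from the loss.
    """
    K = logits.shape[0]
    z = logits.reshape(K, -1).T            # (N, K)
    y = labels.reshape(-1)                 # (N,)
    mask = y != IGNORE
    if not mask.any():
        return 0.0                         # nothing selected contributes
    z = z[mask]
    z = z - z.max(axis=1, keepdims=True)   # numerically stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(mask.sum()), y[mask]].mean())
```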
step 5, completing the hyperspectral image classification by pre-processing the image to be classified as in step 1 and inputting it into the trained context-rich network.
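End to end, step 5 reduces to: pre-process, run the trained network, and take the per-pixel argmax of the class scores as the classification map. A sketch, where `network` and `preprocess` stand in for the trained model and the (unspecified) step-1 routine:

```python
import numpy as np

def classify(cube, network, preprocess):
    """Step 5 sketch: pre-process the cube as in step 1, run the trained
    context-rich network (assumed to map an input cube to (K, H, W) class
    scores), and take the per-pixel argmax as the predicted map."""
    scores = network(preprocess(cube))
    return scores.argmax(axis=0)           # (H, W) classification map
```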