US 12,251,232 B2
Multi-label electrocardiogram (ECG) signal classification method based on improved attention mechanism
Yinglong Wang, Jinan (CN); Guoxuan Xu, Jinan (CN); Minglei Shu, Jinan (CN); Zhaoyang Liu, Jinan (CN); and Pengyao Xu, Jinan (CN)
Assigned to QILU UNIVERSITY OF TECHNOLOGY (SHANDONG ACADEMY OF SCIENCES), Jinan (CN); and SHANDONG COMPUTER SCIENCE CENTER (NATIONAL SUPERCOMPUTING CENTER IN JINAN), Jinan (CN)
Filed by Qilu University of Technology (Shandong Academy of Sciences), Jinan (CN); and SHANDONG COMPUTER SCIENCE CENTER (NATIONAL SUPERCOMPUTING CENTER IN JINAN), Jinan (CN)
Filed on Oct. 12, 2023, as Appl. No. 18/379,182.
Claims priority of application No. 202310195187.4 (CN), filed on Mar. 3, 2023.
Prior Publication US 2024/0293070 A1, Sep. 5, 2024
Int. Cl. A61B 5/36 (2021.01); A61B 5/00 (2006.01); A61B 5/367 (2021.01)
CPC A61B 5/367 (2021.01) [A61B 5/7264 (2013.01)] 6 Claims
OG exemplary drawing
 
1. A multi-label electrocardiogram (ECG) signal classification method based on an improved attention mechanism, comprising the following steps:
a) preprocessing a multi-label ECG signal to acquire a preprocessed multi-label ECG signal X;
b) establishing a multi-scale feature extraction module, and inputting the preprocessed multi-label ECG signal X into the multi-scale feature extraction module to acquire an attention feature map Xs;
c) establishing a deep attention feature fusion (DAFF) network, and inputting the attention feature map Xs into the DAFF network to acquire a fused feature X′s; and
d) establishing a classification module, and inputting the fused feature X′s into the classification module to acquire an ECG signal classification result;
wherein step b) comprises the following sub-steps:
b-1) forming the multi-scale feature extraction module with a residual module A and a residual module B, wherein the residual module A comprises a batch normalization (BN) layer, a rectified linear unit (ReLU) activation function layer, a convolutional layer, a maximum pooling layer, and an attention fusion module; and the residual module B comprises a BN layer, a ReLU activation function layer, a convolutional layer, a maximum pooling layer, and an attention fusion module;
b-2) inputting the preprocessed multi-label ECG signal X into the BN layer, the ReLU activation function layer, and the convolutional layer of the residual module A sequentially to acquire a feature map Xsc1;
b-3) inputting the preprocessed multi-label ECG signal X into the maximum pooling layer of the residual module A to acquire a feature map Xsm1;
b-4) forming the attention fusion module of the residual module A with a local attention block and a global attention block, wherein the local attention block is formed sequentially with a first convolutional layer, a BN layer, a ReLU activation function layer, and a second convolutional layer; and inputting the feature map Xsc1 into the local attention block to acquire a local attention feature map Xsc_l1;
b-5) forming the global attention block of the attention fusion module in the residual module A sequentially with an average pooling layer, a first convolutional layer, a BN layer, a ReLU activation function layer, and a second convolutional layer; and inputting the feature map Xsm1 into the global attention block to acquire a global attention feature map Xsc_g1;
b-6) adding the local attention feature map Xsc_l1 to the global attention feature map Xsc_g1 to acquire an attention feature map Xsa1;
b-7) inputting the preprocessed multi-label ECG signal X into the BN layer, the ReLU activation function layer, and the convolutional layer of the residual module B sequentially to acquire a feature map Xsc2;
b-8) inputting the preprocessed multi-label ECG signal X into the maximum pooling layer of the residual module B to acquire a feature map Xsm2;
b-9) forming the attention fusion module of the residual module B with a local attention block and a global attention block, wherein the local attention block is formed sequentially with a first convolutional layer, a BN layer, a ReLU activation function layer, and a second convolutional layer; and inputting the feature map Xsc2 into the local attention block to acquire a local attention feature map Xsc_l2;
b-10) forming the global attention block of the attention fusion module in the residual module B sequentially with an average pooling layer, a first convolutional layer, a BN layer, a ReLU activation function layer, and a second convolutional layer; and inputting the feature map Xsm2 into the global attention block to acquire a global attention feature map Xsc_g2;
b-11) adding the local attention feature map Xsc_l2 to the global attention feature map Xsc_g2 to acquire an attention feature map Xsa2; and
b-12) adding the attention feature map Xsa1 to the attention feature map Xsa2 to acquire an attention feature map Xs.
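Steps b-4) through b-6) describe the attention fusion module: a local attention block applied to the convolutional branch and a global attention block applied to the pooled branch, with the two resulting maps summed. Below is a minimal PyTorch sketch of that module; the shared channel width, the 1x1 kernels, and the channel-reduction ratio are illustrative assumptions, as the claim does not fix them.

```python
# Minimal sketch of the attention fusion module (steps b-4 to b-6).
# Assumptions (not fixed by the claim): both branches carry the same
# number of channels, 1x1 convolutions, and a reduction ratio of 4.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # Local attention block: conv -> BN -> ReLU -> conv (step b-4).
        self.local = nn.Sequential(
            nn.Conv1d(channels, mid, kernel_size=1),
            nn.BatchNorm1d(mid),
            nn.ReLU(inplace=True),
            nn.Conv1d(mid, channels, kernel_size=1),
        )
        # Global attention block: average pool -> conv -> BN -> ReLU -> conv
        # (step b-5).
        self.glob = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, mid, kernel_size=1),
            nn.BatchNorm1d(mid),
            nn.ReLU(inplace=True),
            nn.Conv1d(mid, channels, kernel_size=1),
        )

    def forward(self, x_conv: torch.Tensor, x_pool: torch.Tensor) -> torch.Tensor:
        # Step b-6: element-wise sum; the global map (length 1) broadcasts
        # across the time axis of the local map.
        return self.local(x_conv) + self.glob(x_pool)
```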
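The sketch below assembles the full multi-scale feature extraction module of steps b-1) through b-12), reusing the AttentionFusion class above. Residual modules A and B are assumed to differ only in convolution kernel size (giving the two scales); the kernel sizes and pooling configuration are hypothetical, not taken from the claim.

```python
# Sketch of residual modules A/B and the multi-scale module
# (steps b-1 to b-12). Kernel sizes 3 and 7 are hypothetical
# stand-ins for the two scales.


class ResidualModule(nn.Module):
    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        # BN -> ReLU -> conv branch (steps b-2 / b-7).
        self.bn = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        # Max pooling branch (steps b-3 / b-8); stride 1 keeps the length.
        self.pool = nn.MaxPool1d(kernel_size=3, stride=1, padding=1)
        self.fusion = AttentionFusion(channels)  # from the previous sketch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_c = self.conv(self.relu(self.bn(x)))  # feature map Xsc
        x_m = self.pool(x)                      # feature map Xsm
        return self.fusion(x_c, x_m)            # attention feature map Xsa


class MultiScaleFeatureExtraction(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.module_a = ResidualModule(channels, kernel_size=3)
        self.module_b = ResidualModule(channels, kernel_size=7)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Step b-12: sum the two modules' attention feature maps -> Xs.
        return self.module_a(x) + self.module_b(x)
```

Note that both modules receive the same preprocessed signal X in parallel, per steps b-2) and b-7), rather than being stacked in series.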
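For orientation, a hypothetical end-to-end forward pass follows. The DAFF network of step c) is not detailed in this claim and is omitted here, and the classification module of step d) is stood in for by a pooling-plus-linear head with a sigmoid output, the usual choice for multi-label classification rather than a detail from the patent.

```python
# Hypothetical forward pass on a batch of 12-lead ECGs (shape and label
# count are illustrative). The sigmoid head is a generic multi-label
# stand-in, not the patent's classification module.
x = torch.randn(8, 12, 5000)                      # preprocessed signal X
xs = MultiScaleFeatureExtraction(channels=12)(x)  # attention feature map Xs
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                     nn.Linear(12, 9), nn.Sigmoid())
probs = head(xs)                                  # per-label probabilities, (8, 9)
```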