US 12,232,890 B2
Electrocardiogram (ECG) signal quality evaluation method based on multi-scale convolutional and densely connected network
Minglei Shu, Jinan (CN); Rui Qu, Jinan (CN); Pengyao Xu, Jinan (CN); Shuwang Zhou, Jinan (CN); and Zhaoyang Liu, Jinan (CN)
Assigned to QILU UNIVERSITY OF TECHNOLOGY (SHANDONG ACADEMY OF SCIENCES), Jinan (CN); and SHANDONG COMPUTER SCIENCE CENTER (NATIONAL SUPERCOMPUTING CENTER IN JINAN), Jinan (CN)
Filed by Qilu University of Technology (Shandong Academy of Sciences), Jinan (CN); and SHANDONG COMPUTER SCIENCE CENTER (NATIONAL SUPERCOMPUTING CENTER IN JINAN), Jinan (CN)
Filed on Dec. 28, 2023, as Appl. No. 18/398,263.
Claims priority of application No. 202310822941.2 (CN), filed on Jul. 6, 2023.
Prior Publication US 2025/0009306 A1, Jan. 9, 2025
Int. Cl. A61B 5/00 (2006.01); A61B 5/308 (2021.01); G06N 3/0464 (2023.01)
CPC A61B 5/7221 (2013.01) [A61B 5/308 (2021.01); A61B 5/7203 (2013.01); G06N 3/0464 (2023.01)] 9 Claims
OG exemplary drawing
 
1. An electrocardiogram (ECG) signal quality evaluation method based on a multi-scale convolutional and densely connected network, comprising:
a) obtaining n original ECG signals and corresponding labels of the n original ECG signals in a dataset to obtain an original ECG signal set S, wherein S={s1, s2, . . . , sk, . . . , sn}, sk represents a kth ECG signal, k∈{1, 2, . . . , n}, a corresponding label of the kth ECG signal sk is lk, an ECG signal label set is L, and L={l1, l2, . . . , lk, . . . , ln};
b) preprocessing the kth ECG signal sk to remove a baseline drift and power line interference from the ECG signal sk to obtain a preprocessed ECG signal xk, wherein a preprocessed ECG signal set is X, and X={x1, x2, . . . , xk, . . . , xn};
c) segmenting the preprocessed ECG signal xk to obtain i ECG signal fragments {xk1, xk2, . . . , xki}, wherein corresponding labels of the i ECG signal fragments {xk1, xk2, . . . , xki} are {lk1, lk2, . . . , lki}, and a segmented signal fragment set is Xseg, Xseg={{x11, x12, . . . , x1i}, {x21, x22, . . . , x2i}, . . . , {xk1, xk2, . . . , xki}, . . . , {xn1, xn2, . . . , xni}}, a segmented signal label set is Lseg, and Lseg={{l11, l12, . . . , l1i}, {l21, l22, . . . , l2i}, . . . , {lk1, lk2, . . . , lki}, . . . , {ln1, ln2, . . . , lni}};
d) inputting each ECG signal fragment in the signal fragment set Xseg into a trained AlexNet model to obtain an evaluation-specific ECG signal fragment set Xfinal; and
e) establishing an improved lightweight densely connected quality classification model, and inputting an ECG signal fragment in the evaluation-specific ECG signal fragment set Xfinal into the improved lightweight densely connected quality classification model to obtain a classification result;
wherein the step e) comprises the following substeps:
e-1) constituting the improved lightweight densely connected quality classification model by a feature extraction module and a classification module, wherein the feature extraction module is constituted by a first multi-scale channel attention module MCA1, a second multi-scale channel attention module MCA2, a third multi-scale channel attention module MCA3, a first multi-scale feature densely connected module MFD1, a second multi-scale feature densely connected module MFD2, and a third multi-scale feature densely connected module MFD3, and the classification module is constituted by a linear layer;
e-2) constituting the first multi-scale channel attention module MCA1 by a first convolution unit, a second convolution unit, a third convolution unit, a squeeze-and-excitation (SE) attention module, and an average pooling layer, wherein the first convolution unit is sequentially constituted by a convolutional layer, a batch normalization (BN) layer, and a Relu activation function layer, the second convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer, and the third convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer;
inputting the ECG signal fragment in the evaluation-specific ECG signal fragment set Xfinal into the first convolution unit to obtain a shallow feature signal Xmca1_f0;
inputting the ECG signal fragment in the evaluation-specific ECG signal fragment set Xfinal into the second convolution unit to obtain a shallow feature signal Xmca1_f1;
inputting the ECG signal fragment in the evaluation-specific ECG signal fragment set Xfinal into the third convolution unit to obtain a shallow feature signal Xmca1_f2;
concatenating the shallow feature signal Xmca1_f0, the shallow feature signal Xmca1_f1, and the shallow feature signal Xmca1_f2 to obtain a feature signal Xmca1_f3;
inputting the feature signal Xmca1_f3 into the SE attention module to obtain important feature information Xmca1_fse; and
inputting the important feature information Xmca1_fse into the average pooling layer to obtain a feature signal Xmca1_f4;
e-3) constituting the first multi-scale feature densely connected module MFD1 by a first densely connected layer, a second densely connected layer, a third densely connected layer, a fourth densely connected layer, a fifth densely connected layer, and a sixth densely connected layer, wherein the first densely connected layer, the second densely connected layer, the third densely connected layer, the fourth densely connected layer, the fifth densely connected layer, and the sixth densely connected layer each are sequentially constituted by a first BN layer, a first Relu activation function layer, a first dilated convolutional layer, a first sigmoid activation function layer, a second BN layer, a second Relu activation function layer, a multi-scale convolutional layer, and a second sigmoid activation function layer;
inputting the feature signal Xmca1_f4 into the first densely connected layer to obtain a feature signal Xmfd1_f1;
concatenating the feature signal Xmca1_f4 and the feature signal Xmfd1_f1 to obtain a first concatenated signal, and inputting the first concatenated signal into the second densely connected layer to obtain a feature signal Xmfd1_f2;
concatenating the feature signal Xmca1_f4, the feature signal Xmfd1_f1, and the feature signal Xmfd1_f2 to obtain a second concatenated signal, and inputting the second concatenated signal into the third densely connected layer to obtain a feature signal Xmfd1_f3;
concatenating the feature signal Xmca1_f4, the feature signal Xmfd1_f1, the feature signal Xmfd1_f2, and the feature signal Xmfd1_f3 to obtain a third concatenated signal, and inputting the third concatenated signal into the fourth densely connected layer to obtain a feature signal Xmfd1_f4;
concatenating the feature signal Xmca1_f4, the feature signal Xmfd1_f1, the feature signal Xmfd1_f2, the feature signal Xmfd1_f3, and the feature signal Xmfd1_f4 to obtain a fourth concatenated signal, and inputting the fourth concatenated signal into the fifth densely connected layer to obtain a feature signal Xmfd1_f5; and
concatenating the feature signal Xmca1_f4, the feature signal Xmfd1_f1, the feature signal Xmfd1_f2, the feature signal Xmfd1_f3, the feature signal Xmfd1_f4, and the feature signal Xmfd1_f5 to obtain a fifth concatenated signal, and inputting the fifth concatenated signal into the sixth densely connected layer to obtain a feature signal Xmfd1_f6;
e-4) constituting the second multi-scale channel attention module MCA2 by a first convolution unit, a second convolution unit, a third convolution unit, an SE attention module, and an average pooling layer, wherein the first convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer, the second convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer, and the third convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer;
inputting the feature signal Xmfd1_f6 into the first convolution unit to obtain a shallow feature signal Xmca2_f0;
inputting the feature signal Xmfd1_f6 into the second convolution unit to obtain a shallow feature signal Xmca2_f1;
inputting the feature signal Xmfd1_f6 into the third convolution unit to obtain a shallow feature signal Xmca2_f2;
concatenating the shallow feature signal Xmca2_f0, the shallow feature signal Xmca2_f1, and the shallow feature signal Xmca2_f2 to obtain a feature signal Xmca2_f3;
inputting the feature signal Xmca2_f3 into the SE attention module to obtain important feature information Xmca2_fse; and
inputting the important feature information Xmca2_fse into the average pooling layer to obtain a feature signal Xmca2_f4;
e-5) constituting the second multi-scale feature densely connected module MFD2 by a first densely connected layer, a second densely connected layer, a third densely connected layer, and a fourth densely connected layer, wherein the first densely connected layer, the second densely connected layer, the third densely connected layer, and the fourth densely connected layer each are sequentially constituted by a first BN layer, a first Relu activation function layer, a first dilated convolutional layer, a first sigmoid activation function layer, a second BN layer, a second Relu activation function layer, a multi-scale convolutional layer, and a second sigmoid activation function layer;
inputting the feature signal Xmca2_f4 into the first densely connected layer to obtain a feature signal X′mfd2_f1;
concatenating the feature signal Xmca2_f4 and the feature signal X′mfd2_f1 to obtain a sixth concatenated signal, and inputting the sixth concatenated signal into the second densely connected layer to obtain a feature signal X′mfd2_f2;
concatenating the feature signal Xmca2_f4, the feature signal X′mfd2_f1, and the feature signal X′mfd2_f2 to obtain a seventh concatenated signal, and inputting the seventh concatenated signal into the third densely connected layer to obtain a feature signal X′mfd2_f3; and
concatenating the feature signal Xmca2_f4, the feature signal X′mfd2_f1, the feature signal X′mfd2_f2, and the feature signal X′mfd2_f3 to obtain an eighth concatenated signal, and inputting the eighth concatenated signal into the fourth densely connected layer to obtain a feature signal X′mfd2_f4;
e-6) constituting the third multi-scale channel attention module MCA3 by a first convolution unit, a second convolution unit, a third convolution unit, an SE attention module, and an average pooling layer, wherein the first convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer, the second convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer, and the third convolution unit is sequentially constituted by a convolutional layer, a BN layer, and a Relu activation function layer;
inputting the feature signal X′mfd2_f4 into the first convolution unit to obtain a shallow feature signal Xmca3_f0;
inputting the feature signal X′mfd2_f4 into the second convolution unit to obtain a shallow feature signal Xmca3_f1;
inputting the feature signal X′mfd2_f4 into the third convolution unit to obtain a shallow feature signal Xmca3_f2;
concatenating the shallow feature signal Xmca3_f0, the shallow feature signal Xmca3_f1, and the shallow feature signal Xmca3_f2 to obtain a feature signal Xmca3_f3;
inputting the feature signal Xmca3_f3 into the SE attention module to obtain important feature information Xmca3_fse; and
inputting the important feature information Xmca3_fse into the average pooling layer to obtain a feature signal Xmca3_f4;
e-7) constituting the third multi-scale feature densely connected module MFD3 by a first densely connected layer, a second densely connected layer, a third densely connected layer, and a fourth densely connected layer, wherein the first densely connected layer, the second densely connected layer, the third densely connected layer, and the fourth densely connected layer each are sequentially constituted by a first BN layer, a first Relu activation function layer, a first dilated convolutional layer, a first sigmoid activation function layer, a second BN layer, a second Relu activation function layer, a multi-scale convolutional layer, and a second sigmoid activation function layer;
inputting the feature signal Xmca3_f4 into the first densely connected layer to obtain a feature signal X″mfd3_f1;
concatenating the feature signal Xmca3_f4 and the feature signal X″mfd3_f1 to obtain a ninth concatenated signal, and inputting the ninth concatenated signal into the second densely connected layer to obtain a feature signal X″mfd3_f2;
concatenating the feature signal Xmca3_f4, the feature signal X″mfd3_f1, and the feature signal X″mfd3_f2 to obtain a tenth concatenated signal, and inputting the tenth concatenated signal into the third densely connected layer to obtain a feature signal X″mfd3_f3; and
concatenating the feature signal Xmca3_f4, the feature signal X″mfd3_f1, the feature signal X″mfd3_f2, and the feature signal X″mfd3_f3 to obtain an eleventh concatenated signal, and inputting the eleventh concatenated signal into the fourth densely connected layer to obtain a feature signal X″mfd3_f4; and
e-8) inputting the feature signal X″mfd3_f4 into the classification module of the improved lightweight densely connected quality classification model to obtain the classification result, and setting a quantity of output neurons in the linear layer to 3.
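The following sketch illustrates the preprocessing of step b): removing baseline drift and power line interference from each original ECG signal sk. The claim does not fix the filter design; the 0.5 Hz high-pass Butterworth filter, the 50 Hz notch filter, and the 500 Hz sampling rate used here are assumptions for illustration only.

    # Step b) sketch: baseline drift and power-line interference removal.
    # Filter cut-offs and the sampling rate are assumed, not claimed.
    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch

    def preprocess_ecg(s_k: np.ndarray, fs: float = 500.0) -> np.ndarray:
        # High-pass Butterworth filter to suppress baseline drift (assumed 0.5 Hz cut-off).
        b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=fs)
        x = filtfilt(b_hp, a_hp, s_k)
        # Notch filter to suppress power-line interference (assumed 50 Hz mains).
        b_notch, a_notch = iirnotch(50.0, Q=30.0, fs=fs)
        return filtfilt(b_notch, a_notch, x)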
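Step c) divides each preprocessed signal xk into i fragments. A minimal sketch follows, assuming non-overlapping fixed-length fragments of 10 s at the same assumed sampling rate and assuming that every fragment inherits the label lk of its parent record; neither the fragment length nor the labelling rule is recited in the claim.

    # Step c) sketch: split x_k into i equal-length fragments x_k1..x_ki.
    def segment_ecg(x_k, l_k, fs=500.0, seconds=10.0):
        frag_len = int(fs * seconds)          # fragment length is an assumption
        n_frag = len(x_k) // frag_len         # i fragments per record
        fragments = [x_k[j * frag_len:(j + 1) * frag_len] for j in range(n_frag)]
        labels = [l_k] * n_frag               # assumed: fragments inherit l_k
        return fragments, labels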
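Step d) feeds every fragment of Xseg into a trained AlexNet model to obtain the evaluation-specific set Xfinal. The claim does not state the selection rule or the exact network variant; the sketch below assumes a previously trained one-dimensional AlexNet-style classifier and, purely for illustration, retains a fragment when the model's predicted class agrees with the fragment's label.

    # Step d) sketch: screen fragments with a trained AlexNet-style model (assumed rule).
    import torch

    def select_fragments(alexnet, fragments, labels):
        x_final = []
        alexnet.eval()
        with torch.no_grad():
            for frag, lab in zip(fragments, labels):
                inp = torch.as_tensor(frag, dtype=torch.float32).view(1, 1, -1)
                pred = alexnet(inp).argmax(dim=1).item()
                if pred == lab:               # assumed screening criterion
                    x_final.append(frag)
        return x_final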
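Substeps e-2), e-4) and e-6) describe the three multi-scale channel attention modules, which share one structure: three parallel convolution units (convolution, BN, Relu), channel-wise concatenation, an SE attention module, and an average pooling layer. A minimal PyTorch sketch follows; the kernel sizes 3, 5 and 7, the SE reduction ratio, the pooling size, and the channel widths are assumptions not recited in the claim.

    import torch
    import torch.nn as nn

    class SEAttention1d(nn.Module):
        # Squeeze-and-excitation over channels (reduction ratio assumed).
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.squeeze = nn.AdaptiveAvgPool1d(1)
            self.excite = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                          # x: (batch, channels, length)
            w = self.excite(self.squeeze(x).flatten(1)).unsqueeze(-1)
            return x * w                               # channel-reweighted features

    class MCA(nn.Module):
        # Multi-scale channel attention module (substeps e-2, e-4, e-6).
        def __init__(self, in_ch, out_ch):
            super().__init__()
            def conv_unit(k):                          # Conv1d + BN + Relu at scale k
                return nn.Sequential(
                    nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                    nn.BatchNorm1d(out_ch),
                    nn.ReLU(),
                )
            self.unit1, self.unit2, self.unit3 = conv_unit(3), conv_unit(5), conv_unit(7)
            self.se = SEAttention1d(3 * out_ch)
            self.pool = nn.AvgPool1d(kernel_size=2)    # pooling size assumed

        def forward(self, x):
            f0, f1, f2 = self.unit1(x), self.unit2(x), self.unit3(x)   # X_mca_f0..f2
            f3 = torch.cat([f0, f1, f2], dim=1)                        # X_mca_f3
            return self.pool(self.se(f3))                              # X_mca_fse -> X_mca_f4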
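Substeps e-3), e-5) and e-7) describe the multi-scale feature densely connected modules: MFD1 stacks six densely connected layers, MFD2 and MFD3 stack four, and each layer applies the sequence BN, Relu, dilated convolution, sigmoid, BN, Relu, multi-scale convolution, sigmoid to the concatenation of the module input and all earlier layer outputs. The sketch below assumes a growth rate of 16 channels, a dilation rate of 2, and a multi-scale convolutional layer realized as summed kernel-size-3 and kernel-size-5 convolutions; none of these values is recited in the claim.

    import torch
    import torch.nn as nn

    class DenseLayer(nn.Module):
        # One densely connected layer: BN -> Relu -> dilated conv -> sigmoid
        # -> BN -> Relu -> multi-scale conv -> sigmoid (substeps e-3, e-5, e-7).
        def __init__(self, in_ch, growth=16, dilation=2):
            super().__init__()
            self.stage1 = nn.Sequential(
                nn.BatchNorm1d(in_ch),
                nn.ReLU(),
                nn.Conv1d(in_ch, growth, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.Sigmoid(),
            )
            self.bn2, self.relu2 = nn.BatchNorm1d(growth), nn.ReLU()
            # Multi-scale convolutional layer: parallel kernel sizes 3 and 5 (assumed).
            self.conv_k3 = nn.Conv1d(growth, growth, kernel_size=3, padding=1)
            self.conv_k5 = nn.Conv1d(growth, growth, kernel_size=5, padding=2)
            self.sigmoid2 = nn.Sigmoid()

        def forward(self, x):
            y = self.relu2(self.bn2(self.stage1(x)))
            return self.sigmoid2(self.conv_k3(y) + self.conv_k5(y))

    class MFD(nn.Module):
        # Densely connected module: layer j receives the concatenation of the
        # module input and the outputs of layers 1..j-1; the module output is
        # the last layer's output (X_mfd1_f6, X'_mfd2_f4, X''_mfd3_f4).
        def __init__(self, in_ch, num_layers, growth=16):
            super().__init__()
            self.layers = nn.ModuleList(
                [DenseLayer(in_ch + j * growth, growth) for j in range(num_layers)]
            )

        def forward(self, x):                          # x: output of the preceding MCA module
            features = [x]
            for layer in self.layers:
                features.append(layer(torch.cat(features, dim=1)))
            return features[-1]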
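Substeps e-1) and e-8) assemble the improved lightweight densely connected quality classification model: a feature extraction module of three MCA modules alternating with three MFD modules (with 6, 4 and 4 densely connected layers), followed by a classification module consisting of a linear layer whose quantity of output neurons is 3. The sketch below reuses the MCA and MFD classes above; the channel widths, the global average pooling used to flatten X″mfd3_f4 before the linear layer, and the fragment length in the usage note are assumptions.

    import torch
    import torch.nn as nn

    class ECGQualityNet(nn.Module):
        # Improved lightweight densely connected quality classification model
        # (substeps e-1 and e-8): MCA1 -> MFD1 -> MCA2 -> MFD2 -> MCA3 -> MFD3
        # -> linear layer with 3 output neurons.
        def __init__(self, growth=16):
            super().__init__()
            self.features = nn.Sequential(
                MCA(1, growth),      MFD(3 * growth, num_layers=6, growth=growth),
                MCA(growth, growth), MFD(3 * growth, num_layers=4, growth=growth),
                MCA(growth, growth), MFD(3 * growth, num_layers=4, growth=growth),
            )
            self.pool = nn.AdaptiveAvgPool1d(1)      # assumed flattening before the linear layer
            self.classifier = nn.Linear(growth, 3)   # 3 output neurons (substep e-8)

        def forward(self, x):                         # x: (batch, 1, fragment_length)
            f = self.features(x)                      # X''_mfd3_f4
            return self.classifier(self.pool(f).flatten(1))

    # Hypothetical usage on one 10 s fragment sampled at 500 Hz:
    # logits = ECGQualityNet()(torch.randn(1, 1, 5000))   # classification result, shape (1, 3)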