US 11,992,331 B2
Neonatal pain identification from neonatal facial expressions
Ghadh Alzamzmi, North Bethesda, MD (US); Dmitry Goldgof, Lutz, FL (US); Rangachar Kasturi, Tampa, FL (US); Terri Ashmeade, Tampa, FL (US); Yu Sun, Tampa, FL (US); Rahul Paul, Tampa, FL (US); and Md Sirajus Salekin, Tampa, FL (US)
Assigned to University of South Florida, Tampa, FL (US)
Filed by University of South Florida, Tampa, FL (US)
Filed on Oct. 19, 2020, as Appl. No. 17/073,568.
Application 17/073,568 is a continuation of application No. PCT/US2019/028277, filed on Apr. 19, 2019.
Claims priority of provisional application 62/660,038, filed on Apr. 19, 2018.
Claims priority of provisional application 62/660,072, filed on Apr. 19, 2018.
Prior Publication US 2021/0030354 A1, Feb. 4, 2021
Int. Cl. A61B 5/00 (2006.01); A61B 5/1171 (2016.01); G06N 3/02 (2006.01)
CPC A61B 5/4824 (2013.01) [A61B 5/1176 (2013.01); G06N 3/02 (2013.01); A61B 2503/045 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method for identifying when a neonate of interest is experiencing pain, the method comprising:
training a neonatal convolutional neural network (N-CNN) using a neonatal pain assessment database (NPAD), the neonatal pain assessment database (NPAD) comprising images of a plurality of neonate faces acquired under a pain condition and images of the plurality of neonate faces acquired under a no-pain condition, to establish a trained N-CNN;
monitoring a face of the neonate of interest with a video image capture device to capture image data of the face of the neonate of interest;
applying the trained N-CNN to the image data captured by the video image capture device to determine whether the neonate of interest is experiencing the pain condition or the no-pain condition, wherein applying the trained N-CNN to the image data further comprises:
preprocessing the image data captured by the video image capture device to generate a plurality of preprocessed frames focused on the face of the neonate of interest;
performing a combination of a convolution layer and a max pooling operation on each of the plurality of preprocessed frames at a right branch of the N-CNN to extract prominent features from the plurality of preprocessed frames, at a left branch of the N-CNN to extract generic features from the plurality of preprocessed frames, and at a central branch of the N-CNN to extract deep features from the plurality of preprocessed frames;
merging the prominent features extracted at the right branch, the generic features extracted at the left branch, and the deep features extracted at the central branch to generate merged results;
performing a combination of a convolution layer and a max pooling operation on the merged results to determine whether the neonate of interest is experiencing the pain condition or the no-pain condition; and
providing an output from the N-CNN indicating whether the neonate of interest is experiencing the pain condition or the no-pain condition.
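
For illustration, the two sketches below follow the steps recited in claim 1. The first is a minimal preprocessing sketch that crops each captured video frame to the neonate's face. The Haar-cascade detector, the preprocess_frame helper name, and the 120x120 crop size are assumptions for illustration only; the claim does not name a face-detection method or crop resolution.

```python
# A minimal sketch of the preprocessing step in claim 1: generating
# face-focused frames from the captured video. The Haar cascade and the
# 120x120 crop size are illustrative assumptions, not claim limitations.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frame(frame, size=(120, 120)):
    """Return a face-focused crop of one video frame, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                   # take the first detected face
    crop = frame[y:y + h, x:x + w]
    return cv2.resize(crop, size) / 255.0   # normalize pixel values to [0, 1]
```

A cascade trained on adult frontal faces may perform poorly on neonates in practice, so a detector trained on infant face data would likely be substituted.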
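The second sketch shows the three-branch N-CNN itself, using the Keras functional API. The branch layout, the merge, and the final convolution/max-pooling stage track the claim language; the filter counts, kernel sizes, pooling factors, activations, and the dense/sigmoid head are assumptions the claim leaves open, and build_n_cnn is a hypothetical name.

```python
# A minimal sketch of the three-branch N-CNN of claim 1. Each branch applies
# a convolution layer plus a max pooling operation to the preprocessed face
# frames; the branch outputs are merged and passed through one more
# convolution/max-pooling stage before the pain / no-pain output.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_n_cnn(input_shape=(120, 120, 3)):
    frames = layers.Input(shape=input_shape, name="preprocessed_face")

    # Left branch: convolution + max pooling to extract generic features.
    left = layers.Conv2D(32, 5, activation="relu", padding="same")(frames)
    left = layers.MaxPooling2D(pool_size=4)(left)

    # Central branch: convolution + max pooling to extract deep features.
    center = layers.Conv2D(64, 3, activation="relu", padding="same")(frames)
    center = layers.MaxPooling2D(pool_size=4)(center)

    # Right branch: convolution + max pooling to extract prominent features.
    right = layers.Conv2D(64, 5, activation="relu", padding="same")(frames)
    right = layers.MaxPooling2D(pool_size=4)(right)

    # Merge the three branches' feature maps along the channel axis.
    merged = layers.Concatenate()([left, center, right])

    # One more convolution + max pooling stage over the merged results.
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(merged)
    x = layers.MaxPooling2D(pool_size=2)(x)

    # Binary pain / no-pain output.
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid", name="pain_probability")(x)
    return Model(frames, out, name="n_cnn")

model = build_n_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

With the model compiled against a binary cross-entropy loss, training on NPAD pain / no-pain frames and per-frame inference on preprocessed face crops reduce to standard model.fit and model.predict calls.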