US 11,704,803 B2
Methods and systems using video-based machine learning for beat-to-beat assessment of cardiac function
David Ouyang, Los Angeles, CA (US); Bryan He, Stanford, CA (US); James Zou, Stanford, CA (US); and Euan A. Ashley, Stanford, CA (US)
Assigned to The Board of Trustees of the Leland Stanford Junior University, Stanford, CA (US)
Filed by The Board of Trustees of the Leland Stanford Junior University, Stanford, CA (US)
Filed on Mar. 30, 2021, as Appl. No. 17/218,052.
Claims priority of provisional application 63/002,098, filed on Mar. 30, 2020.
Prior Publication US 2021/0304410 A1, Sep. 30, 2021
Int. Cl. G06K 9/00 (2022.01); G06T 7/10 (2017.01)
CPC G06T 7/10 (2017.01) [G06T 2207/10016 (2013.01); G06T 2207/10132 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30048 (2013.01); G06T 2207/30104 (2013.01)] 10 Claims
OG exemplary drawing
 
1. A method for analyzing images obtained from an echocardiogram, comprising:
obtaining a cardiac ultrasound video illustrating at least one view of a patient's heart and comprising a plurality of cardiac cycles;
assessing at least one cardiac parameter based on the cardiac ultrasound video, comprising:
using a first machine learning model comprising spatiotemporal convolutions,
using a second machine learning model comprising atrous convolutions to generate frame-level semantic segmentation of a left ventricle throughout a cardiac cycle, and
performing a beat-by-beat evaluation based on the spatiotemporal convolutions and the semantic segmentation to generate a plurality of clips of frames, wherein each clip of frames in the plurality of clips of frames represents one cardiac cycle, and determining an ejection fraction for each cardiac cycle; and
outputting an average ejection fraction of the heart for the patient based on the plurality of clips of frames.
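The first model in the claim applies spatiotemporal convolutions, i.e., convolutions whose kernels span the temporal (frame) axis as well as the spatial axes, so a single filter mixes motion and appearance. The following is a minimal, naive NumPy sketch of that operation on a (frames, height, width) volume; it is illustrative only and does not reproduce the patented network.

```python
import numpy as np

def spatiotemporal_conv(video, kernel):
    """Naive 3-D ('valid') convolution over a (frames, height, width)
    volume, mixing temporal and spatial context in one operation.
    Illustrative sketch, not the patented architecture."""
    ft, fh, fw = kernel.shape
    T, H, W = video.shape
    out = np.empty((T - ft + 1, H - fh + 1, W - fw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Each output voxel sums a small space-time neighborhood.
                out[t, i, j] = np.sum(video[t:t + ft, i:i + fh, j:j + fw] * kernel)
    return out
```

For example, convolving a 4-frame, 5x5 all-ones video with a 2x3x3 all-ones kernel yields a 3x3x3 output whose entries each equal 18 (the kernel's voxel count). In practice such convolutions are computed by an optimized library rather than explicit loops.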
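The second model uses atrous (dilated) convolutions, which space the kernel taps `dilation` samples apart so the receptive field grows without adding weights; this is a common choice for semantic segmentation. A minimal 1-D NumPy sketch of the operation (illustrative only; the patent's segmentation network is not specified here):

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation=2):
    """1-D atrous (dilated) convolution with 'valid' padding.
    Taps are spaced `dilation` samples apart, widening the
    receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : dilation]  # skip `dilation - 1` samples
        out[i] = np.dot(taps, kernel)
    return out
```

With a 3-tap kernel and dilation 2, each output sees a 5-sample window while still using only 3 weights; setting `dilation=1` recovers an ordinary convolution.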
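The beat-by-beat evaluation and averaging steps can be sketched as follows: take the frame-level left-ventricle segmentation as a per-frame area curve, split it at end-diastole (local area maxima) into one clip per cardiac cycle, compute an ejection-fraction value per cycle, and average. This is a simplified illustration under stated assumptions: the peak-finding heuristic, the helper names, and the area-based EF proxy (true EF is volume-based) are all hypothetical, not the patented method.

```python
import numpy as np

def detect_cycle_boundaries(areas, min_gap=10):
    """Find local maxima of the LV area curve (end-diastole frames).
    Hypothetical heuristic; the patent does not specify this step."""
    peaks = []
    for i in range(1, len(areas) - 1):
        if areas[i] >= areas[i - 1] and areas[i] > areas[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def beat_by_beat_ef(areas, min_gap=10):
    """Split frame-level segmentation areas into one clip per cardiac
    cycle, compute an area-based EF proxy per beat, and average."""
    peaks = detect_cycle_boundaries(areas, min_gap)
    efs = []
    for start, end in zip(peaks[:-1], peaks[1:]):
        clip = areas[start:end + 1]            # one cardiac cycle
        edv, esv = clip.max(), clip.min()      # end-diastole / end-systole
        efs.append(100.0 * (edv - esv) / edv)  # EF in percent
    return efs, float(np.mean(efs))
```

On a synthetic sinusoidal area curve of five beats (area swinging between 100 and 40), each detected cycle yields an EF of about 60%, and the reported value is the mean over all beats, mirroring the claim's averaging step.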