CPC H04R 1/04 (2013.01) [A61B 5/0002 (2013.01); A61B 5/0531 (2013.01); A61B 5/277 (2021.01); A61B 5/318 (2021.01); A61B 5/412 (2013.01); A61B 5/7267 (2013.01); A61B 7/04 (2013.01); A61B 8/488 (2013.01); G01P 1/00 (2013.01); G01P 15/08 (2013.01); G10L 25/66 (2013.01); H04R 1/46 (2013.01); H04R 9/025 (2013.01); H04R 9/045 (2013.01); H04R 9/08 (2013.01); A61B 2560/0214 (2013.01); A61B 2560/0252 (2013.01); A61B 2560/0257 (2013.01); A61B 2560/0431 (2013.01); A61B 2560/0443 (2013.01); A61B 2562/0204 (2013.01); A61B 2562/0219 (2013.01)]    19 Claims

1. A method comprising:
receiving vibroacoustic data corresponding to a first training set of subjects having a bodily condition and a second training set of subjects having an absence of the bodily condition, wherein the vibroacoustic data was recorded by sensing devices, and wherein each of the sensing devices comprises a vibroacoustic sensor module comprising a voice coil component, a magnet component, a connector, and a diaphragm;
segmenting the vibroacoustic data in the time domain into overlapping time windows;
splitting the overlapping time windows in the frequency domain into frequency ranges;
extracting feature sequences from the split windows;
training a machine learning model, using the feature sequences, to compute a biosignature corresponding to the bodily condition;
determining, by the trained machine learning model, and based on the biosignature, a bodily condition of a subject not part of the first or second training set; and
outputting an indication of the bodily condition of the subject.
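The following is a minimal, non-authoritative sketch of the pipeline recited in claim 1: segmenting a vibroacoustic recording into overlapping time windows, splitting each window into frequency ranges, extracting feature sequences, training a classifier on labeled training sets, and determining the condition of a new subject. The sampling rate, window length, hop size, band edges, and the random-forest classifier are all hypothetical choices for illustration; none of them is specified by the claim.

```python
# Illustrative sketch only; sampling rate, window/hop sizes, band edges, and
# the RandomForest classifier are assumptions, not the claimed implementation.
import numpy as np
from numpy.fft import rfft, rfftfreq
from sklearn.ensemble import RandomForestClassifier

FS = 2000          # assumed sensor sampling rate (Hz)
WIN = 1024         # window length in samples
HOP = 512          # 50% overlap between consecutive windows
BANDS = [(0, 50), (50, 200), (200, 600), (600, 1000)]  # example frequency ranges (Hz)

def segment(signal, win=WIN, hop=HOP):
    """Segment a 1-D signal in the time domain into overlapping windows."""
    n = (len(signal) - win) // hop + 1
    return np.stack([signal[i * hop : i * hop + win] for i in range(n)])

def band_features(windows, fs=FS, bands=BANDS):
    """Split each window in the frequency domain and extract per-band energy features."""
    spectra = np.abs(rfft(windows, axis=1)) ** 2
    freqs = rfftfreq(windows.shape[1], d=1.0 / fs)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(spectra[:, mask].sum(axis=1))
    return np.log1p(np.stack(feats, axis=1))   # shape: (n_windows, n_bands)

def featurize(recordings):
    """One feature vector per recording: the per-band feature sequence averaged over windows."""
    return np.stack([band_features(segment(r)).mean(axis=0) for r in recordings])

# Hypothetical training data standing in for the two training sets of subjects.
rng = np.random.default_rng(0)
positive = [rng.standard_normal(FS * 5) for _ in range(20)]   # condition present
negative = [rng.standard_normal(FS * 5) for _ in range(20)]   # condition absent

X = featurize(positive + negative)
y = np.array([1] * len(positive) + [0] * len(negative))

# Train a model whose learned decision boundary plays the role of the biosignature.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Determine and output the condition of a subject outside both training sets.
new_recording = rng.standard_normal(FS * 5)
prediction = model.predict(featurize([new_recording]))[0]
print("condition present" if prediction else "condition absent")
```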