CPC G16H 40/67 (2018.01) [G06F 1/163 (2013.01); G06Q 40/08 (2013.01); G06V 20/20 (2022.01); G16H 20/00 (2018.01)]
13 Claims
1. A system for assisting a subject with an ailment, the system comprising:
at least one camera configured to capture real-time visual data of a surrounding area of the subject;
at least one gyroscope configured to track orientation and movement of the subject;
at least one augmented reality (AR) device configured to overlay customized real-time visual cues;
at least one biometric sensor configured to continuously monitor physiological data of the subject;
at least one microphone configured to capture audio;
at least one haptic actuator configured to provide adaptive tactile feedback to the subject;
at least one audio output device configured to deliver real-time audio to the subject, the at least one audio output device comprising at least one of headphones, a headset, and a set of earbuds;
at least one processor in operable communication with the at least one camera, the at least one gyroscope, the at least one AR device, the at least one biometric sensor, the at least one microphone, the at least one haptic actuator, and the at least one audio output device; and
a machine-readable medium in operable communication with the at least one processor and having instructions stored thereon that, when executed by the at least one processor, perform the following steps:
a) receiving input data about the subject and the surrounding area of the subject, the input data comprising the real-time visual data received from the at least one camera, data received from the at least one gyroscope, the physiological data received from the at least one biometric sensor, and data received from the at least one microphone;
b) utilizing a machine learning (ML) algorithm to process the input data and generate a context-aware, predictive action plan to assist the subject, the ML algorithm being trained on sensor fusion data and the action plan dynamically adapting based on changes in the input data; and
c) providing the action plan to the subject via:
i) real-time AR visual cues provided to the subject by the at least one AR device;
ii) audio feedback provided to the subject by the at least one audio output device; and
iii) haptic feedback provided to the subject by the at least one haptic actuator, the haptic feedback comprising a vibration to alert the subject of an immediate risk in the surrounding area of the subject, and
the utilizing of the ML algorithm in step b) comprising:
detecting any deviation from an expected behavior pattern of the subject using a first long short-term memory (LSTM) recurrent neural network (RNN) trained on previous behavior data of the subject;
using a first convolutional neural network (CNN) on the real-time visual data received from the at least one camera to assess the surrounding area of the subject;
using a second RNN to process time series data within the input data to assess the surrounding area of the subject;
using a second CNN trained on a dataset of labeled facial images to perform facial recognition on any people within the surrounding area of the subject;
using a time series analysis technique to detect patterns and trends in the data received from the at least one biometric sensor to monitor the ailment of the subject; and
adaptively adjusting the action plan using reinforcement learning (RL) based on preferences of the subject.
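The ML sub-steps recited in step b) can be illustrated with short, non-limiting sketches. As a minimal sketch of the deviation detection, assuming a next-step prediction formulation, the first LSTM RNN could flag a deviation whenever its prediction error on new behavior data exceeds a tuned threshold; every module name, feature shape, and the threshold value below is a hypothetical stand-in, not the claimed implementation.

```python
# Illustrative sketch only: a next-step prediction LSTM whose error
# flags deviations from the subject's learned behavior pattern.
# Shapes, names, and the threshold are hypothetical.
import torch
import torch.nn as nn

class BehaviorLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)  # predict the next frame

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(seq)          # (batch, time, hidden)
        return self.head(out[:, -1, :])  # prediction of the next frame

def deviation_detected(model: BehaviorLSTM,
                       history: torch.Tensor,
                       observed_next: torch.Tensor,
                       threshold: float = 0.5) -> bool:
    """Flag a deviation when prediction error exceeds a tuned threshold."""
    with torch.no_grad():
        predicted = model(history)
    error = torch.mean((predicted - observed_next) ** 2).item()
    return error > threshold

# Usage with dummy data (one sequence of 20 frames, 8 features each).
model = BehaviorLSTM()
history = torch.randn(1, 20, 8)
observed = torch.randn(1, 8)
print(deviation_detected(model, history, observed))
```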
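For the first CNN assessing the surrounding area from the camera's real-time visual data, one possible reading is a coarse scene-risk classifier over frames. The architecture and label set below are assumptions; a production system would more likely fine-tune a pretrained backbone.

```python
# Illustrative sketch only: a small CNN that classifies camera frames
# into coarse scene-risk categories. Architecture and labels are
# hypothetical.
import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. clear / caution / hazard
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)  # (batch, 32)
        return self.classifier(x)             # logits per risk class

# Usage: one 64x64 RGB frame from the camera stream (dummy data here).
model = SceneCNN()
frame = torch.randn(1, 3, 64, 64)
risk_class = model(frame).argmax(dim=1)
print(risk_class.item())
```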
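The second RNN that processes time series data within the input data could, as one sketch, be a GRU that folds time-aligned gyroscope, biometric, and audio features into a single context embedding. The feature layout and dimensions below are assumptions.

```python
# Illustrative sketch only: a GRU (one possible second RNN) that folds
# fused, time-aligned sensor samples into a context vector used to
# assess the surrounding area. Feature layout is hypothetical.
import torch
import torch.nn as nn

class ContextRNN(nn.Module):
    def __init__(self, n_features: int = 12, hidden: int = 24):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, fused_seq: torch.Tensor) -> torch.Tensor:
        _, h_n = self.rnn(fused_seq)   # h_n: (1, batch, hidden)
        return h_n.squeeze(0)          # per-sample context embedding

# Usage: 50 time steps of fused gyroscope/biometric/audio features.
context = ContextRNN()(torch.randn(1, 50, 12))
print(context.shape)  # torch.Size([1, 24])
```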
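The second CNN performing facial recognition could follow the common embedding-plus-gallery pattern: a network trained on labeled facial images maps faces to unit vectors, and a detected face is matched to the most similar enrolled person. The backbone, gallery, names, and similarity threshold below are hypothetical.

```python
# Illustrative sketch only: embedding-based face matching. The CNN
# backbone, gallery, and threshold are hypothetical stand-ins for a
# network trained on a dataset of labeled facial images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, faces: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(faces), dim=1)  # unit-length embeddings

def identify(embedder, face, gallery, names, threshold=0.8):
    """Return the enrolled name whose embedding is most similar, if any."""
    with torch.no_grad():
        emb = embedder(face)             # (1, dim)
        sims = gallery @ emb.squeeze(0)  # cosine similarity per person
    best = int(sims.argmax())
    return names[best] if sims[best] > threshold else "unknown"

# Usage with a dummy two-person gallery of enrolled embeddings.
embedder = FaceEmbedder()
gallery = F.normalize(torch.randn(2, 64), dim=1)
print(identify(embedder, torch.randn(1, 3, 64, 64), gallery, ["Alice", "Bob"]))
```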
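The time series analysis of the biometric data admits a simple sketch: a rolling z-score over, say, heart-rate samples, which surfaces both gradual trends and abrupt anomalies relevant to monitoring the ailment. The window size and flagging threshold below are assumptions.

```python
# Illustrative sketch only: a rolling z-score over heart-rate samples,
# one simple time series technique for spotting trends and anomalies
# in biometric data. Window size and threshold are hypothetical.
import numpy as np

def rolling_zscores(samples: np.ndarray, window: int = 30) -> np.ndarray:
    """z-score of each sample against the preceding window."""
    scores = np.zeros_like(samples, dtype=float)
    for i in range(window, len(samples)):
        ref = samples[i - window:i]
        std = ref.std() or 1.0  # guard against a flat window
        scores[i] = (samples[i] - ref.mean()) / std
    return scores

# Usage: a baseline heart rate with an injected spike at t=60.
hr = np.random.default_rng(0).normal(72, 2, 100)
hr[60] = 110
flags = np.abs(rolling_zscores(hr)) > 3.0
print(np.flatnonzero(flags))  # indices flagged as anomalous, e.g. [60]
```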
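Finally, the RL-based adaptation to the subject's preferences can be sketched as an epsilon-greedy multi-armed bandit that learns which cue modality (AR visual, audio, or haptic) earns the best feedback; the modality set, epsilon, and reward encoding below are assumptions, and a full RL treatment would condition on context as well.

```python
# Illustrative sketch only: an epsilon-greedy bandit, a minimal RL
# scheme for adapting which cue modality the action plan favors,
# using subject feedback as the reward signal.
import random

MODALITIES = ["ar_visual", "audio", "haptic"]

class PreferenceLearner:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = {m: 0.0 for m in MODALITIES}  # running mean reward
        self.count = {m: 0 for m in MODALITIES}

    def choose(self) -> str:
        if random.random() < self.epsilon:          # explore
            return random.choice(MODALITIES)
        return max(MODALITIES, key=self.value.get)  # exploit best so far

    def update(self, modality: str, reward: float) -> None:
        """Incremental mean update after subject feedback."""
        self.count[modality] += 1
        n = self.count[modality]
        self.value[modality] += (reward - self.value[modality]) / n

# Usage: simulate a subject who responds best to haptic cues.
learner = PreferenceLearner()
for _ in range(200):
    m = learner.choose()
    learner.update(m, 1.0 if m == "haptic" else 0.0)
print(max(learner.value, key=learner.value.get))  # typically "haptic"
```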