US 12,094,243 B2
Method and apparatus for discreet person identification on pocket-size offline mobile platform with augmented reality feedback with real-time training capability for usage by universal users
Patrick Michael Stockton, San Antonio, TX (US); and Eugene Britto John, San Antonio, TX (US)
Assigned to Board of Regents, The University of Texas System, Austin, TX (US)
Filed by Board of Regents, The University of Texas System, Austin, TX (US)
Filed on May 19, 2021, as Appl. No. 17/324,909.
Claims priority of provisional application 63/027,326, filed on May 19, 2020.
Prior Publication US 2021/0365673 A1, Nov. 25, 2021
Int. Cl. G06V 40/16 (2022.01); G06F 1/16 (2006.01); G06N 20/00 (2019.01); G06V 40/18 (2022.01); G06V 40/20 (2022.01); G10L 17/04 (2013.01)
CPC G06V 40/172 (2022.01) [G06F 1/163 (2013.01); G06N 20/00 (2019.01); G06V 40/197 (2022.01); G06V 40/25 (2022.01); G10L 17/04 (2013.01)] 11 Claims
OG exemplary drawing
 
9. A system comprising:
a wearable device comprising a visual display, an audio output device, and at least one sensor;
a computer system comprising a processor, memory accessible by the processor, and computer program instructions stored in the memory and executable by the processor to perform:
receiving, from the at least one sensor, data captured by the sensor;
identifying a person based on the data captured by the sensor using a trained machine learning model; and
indicating an identity of the person using at least one of the visual display and the audio output device, wherein the wearable device comprises glasses, wherein the at least one sensor comprises at least one of a camera, a microphone, a body odor sensor, and a thermal imaging camera, and wherein the data captured by the at least one sensor are used to perform at least one method selected from the group consisting of:
updating the trained machine learning model in real-time upon user request and/or command; and
training an untrained machine learning model in real-time upon user request and/or command.
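The exemplary claim above describes a pipeline of capturing sensor data on a wearable device, identifying a person with a trained machine learning model, reporting the identity through the display or audio output, and updating or training the model in real time upon user command. The following is a minimal sketch of one way such a pipeline could be structured, assuming an embedding-based recognizer with a nearest-neighbor identity gallery; the class and function names (EmbeddingModel, IdentityGallery, enroll, identify) are hypothetical illustrations and are not taken from the patent.

```python
# Hypothetical sketch of the identification / real-time enrollment loop
# suggested by claim 9. Names and structure are illustrative assumptions,
# not the patented implementation.
import numpy as np


class EmbeddingModel:
    """Stand-in for a trained face/voice embedding network.

    A real offline wearable system would run a quantized neural network here;
    a fixed random projection keeps this sketch self-contained and runnable.
    """

    def __init__(self, input_dim: int = 512, embed_dim: int = 128, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((input_dim, embed_dim))

    def embed(self, sensor_frame: np.ndarray) -> np.ndarray:
        vec = sensor_frame.reshape(-1) @ self.proj
        return vec / (np.linalg.norm(vec) + 1e-9)


class IdentityGallery:
    """Nearest-neighbor store of known identities.

    "Training/updating in real time upon user command" is modeled here as
    enrolling or refreshing an identity's reference embedding on the device.
    """

    def __init__(self, threshold: float = 0.6):
        self.embeddings: dict[str, np.ndarray] = {}
        self.threshold = threshold

    def enroll(self, name: str, embedding: np.ndarray) -> None:
        if name in self.embeddings:
            # Average with the stored reference, then renormalize, so the
            # reference tracks new samples without full retraining.
            avg = (self.embeddings[name] + embedding) / 2.0
            self.embeddings[name] = avg / (np.linalg.norm(avg) + 1e-9)
        else:
            self.embeddings[name] = embedding

    def identify(self, embedding: np.ndarray) -> str | None:
        best_name, best_score = None, -1.0
        for name, ref in self.embeddings.items():
            score = float(embedding @ ref)  # cosine similarity of unit vectors
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= self.threshold else None


if __name__ == "__main__":
    model = EmbeddingModel()
    gallery = IdentityGallery()
    rng = np.random.default_rng(1)

    # User command: enroll a new person from the current camera frame.
    alice_frame = rng.standard_normal(512)
    gallery.enroll("Alice", model.embed(alice_frame))

    # A later, noisier capture of the same person is identified; the result
    # would then be rendered on the glasses' display or spoken as audio.
    noisy_frame = alice_frame + 0.1 * rng.standard_normal(512)
    print(gallery.identify(model.embed(noisy_frame)))  # -> "Alice"
```

One plausible reading of the claim's real-time update and training limitation is shown here: storing and refreshing per-person embeddings lets the device add or update identities offline, on user command, without retraining the underlying embedding network.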