US 11,868,678 B2
User interface sound emanation activity classification
Tamer E. Abuelsaad, Somers, NY (US); Gregory J. Boss, Saginaw, MI (US); John E. Moore, Jr., Brownsburg, IN (US); and Randy A. Rendahl, Raleigh, NC (US)
Assigned to Kyndryl, Inc., New York, NY (US)
Filed by Kyndryl, Inc., New York, NY (US)
Filed on Oct. 30, 2019, as Appl. No. 16/668,836.
Application 16/668,836 is a continuation of application No. 15/648,628, filed on Jul. 13, 2017, granted, now 10,503,467.
Prior Publication US 2020/0065063 A1, Feb. 27, 2020
Int. Cl. G06F 3/16 (2006.01); G06F 9/451 (2018.01); G06F 18/00 (2023.01); G06N 20/00 (2019.01); G06N 5/047 (2023.01); G10L 25/51 (2013.01); G06F 8/61 (2018.01)
CPC G06F 3/167 (2013.01) [G06F 9/451 (2018.02); G06F 18/00 (2023.01); G06N 5/047 (2013.01); G06N 20/00 (2019.01); G10L 25/51 (2013.01); G06F 8/61 (2013.01)] 22 Claims
OG exemplary drawing
 
1. A method comprising:
obtaining an audio input, the audio input representing key press sounds emanating from a key press based user interface of a computer device of a user as a result of the user of the computer device pressing keys of the key press based user interface of the computer device;
generating a context pattern based on the audio input representing the key press sounds emanating from the key press based user interface resulting from the user pressing the keys of the key press based user interface, wherein the context pattern includes key press sequence information and timing information;
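The context-pattern generation step above can be sketched informally. This is a hypothetical illustration, not the patented implementation: it assumes key presses appear as short amplitude bursts in a mono audio signal, and all names (`extract_context_pattern`, the threshold and gap parameters) are invented for this example.

```python
# Hypothetical sketch: derive a context pattern (key press timing
# information) from a mono audio signal, assuming each key press
# shows up as a short amplitude burst above a fixed threshold.

def extract_context_pattern(samples, sample_rate, threshold=0.5, min_gap_s=0.05):
    """Return press timestamps (s) and inter-press intervals (s)."""
    timestamps = []
    last_press = -min_gap_s  # allow a press at t = 0
    for i, amplitude in enumerate(samples):
        t = i / sample_rate
        # Register a press when the signal crosses the threshold,
        # debounced by a minimum gap between successive presses.
        if abs(amplitude) >= threshold and t - last_press >= min_gap_s:
            timestamps.append(t)
            last_press = t
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {"timestamps": timestamps, "intervals": intervals}

# Example: three bursts at 0.0 s, 0.2 s, and 0.5 s in a 100 Hz signal.
signal = [0.0] * 100
for idx in (0, 20, 50):
    signal[idx] = 0.9
pattern = extract_context_pattern(signal, sample_rate=100)
```

A real system would use onset detection and per-key acoustic features rather than a bare amplitude threshold; the point here is only that the output pairs a press sequence with its timing information, as the claim recites.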
examining the key press sequence information and the timing information of the context pattern generated based on the audio input representing the key press sounds emanating from the key press based user interface of the computer device of the user resulting from the user pressing the keys of the key press based user interface, and determining, based on the examining, a current key press activity currently engaged in by the user, wherein the generating and the examining are performed by a computing node based system external to the computer device; and
providing an output in dependence on the current key press activity determined to be currently engaged in by the user of the computer device based on the examining the key press sequence information and the timing information of the context pattern, wherein the determining based on the examining the current key press activity currently engaged in by the user includes (a) comparing the context pattern to a plurality of signature patterns stored in a data repository, and (b) matching the context pattern to one or more signature patterns of the plurality of signature patterns, wherein respective ones of the one or more signature patterns include signature pattern key press sequence information and signature pattern timing information generated from a respective one or more audio inputs representing key pressing sounds emanating as the result of one or more users pressing user interface keys of a key equipped user interface to perform a key pressing activity according to the determined current key press activity currently engaged in by the user, wherein the method includes performing classification of the context pattern to classify the context pattern as belonging to a signature pattern classification, wherein the signature pattern classification specifies the current key press activity, wherein the providing the output includes providing the output based on the performing classification.
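The comparing, matching, and classification limitations above can likewise be sketched. This is a minimal illustration under invented assumptions: the signature repository is a plain in-memory mapping from an activity label to representative inter-press intervals, and the distance function (mean absolute interval difference) stands in for whatever matching criterion an actual implementation would use.

```python
# Hypothetical sketch of the comparing/matching step: classify a
# context pattern by its nearest signature pattern in a small
# in-memory repository, using mean absolute difference of
# inter-press intervals. Labels and data are invented.

def classify_context_pattern(context_intervals, signature_repository):
    """Return the activity label of the best-matching signature pattern."""
    def distance(sig_intervals):
        n = min(len(context_intervals), len(sig_intervals))
        if n == 0:
            return float("inf")  # no overlap: treat as non-matching
        return sum(abs(a - b) for a, b in zip(context_intervals, sig_intervals)) / n

    # The winning label is the signature pattern classification,
    # which specifies the current key press activity.
    return min(signature_repository, key=lambda label: distance(signature_repository[label]))

repository = {
    "typing_prose": [0.15, 0.18, 0.16, 0.17],       # steady, fast rhythm
    "entering_password": [0.40, 0.45, 0.50, 0.42],  # slower, deliberate presses
}
activity = classify_context_pattern([0.41, 0.44, 0.48, 0.43], repository)
```

The returned label would then drive the "providing an output" step, e.g. surfacing a context-appropriate response on the computer device.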