US 12,112,035 B2
Recognition and processing of gestures in a graphical user interface using machine learning
Noam Bar-on, Mountain View, CA (US)
Assigned to ATLASSIAN PTY LTD., Sydney (AU); and ATLASSIAN US, INC., San Francisco, CA (US)
Filed by Atlassian Pty Ltd., Sydney (AU); and Atlassian Inc., San Francisco, CA (US)
Filed on Dec. 3, 2021, as Appl. No. 17/541,901.
Application 17/541,901 is a continuation of application No. 16/895,809, filed on Jun. 8, 2020, granted, now 11,209,978.
Application 16/895,809 is a continuation of application No. 16/298,756, filed on Mar. 11, 2019, granted, now 10,719,230, issued on Jul. 21, 2020.
Claims priority of provisional application 62/737,227, filed on Sep. 27, 2018.
Prior Publication US 2022/0091735 A1, Mar. 24, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/04883 (2022.01); G06F 3/041 (2006.01); G06F 3/0482 (2013.01); G06N 3/08 (2023.01)
CPC G06F 3/04883 (2013.01) [G06F 3/0416 (2013.01); G06F 3/0482 (2013.01); G06N 3/08 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
causing display of a graphical user interface on a client device, the graphical user interface including a list of content item regions;
receiving a first gesture input at a particular content item region of the list of content item regions;
in response to determining that a start of the first gesture input is within a predefined distance from a sub region associated with the particular content item region, wherein the sub region comprises a subset of the content item region, and wherein receiving the first gesture input includes detecting inputs that extend outside the sub region and the content item region:
interpreting the first gesture input as a writing input;
analyzing the writing input to identify an action;
causing the action to be performed on the particular content item; and
updating the display of the particular content item region to indicate that the action has been performed;
receiving a second gesture input at the particular content item region; and
in response to determining that the second gesture input is a manipulation input:
analyzing the manipulation input to determine a manipulation amount; and
causing the list of content item regions to scroll within the graphical user interface a distance that corresponds to the manipulation amount.
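The dispatch logic recited in the claim can be sketched in code: a gesture whose start point falls within a predefined distance of a content item's sub region is routed to the writing-input path, and any other gesture is routed to the manipulation (scroll) path. This is a minimal illustrative sketch only; the names `Rect`, `classify_gesture`, `scroll_list`, and the 20-pixel threshold are assumptions for illustration and are not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Axis-aligned rectangle standing in for a sub region of a content item region."""
    x: float
    y: float
    w: float
    h: float

    def distance_to(self, px: float, py: float) -> float:
        # Euclidean distance from a point to this rectangle (0.0 if inside).
        dx = max(self.x - px, 0.0, px - (self.x + self.w))
        dy = max(self.y - py, 0.0, py - (self.y + self.h))
        return (dx * dx + dy * dy) ** 0.5


# Illustrative value for the claim's "predefined distance" (pixels).
PREDEFINED_DISTANCE = 20.0


def classify_gesture(start: tuple, sub_region: Rect) -> str:
    """Claim step: a gesture starting within the predefined distance of the
    sub region is interpreted as a writing input; otherwise it is treated
    as a manipulation input."""
    if sub_region.distance_to(*start) <= PREDEFINED_DISTANCE:
        return "writing"
    return "manipulation"


def scroll_list(current_offset: float, manipulation_amount: float) -> float:
    # Claim step: scroll the list a distance corresponding to the
    # manipulation amount determined from the second gesture input.
    return current_offset + manipulation_amount
```

A gesture starting inside (or near) the sub region classifies as `"writing"` and would then be analyzed to identify an action, while a gesture starting elsewhere classifies as `"manipulation"` and drives scrolling.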