US 11,733,861 B2
Interpreting inputs for three-dimensional virtual spaces from touchscreen interface gestures to improve user interface functionality
Michael Tadros, Boulder, CO (US); Robert Banfield, Riverview, FL (US); Ross Stump, Thornton, CO (US); and Wei Wang, Superior, CO (US)
Assigned to Trimble Inc., Sunnyvale, CA (US)
Filed by Trimble Inc., Sunnyvale, CA (US)
Filed on Nov. 20, 2020, as Appl. No. 17/100,512.
Prior Publication US 2022/0164097 A1, May 26, 2022
Int. Cl. G06F 3/00 (2006.01); G06F 3/04883 (2022.01); G06N 20/00 (2019.01); G06F 3/04815 (2022.01); G06F 3/04842 (2022.01)
CPC G06F 3/04883 (2013.01) [G06F 3/04815 (2013.01); G06F 3/04842 (2013.01); G06N 20/00 (2019.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising:
using a user computing device:
displaying, via a user interface, a three-dimensional (3D) virtual space and a first 3D object within the 3D virtual space;
detecting a gesture input at a location of the user interface comprising physical contacts at the user interface;
translating the gesture input into a single user interface input for generating a second 3D object in the 3D virtual space by:
generating a two-dimensional (2D) representation of the gesture input comprising a 2D representation of the physical contacts detected at the user interface;
applying a machine learning model to the 2D representation of the gesture input to output a predicted design for the gesture input, wherein the machine learning model determines, for each candidate design of a set of candidate designs, an affinity score representing a probability that the candidate design corresponds to the 2D representation of the gesture input, wherein the predicted design corresponds to a particular candidate design of the set of candidate designs having a greatest determined affinity score,
wherein the predicted design for the gesture input includes design coordinates defining a 2D shape and a line extending from within boundaries of the 2D shape;
generating the single user interface input by mapping the predicted design for the gesture input to the user interface and the location of the gesture input on the user interface, wherein mapping the predicted design to the user interface comprises mapping the predicted design to the user interface based on the design coordinates;
executing, based on the single user interface input comprising the predicted design mapped to the location of the user interface and based on determining that the location of the gesture input is within a threshold distance to a surface of the displayed first 3D object, an operation to add the second 3D object in the 3D virtual space attached to or within the surface of the displayed first 3D object, wherein a length of a first dimension of the second 3D object corresponds to a length of the line; and
rendering an updated 3D virtual space displaying the displayed first 3D object and the second 3D object attached to or within the surface of the displayed first 3D object.
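
The following is a minimal, illustrative sketch of the flow recited in claim 1, not the patented implementation: touch contacts are rasterized into a 2D representation, candidate designs receive affinity scores (a simple template-similarity stand-in is used here in place of the claimed machine learning model), the winning design's coordinates are mapped to the gesture location on the interface, and a second 3D object is derived only if that location falls within a threshold distance of the first object's surface, with the object's first dimension taken from the drawn line's length. All identifiers (rasterize, affinity_scores, PredictedDesign, place_second_object, and their parameters) are hypothetical.

    from dataclasses import dataclass
    import numpy as np

    GRID = 28  # resolution of the 2D representation of the gesture

    def rasterize(contacts, grid=GRID):
        """Convert normalized (x, y) touch contacts into a 2D binary image."""
        img = np.zeros((grid, grid), dtype=np.float32)
        for x, y in contacts:                      # x, y in [0, 1]
            col = min(int(x * (grid - 1)), grid - 1)
            row = min(int(y * (grid - 1)), grid - 1)
            img[row, col] = 1.0
        return img

    def affinity_scores(image, templates):
        """Stand-in for the machine learning model: one probability-like
        score per candidate design (normalized template similarity)."""
        sims = np.array([float((image * t).sum()) for t in templates])
        exp = np.exp(sims - sims.max())
        return exp / exp.sum()                     # softmax -> pseudo-probabilities

    @dataclass
    class PredictedDesign:
        name: str
        shape_coords: np.ndarray   # 2D shape outline in normalized design coordinates
        line: np.ndarray           # two endpoints of the line extending from the shape

    def map_to_interface(design, gesture_origin_px, scale_px):
        """Map normalized design coordinates onto the UI at the gesture location."""
        return {
            "shape_px": design.shape_coords * scale_px + gesture_origin_px,
            "line_px": design.line * scale_px + gesture_origin_px,
        }

    def place_second_object(mapped, surface_point_px, threshold_px, px_per_unit):
        """Add the second 3D object only if the gesture lies near the first
        object's surface; its first dimension follows the drawn line's length."""
        gesture_anchor = mapped["shape_px"].mean(axis=0)
        if np.linalg.norm(gesture_anchor - surface_point_px) > threshold_px:
            return None                            # too far from the surface
        line_len_px = np.linalg.norm(mapped["line_px"][1] - mapped["line_px"][0])
        return {"attach_at": surface_point_px, "depth": line_len_px / px_per_unit}

In this sketch, the candidate design whose score from affinity_scores is greatest would be selected as the predicted design, and place_second_object returns None when the gesture falls outside the claimed threshold distance, so no second 3D object is added in that case.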