US 11,854,026 B2
System and methods for measuring values of perception variables
Daniel Protz, Zurich (CH)
Assigned to Smart Sensory Analytics LLC, West End, NC (US)
Appl. No. 16/628,221
Filed by Smart Sensory Analytics LLC, West End, NC (US)
PCT Filed Jul. 13, 2018, PCT No. PCT/US2018/041914
§ 371(c)(1), (2) Date Jan. 2, 2020,
PCT Pub. No. WO2019/014508, PCT Pub. Date Jan. 17, 2019.
Claims priority of application No. PCT/EP2017/067643 (WO), filed on Jul. 12, 2017.
Prior Publication US 2021/0073836 A1, Mar. 11, 2021
Int. Cl. G06Q 30/0203 (2023.01); G06F 16/583 (2019.01); G06N 20/00 (2019.01); G06F 3/0482 (2013.01); G06Q 30/0201 (2023.01); G06N 7/01 (2023.01)
CPC G06Q 30/0203 (2013.01) [G06F 3/0482 (2013.01); G06F 16/5838 (2019.01); G06N 7/01 (2023.01); G06N 20/00 (2019.01); G06Q 30/0201 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A computer-implemented method of rendering, in a multi-dimensional descriptor space, and using descriptor variables capable of characterizing a predetermined stimulus, a descriptor profile of the stimulus, the method comprising:
a first step of providing a plurality of different descriptor pairs from a host device to a user device of each of a plurality of users, wherein each pair comprises descriptors representing two different dimensions of said descriptor space;
a second step of, for said each user, using a graphical user interface on an output device of the user device to present a textual prompt and one of the descriptor pairs, of said plurality of different descriptor pairs provided in the first step, to the user, wherein each member of the descriptor pair is presented adjacent to the other on the graphical user interface, wherein each member is presented in a separate selectable area of the graphical user interface;
a third step of, for said each user, the user indicating, by interacting with the selectable area associated with the members of the presented pair using the graphical user interface of the input device of the user device, which member of the presented pair more closely represents the user's perception of the stimulus, so that the graphical user interface of the user device receives the user's selection of the member of the presented descriptor pair which more closely represents the user's perception of the stimulus, wherein, upon receipt of the user's indication, the descriptor pair is removed from the graphical user interface;
repeating the second and third steps successively for each of the descriptor pairs of said plurality of different descriptor pairs provided in the first step, so that the second and third steps are carried out for each of the plurality of different descriptor pairs provided in the first step;
a fourth step of using a network interface of the user device to communicate the user selections to the host device;
a fifth step of creating a training set using previous user selection patterns, wherein the training set establishes a set of predetermined filter rules for categorizing user selections;
a sixth step of training a machine learning algorithm using the training set, wherein the machine learning algorithm is iteratively trained until it can detect when user selections reach a predetermined level of convergence;
a seventh step of filtering, using the trained machine learning algorithm, by reference to the set of predetermined filter rules, one or more of the received selections which fall outside a predetermined range, and omitting the one or more identified selections; and
an eighth step of using a normalizing processor of the host device to calculate, for each of the descriptors received from the user devices, a descriptor value in said descriptor space, and to normalize the value of each descriptor against the other descriptor values so as to generate a normalized perception profile of the predetermined stimulus in the descriptor space.
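
The following Python sketch illustrates, purely for exposition, how the first through fourth steps of claim 1 might be realized in software. It is not the patented implementation: the function and variable names (e.g. collect_pairwise_selections, send_to_host) are hypothetical, and a console prompt stands in for the claimed graphical user interface and network interface.

```python
# Illustrative sketch only: a console prompt stands in for the claimed
# graphical user interface, and send_to_host() stands in for the network
# interface used to communicate selections to the host device.

def collect_pairwise_selections(descriptor_pairs, prompt_text):
    """Present each descriptor pair in turn and record the member the
    user selects as more closely representing the stimulus (steps 2-3)."""
    selections = []
    for left, right in descriptor_pairs:
        print(prompt_text)
        print(f"  [1] {left}    [2] {right}")
        choice = ""
        while choice not in ("1", "2"):
            choice = input("Select 1 or 2: ").strip()
        selections.append(left if choice == "1" else right)
        # In the claimed method the pair is removed from the interface
        # once the indication is received; here we simply move on.
    return selections


def send_to_host(user_id, selections):
    """Stand-in for the fourth step: communicate the user's selections
    to the host device."""
    payload = {"user": user_id, "selections": selections}
    return payload  # e.g. serialized and posted to the host device


if __name__ == "__main__":
    pairs = [("sweet", "bitter"), ("smooth", "sharp"), ("light", "heavy")]
    chosen = collect_pairwise_selections(pairs, "Which better describes the sample?")
    print(send_to_host("user-001", chosen))
```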
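
A second sketch, again only illustrative, covers the fifth through eighth steps: training a model on previous selection patterns, omitting received selections that fall outside a predetermined range, and normalizing the surviving descriptor tallies into a perception profile. The claim does not name a particular machine-learning algorithm; scikit-learn's IsolationForest is used here merely as an assumed stand-in for the trained filter, and all function names are hypothetical.

```python
# Illustrative sketch only: IsolationForest is an assumed stand-in for the
# trained machine-learning filter; the claim does not specify the algorithm.
import numpy as np
from sklearn.ensemble import IsolationForest


def train_selection_filter(previous_selection_patterns):
    """Fifth/sixth steps: build a training set from previous user selection
    patterns and fit a model that flags atypical selection vectors."""
    X = np.asarray(previous_selection_patterns, dtype=float)
    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(X)
    return model


def filter_selections(model, received_selection_vectors):
    """Seventh step: omit selection vectors the trained model flags as
    falling outside the predetermined range (predicted as -1)."""
    X = np.asarray(received_selection_vectors, dtype=float)
    keep = model.predict(X) == 1
    return X[keep]


def normalized_perception_profile(kept_selection_vectors):
    """Eighth step: tally each descriptor's selections and normalize the
    values against one another to form a perception profile."""
    counts = kept_selection_vectors.sum(axis=0)
    total = counts.sum()
    return counts / total if total else counts


if __name__ == "__main__":
    # Each row encodes one user's selections per descriptor (1 = chosen).
    history = [[1, 0, 1], [1, 0, 1], [0, 1, 1], [1, 0, 0]]
    new_votes = [[1, 0, 1], [0, 1, 1], [5, 5, 5]]  # last row intentionally atypical
    model = train_selection_filter(history)
    kept = filter_selections(model, new_votes)
    print(normalized_perception_profile(kept))
```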