US 12,353,616 B2
Collecting of points of interest on web-pages by eye-tracking
Navid Hajimirza, London (GB); and Andrew Walker, London (GB)
Assigned to Lumen Research Ltd., London (GB)
Appl. No. 17/250,972
Filed by Lumen Research Ltd., London (GB)
PCT Filed Oct. 3, 2019, PCT No. PCT/GB2019/052800
§ 371(c)(1), (2) Date Apr. 5, 2021,
PCT Pub. No. WO2020/070509, PCT Pub. Date Apr. 9, 2020.
Claims priority of application No. 1816158 (GB), filed on Oct. 3, 2018.
Prior Publication US 2021/0349531 A1, Nov. 11, 2021.
Int. Cl. G06F 3/01 (2006.01); G06F 40/143 (2020.01); H04L 67/50 (2022.01)
CPC G06F 3/013 (2013.01) [G06F 40/143 (2020.01); H04L 67/535 (2022.05)] 18 Claims
OG exemplary drawing
 
1. A computer-implemented method for collecting visual attention information in real-time, comprising:
displaying a browser window on a display, wherein the browser window is defined by a Document Object Model (DOM);
generating a stream of estimated gaze points, wherein each estimated gaze point of the stream of estimated gaze points corresponds to an estimate of a user's gaze point within the display; and
as the stream of estimated gaze points is being generated, for each estimated gaze point in the stream of estimated gaze points:
transforming a geometry of the estimated gaze point and/or a geometry of the browser window to a common coordinate system;
identifying an object within the DOM using the estimated gaze point, wherein a location of the identified object in the browser window corresponds to a location of the estimated gaze point;
determining that the identified object comprises dynamic content rendered in real-time;
capturing at least one screenshot of the identified object;
converting the captured screenshot to a universally unique signature;
extracting, from the DOM, the identified object and data corresponding to the identified object, wherein the data corresponding to the identified object comprises one or more properties indicative of how the identified object should be rendered in the browser window; and
storing the estimated gaze point in conjunction with the identified object, the data corresponding to the identified object, the captured screenshot of the identified object and the universally unique signature, wherein the stored universally unique signature acts as an identifier for the identified object.
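The coordinate-transformation and object-identification steps of the claim can be sketched in Python. This is an illustrative model only: in a real browser the hit-test is `document.elementFromPoint()`, and the window offsets come from the browser chrome; here the DOM is reduced to a hypothetical flat list of bounding rectangles, and all names (`DomObject`, `to_window_coords`, `identify_object`) are this sketch's own, not the patent's.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DomObject:
    """Simplified stand-in for a DOM element: an id plus its bounding
    rectangle in browser-window (client) coordinates."""
    element_id: str
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

def to_window_coords(gaze_x: float, gaze_y: float,
                     win_left: float, win_top: float) -> tuple:
    """Transform an estimated gaze point from display (screen) coordinates
    into the browser window's coordinate system -- the claim's 'common
    coordinate system' -- by subtracting the window's screen offset."""
    return gaze_x - win_left, gaze_y - win_top

def identify_object(objects: List[DomObject],
                    px: float, py: float) -> Optional[DomObject]:
    """Return the object whose location corresponds to the gaze point.
    A browser would call document.elementFromPoint(px, py); here we scan
    a flat list of rectangles, topmost (last-painted) first."""
    for obj in reversed(objects):
        if obj.contains(px, py):
            return obj
    return None

# Example: an ad banner overlapping a text column.
dom = [
    DomObject("article-body", 0, 0, 800, 2000),
    DomObject("ad-banner", 500, 100, 300, 250),
]
px, py = to_window_coords(620, 300, win_left=20, win_top=80)
hit = identify_object(dom, px, py)  # gaze lands on the banner
```

Per the claim, this pair of steps runs for every point in the gaze stream as it is generated, so the hit-test must be cheap relative to the eye-tracker's sampling rate.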
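The claim converts each captured screenshot into a "universally unique signature" but does not name an algorithm. One plausible reading, sketched below under that assumption, is a cryptographic digest of the raw image bytes: identical captures of the same dynamic content (e.g. the same ad creative) map to the same signature, while any visual change yields a new one, which is what lets the signature act as an identifier for the object.

```python
import hashlib

def signature_for_screenshot(pixels: bytes) -> str:
    """Derive a content-based signature for a captured screenshot.
    SHA-256 is an assumption here, chosen because a collision-resistant
    digest makes equal captures collide and distinct captures diverge."""
    return hashlib.sha256(pixels).hexdigest()

# Two identical captures share a signature; a changed frame does not.
frame_a = bytes([255, 0, 0] * 4)  # hypothetical 2x2 red capture
frame_b = bytes([255, 0, 0] * 4)
frame_c = bytes([0, 255, 0] * 4)  # same layout, different pixels
```

Because the signature is derived from content rather than assigned, two users gazing at the same rendered creative produce the same identifier, so records can later be aggregated per creative without a shared registry.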
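The final storing step bundles the gaze point with the identified object, its rendering properties extracted from the DOM, the screenshot, and the signature. A minimal sketch of such a record follows; the field names and the use of a plain dictionary are illustrative choices, not taken from the patent.

```python
import hashlib
import time

def make_attention_record(gaze_point: tuple, dom_object: dict,
                          rendering_props: dict, screenshot: bytes) -> dict:
    """Store one estimated gaze point together with the identified
    object, the DOM data describing how it should be rendered, the
    captured screenshot, and a digest of that screenshot that serves
    as the object's identifier."""
    signature = hashlib.sha256(screenshot).hexdigest()
    return {
        "timestamp": time.time(),
        "gaze_point": {"x": gaze_point[0], "y": gaze_point[1]},
        "object": dom_object,
        "rendering_properties": rendering_props,  # e.g. CSS size/position
        "screenshot": screenshot.hex(),
        "signature": signature,  # acts as the identifier for the object
    }

record = make_attention_record(
    (600, 220),
    {"tag": "img", "id": "ad-banner"},
    {"width": "300px", "height": "250px", "position": "absolute"},
    b"\x89PNG...",  # placeholder bytes standing in for a real capture
)
```

Storing the rendering properties alongside the screenshot means the attention data remains interpretable even after the dynamic content is gone from the page: the record captures both what was shown and how it was laid out when the gaze landed on it.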