US 12,136,052 B2
Verification of progression of construction-related activity at given location
Mohammad Mostafa Soltani, North York (CA); Dan Park, Eastvale, CA (US); Kevin McKee, Del Mar, CA (US); and Matt Man, Thornhill (CA)
Assigned to Procore Technologies, Inc., Carpinteria, CA (US)
Filed by Procore Technologies, Inc., Carpinteria, CA (US)
Filed on Aug. 25, 2022, as Appl. No. 17/895,556.
Prior Publication US 2024/0070573 A1, Feb. 29, 2024
Int. Cl. G06Q 10/00 (2023.01); G06Q 10/0631 (2023.01); G06Q 10/0633 (2023.01); G06Q 50/00 (2024.01); G06V 10/00 (2022.01); G06Q 50/08 (2012.01)
CPC G06Q 10/06311 (2013.01) [G06Q 10/0633 (2013.01); G06V 10/00 (2022.01); G06Q 50/08 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A computing platform comprising:
a communication interface;
at least one processor;
at least one non-transitory computer-readable medium; and
program instructions stored on the at least one non-transitory computer-readable medium that are executable by the at least one processor to cause the computing platform to:
receive, from a first client station, (i) data related to the first client station and (ii) a first image associated with a target location, wherein the data related to the first client station comprises audio-visual data of the first client station comprising camera data of the first client station and light detection and ranging (LiDAR) scanner data of the first client station;
use a Visual Odometry based (VO-based) positioning machine learning model to output a VO-based positioning location estimate based on at least a portion of the audio-visual data;
based at least on the VO-based positioning location estimate, determine a location signature associated with the first image;
determine that the location signature associated with the first image has a threshold level of similarity to a location signature associated with a second image that is associated with the target location;
evaluate at least the first image to determine progression of a construction-related activity at the target location;
based on the evaluation of at least the first image, determine that the construction-related activity at the target location has progressed a threshold amount;
in response to (i) the determination that the location signature associated with the first image has the threshold level of similarity to the location signature associated with the second image and (ii) the determination that the construction-related activity at the target location has progressed the threshold amount, transmit, to a second client station, a communication related to progression of the construction-related activity and thereby cause an indication that the construction-related activity at the target location has progressed the threshold amount to be presented at a user interface of the second client station;
use the VO-based positioning machine learning model to output a first VO-based positioning location estimate based on the camera data;
use the VO-based positioning machine learning model to output a second VO-based positioning location estimate based on the LiDAR scanner data;
compare the first VO-based positioning location estimate to the second VO-based positioning location estimate to identify a difference between the estimates; and
use the identified difference as training data to retrain the VO-based positioning machine learning model for outputting VO-based positioning location estimates based on camera data.
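
The claim's location-signature steps (outputting a VO-based positioning location estimate and checking it against the signature stored for the target location) are recited without implementation detail. The following is a minimal Python sketch of one plausible reading, in which a "location signature" is a coarse quantization of the estimated position and heading; the grid-cell scheme, cell size, and function names are assumptions for illustration, not the patented method.

# Hypothetical sketch: build a coarse "location signature" from a VO-based
# position estimate and compare two signatures against a similarity rule.
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class LocationSignature:
    cell_x: int          # quantized easting (grid cells)
    cell_y: int          # quantized northing (grid cells)
    heading_bucket: int  # camera heading bucketed into 45-degree sectors

def signature_from_vo_estimate(x_m: float, y_m: float, heading_deg: float,
                               cell_size_m: float = 2.0) -> LocationSignature:
    """Quantize a VO-based positioning location estimate into a signature."""
    return LocationSignature(
        cell_x=int(math.floor(x_m / cell_size_m)),
        cell_y=int(math.floor(y_m / cell_size_m)),
        heading_bucket=int(heading_deg % 360) // 45,
    )

def signatures_similar(a: LocationSignature, b: LocationSignature,
                       max_cell_distance: int = 1) -> bool:
    """Treat two signatures as matching when their grid cells are within one
    cell of each other and they share a heading bucket."""
    return (abs(a.cell_x - b.cell_x) <= max_cell_distance
            and abs(a.cell_y - b.cell_y) <= max_cell_distance
            and a.heading_bucket == b.heading_bucket)

# Example: signature of a newly captured first image vs. the signature
# stored for the second image associated with the target location.
new_sig = signature_from_vo_estimate(12.3, 44.9, 92.0)
stored_sig = signature_from_vo_estimate(13.1, 45.4, 95.0)
print(signatures_similar(new_sig, stored_sig))  # True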
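The progression check (evaluating the first image and determining whether the construction-related activity has progressed a threshold amount) is likewise left open by the claim. Below is a small Python sketch under the assumption that an image-evaluation model produces a completion score that is compared against the score recorded at the previous evaluation; the scoring stub, feature inputs, and 10-point threshold are illustrative assumptions.

# Hypothetical sketch: score apparent completion of the activity in an image
# and test whether it has progressed a threshold amount since the last check.
def estimate_completion(image_features: list[float]) -> float:
    """Stand-in for an image-evaluation model; here just a bounded average of
    precomputed features (the real model and features are assumptions)."""
    return max(0.0, min(1.0, sum(image_features) / len(image_features)))

def has_progressed(new_score: float, previous_score: float,
                   threshold: float = 0.10) -> bool:
    """True when the activity has progressed at least `threshold` (10 points
    of completion) since the previous evaluation."""
    return (new_score - previous_score) >= threshold

previous = 0.42                                    # score stored for the second image
current = estimate_completion([0.55, 0.60, 0.58])  # score for the first image
if has_progressed(current, previous):
    print("notify second client station: activity progressed threshold amount")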
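The final claim elements compare the camera-based and LiDAR-based VO positioning estimates and use the identified difference as training data for retraining the model on camera data. One plausible reading, sketched below in Python, treats the LiDAR-based estimate as the more reliable reference and records the per-axis residual alongside the camera-derived input as a retraining sample; the data classes and sample schema are assumptions.

# Hypothetical sketch: compute the residual between the camera-only and
# LiDAR-based estimates and package it as a retraining example.
from dataclasses import dataclass

@dataclass
class PositionEstimate:
    x: float
    y: float
    z: float

def estimate_difference(camera_est: PositionEstimate,
                        lidar_est: PositionEstimate) -> tuple[float, float, float]:
    """Per-axis difference between the two VO-based positioning estimates."""
    return (lidar_est.x - camera_est.x,
            lidar_est.y - camera_est.y,
            lidar_est.z - camera_est.z)

def build_training_example(camera_features: list[float],
                           camera_est: PositionEstimate,
                           lidar_est: PositionEstimate) -> dict:
    """Pair the camera-derived input with the correction implied by the
    LiDAR-based estimate (sample schema is an assumption)."""
    return {
        "input": camera_features,
        "predicted": (camera_est.x, camera_est.y, camera_est.z),
        "correction": estimate_difference(camera_est, lidar_est),
    }

sample = build_training_example(
    camera_features=[0.1, 0.7, 0.3],
    camera_est=PositionEstimate(10.2, 5.1, 1.4),
    lidar_est=PositionEstimate(10.5, 5.0, 1.4),
)
print(sample["correction"])  # roughly (0.3, -0.1, 0.0)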