US 12,067,737 B2
Pavement macrotexture determination using multi-view smartphone images
Jie Shan, West Lafayette, IN (US); and Xiangxi Tian, West Lafayette, IN (US)
Assigned to Purdue Research Foundation, West Lafayette, IN (US)
Filed by Purdue Research Foundation, West Lafayette, IN (US)
Filed on May 4, 2023, as Appl. No. 18/143,524.
Application 18/143,524 is a continuation of application No. 17/201,051, filed on Mar. 15, 2021, granted, now 11,645,769.
Claims priority of provisional application 62/989,670, filed on Mar. 14, 2020.
Prior Publication US 2023/0274450 A1, Aug. 31, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/41 (2017.01); G06T 7/593 (2017.01); G06V 30/18 (2022.01)
CPC G06T 7/41 (2017.01) [G06T 7/593 (2017.01); G06T 7/596 (2017.01); G06V 30/18143 (2022.01); G06T 2207/10021 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/20021 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of determining macrotexture of an object, comprising:
obtaining a plurality of stereo images of an object collected from a plurality of angles by an imaging device;
generating a coordinate system for each image of the plurality of stereo images;
detecting one or more keypoints each having a coordinate in each image of the plurality of stereo images, wherein the coordinate system is based on a plurality of ground control points (GCPs) with a priori position knowledge of each of the plurality of GCPs;
generating a sparse point cloud based on the one or more keypoints;
reconstructing a 3D dense point cloud of the object based on the generated sparse point cloud and based on neighboring pixels of each of the one or more keypoints and calculating the coordinates of each pixel of the 3D dense point cloud;
obtaining the macrotexture based on the reconstructed 3D dense point cloud of the object, comprising:
dividing the 3D dense point cloud into a plurality of segments based on a first predetermined distance criterion;
subdividing each divided segment of the plurality of segments into a plurality of subdivided segments based on a second predetermined distance criterion;
determining peaks and valleys for each subdivided segment of the plurality of subdivided segments;
determining the maximum peak in each subdivided segment of the plurality of subdivided segments of each segment of the plurality of segments;
obtaining a mean segment depth by averaging the maximum peaks for each subdivided segment of the plurality of subdivided segments of each segment of the plurality of segments; and
obtaining a mean profile depth of the object by averaging the mean segment depths for each segment of the plurality of segments.
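The macrotexture steps recited above can be sketched in code. The following is a minimal illustration only, not the patented implementation: it operates on a 1D elevation profile rather than the full 3D dense point cloud, measures peak heights against each segment's mean elevation, and the function name, arguments, and the two distance criteria (`segment_len`, `subseg_len`) are all hypothetical placeholders for the claim's "first" and "second predetermined distance criterion."

```python
import numpy as np

def mean_profile_depth(x, z, segment_len=0.1, subseg_len=0.05):
    """Hypothetical sketch of the claimed mean-profile-depth steps.

    x : sorted positions along a 1D profile (metres)
    z : elevations at those positions (metres)
    segment_len : first predetermined distance criterion (assumed value)
    subseg_len  : second predetermined distance criterion (assumed value)
    """
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    mean_segment_depths = []
    # divide the profile into segments per the first distance criterion
    for s0 in np.arange(x.min(), x.max(), segment_len):
        in_seg = (x >= s0) & (x < s0 + segment_len)
        if not in_seg.any():
            continue
        seg_mean = z[in_seg].mean()  # reference level for peak heights
        peaks = []
        # subdivide each segment per the second distance criterion
        for t0 in np.arange(s0, s0 + segment_len, subseg_len):
            in_sub = in_seg & (x >= t0) & (x < t0 + subseg_len)
            if in_sub.any():
                # maximum peak within this subdivided segment
                peaks.append(z[in_sub].max() - seg_mean)
        if peaks:
            # mean segment depth: average of the maximum peaks
            mean_segment_depths.append(np.mean(peaks))
    # mean profile depth: average of the mean segment depths
    return float(np.mean(mean_segment_depths))
```

For example, on a synthetic sinusoidal profile of amplitude 2 mm the returned value is close to 2 mm, since each subdivided segment's maximum sits that far above the segment mean. The claim's division/subdivision structure mirrors standardized macrotexture measures such as ASTM E1845's mean profile depth, though the exact peak-referencing step here is an assumption.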