US 12,292,283 B2
Imaging range estimation device, imaging range estimation method, and program
Takuma Isoda, Musashino (JP); Hirofumi Noguchi, Musashino (JP); Misao Kataoka, Musashino (JP); and Kyota Hattori, Musashino (JP)
Assigned to Nippon Telegraph and Telephone Corporation, Tokyo (JP)
Appl. No. 17/792,340
Filed by Nippon Telegraph and Telephone Corporation, Tokyo (JP)
PCT Filed Jan. 15, 2020, PCT No. PCT/JP2020/001006
§ 371(c)(1), (2) Date Jul. 12, 2022.
PCT Pub. No. WO2021/144874, PCT Pub. Date Jul. 22, 2021.
Prior Publication US 2023/0049073 A1, Feb. 16, 2023
Int. Cl. G01C 11/06 (2006.01); G06T 7/70 (2017.01)
CPC G01C 11/06 (2013.01) [G06T 7/70 (2017.01)] 9 Claims
OG exemplary drawing
 
1. An imaging range estimation device that estimates an imaging range of a camera device, the imaging range estimation device comprising:
a hardware processor, wherein
the hardware processor is configured to:
generate image data with an object name label added by:
acquiring image data imaged by the camera device, and
performing image analysis to identify a first region occupied by each object displayed in the image data by adding an object name label to the first region;
generate reference data of a reference region including an object name label associated with an object in the reference region by:
setting the reference region by using geographic information, the reference region being a circular region including an estimated position of the camera device at a center and a surrounding region that is within a predetermined distance from the estimated position, the predetermined distance being a maximum imaging distance of the camera device;
calculate a concordance rate by comparing a first feature indicated by the first region of each object name label of the image data with a second feature indicated by a second region of each object name label of the reference data;
estimate the imaging range of the camera device to be a quadrangular region of the reference data that corresponds to the image data;
set an image feature extraction line for extracting the object name label by drawing a line from a bottom side to a top side of the image data;
set the image feature extraction line along at least a left end and a right end of the image data;
set candidate feature extraction lines for extracting the object name label by drawing lines in the circular region of the reference data, each line being drawn in a radial direction from the estimated position of the camera device to an edge of the circular region;
set line segments that are along the candidate feature extraction lines, each line segment being a segment of a candidate feature extraction line and including a start point and an end point on the candidate feature extraction line;
extract, for each image feature extraction line that has been set, a line segment on the reference data that corresponds to the image feature extraction line by:
calculating, for each of the line segments, a concordance rate between a first label proportion of each object name label of the image feature extraction line and a second label proportion of each object name label of the line segment, and
determining a line segment with a highest concordance rate to correspond to the image feature extraction line; and
estimate the imaging range of the camera device to be the quadrangular region whose left and right sides are the extracted line segments that correspond to the image feature extraction lines along the left end and the right end of the image data, wherein
the first label proportion is a proportion of the image feature extraction line that is occupied by an object corresponding to the object name label in relation to the image feature extraction line, and wherein
the second label proportion is a proportion of the line segment that is occupied by an object corresponding to the object name label in relation to the line segment.
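The matching step in the claim — comparing the first label proportion of an image feature extraction line against the second label proportion of each candidate line segment, then keeping the segment with the highest concordance rate — can be sketched as below. The claim does not fix a formula for the concordance rate; histogram intersection of the two label-proportion distributions is used here as one plausible assumption, and all function names (`label_proportions`, `concordance_rate`, `best_segment`) and the toy labels are hypothetical.

```python
from collections import Counter

def label_proportions(labels):
    # Proportion of the line (or line segment) occupied by each object-name
    # label -- the "first label proportion" / "second label proportion".
    total = len(labels)
    return {lab: n / total for lab, n in Counter(labels).items()}

def concordance_rate(p_img, p_ref):
    # Histogram intersection of two label-proportion distributions:
    # 1.0 means identical proportions, 0.0 means no labels in common.
    # (Assumed metric; the patent leaves the formula unspecified.)
    labels = set(p_img) | set(p_ref)
    return sum(min(p_img.get(l, 0.0), p_ref.get(l, 0.0)) for l in labels)

def best_segment(image_line_labels, candidate_segments):
    # For one image feature extraction line (e.g. along the left end of the
    # image), pick the reference-data line segment with the highest
    # concordance rate -- that segment becomes one side of the estimated
    # quadrangular imaging range.
    p_img = label_proportions(image_line_labels)
    return max(candidate_segments,
               key=lambda seg: concordance_rate(p_img, label_proportions(seg)))

# Toy example: labels sampled bottom-to-top along a left-edge extraction line,
# and two candidate radial segments from the circular reference region.
left_edge = ["road", "road", "building", "sky", "sky"]
segments = [
    ["road", "building", "building", "sky", "sky"],   # radial segment A
    ["river", "river", "tree", "sky", "sky"],         # radial segment B
]
print(best_segment(left_edge, segments))  # segment A matches best
```

Repeating `best_segment` for the extraction lines along the left and right ends of the image yields the two extracted line segments that form the left and right sides of the quadrangular region recited in the claim.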