US 12,190,595 B2
Information processing apparatus, information processing system, and information processing method
Seungha Yang, Tokyo (JP); and Ryuta Satoh, Tokyo (JP)
Assigned to SONY GROUP CORPORATION, Tokyo (JP)
Appl. No. 17/292,345
Filed by SONY GROUP CORPORATION, Tokyo (JP)
PCT Filed Oct. 23, 2019, PCT No. PCT/JP2019/041483
§ 371(c)(1), (2) Date May 7, 2021,
PCT Pub. No. WO2020/100540, PCT Pub. Date May 22, 2020.
Claims priority of application No. 2018-214754 (JP), filed on Nov. 15, 2018.
Prior Publication US 2022/0004777 A1, Jan. 6, 2022
Int. Cl. G06K 9/00 (2022.01); B60W 40/02 (2006.01); G01S 19/20 (2010.01); G06F 18/22 (2023.01); G06K 9/20 (2006.01); G06K 9/62 (2022.01); G06V 10/22 (2022.01); G06V 20/56 (2022.01); G08G 1/16 (2006.01); H04W 4/02 (2018.01); B60W 30/085 (2012.01); B60W 30/095 (2012.01)
CPC G06V 20/56 (2022.01) [B60W 40/02 (2013.01); G01S 19/20 (2013.01); G06F 18/22 (2023.01); G06V 10/22 (2022.01); G08G 1/16 (2013.01); H04W 4/02 (2013.01); B60W 30/085 (2013.01); B60W 30/0956 (2013.01); B60W 2420/403 (2013.01); B60W 2556/45 (2020.02)] 13 Claims
OG exemplary drawing
 
1. An information processing apparatus, comprising:
at least one processor configured to:
analyze an image captured by a camera;
execute object identification to identify an object in at least one image region of a plurality of image regions in the captured image;
set a label, as an identification result, to each image region of the plurality of image regions in the captured image based on the object identification;
analyze the set label of each image region of the plurality of image regions in the captured image;
determine a label confidence score for the set label of each image region of the plurality of image regions based on the analysis of the set label of each image region of the plurality of image regions,
wherein the label confidence score indicates a confidence of the object identification for the set label of each image region of the plurality of image regions;
extract a low-confidence region in the captured image based on the label confidence score for the set label of each image region of the plurality of image regions,
wherein a label confidence score of the low-confidence region is lowest among the plurality of image regions;
receive information associated with the object;
analyze an object region associated with the object based on the received information;
update a label of the low-confidence region based on a matching rate between the object region analyzed from the received information and the low-confidence region being equal to or greater than a specified threshold value;
extract a high-confidence region in the captured image based on the label confidence score for the set label of each image region of the plurality of image regions,
wherein a label confidence score of the high-confidence region is highest among the plurality of image regions;
calculate a matching rate between the low-confidence region and the high-confidence region; and
update, based on the received information, the label of the high-confidence region based on the matching rate between the low-confidence region and the high-confidence region being equal to or higher than the specified threshold value.
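The claimed processing can be illustrated with a short sketch. This is a hypothetical reading of the claim, not the patented implementation: the names (`Region`, `matching_rate`, `update_labels`) and the use of intersection-over-union as the "matching rate" are assumptions for illustration only, and the threshold value is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Region:
    box: tuple          # (x, y, w, h) image region in pixel coordinates
    label: str          # identification result set for this region
    confidence: float   # label confidence score from object identification

def matching_rate(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes, used here as an
    illustrative 'matching rate' between two regions (an assumption; the
    claim does not define the rate's formula)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def update_labels(regions, object_region, object_label, threshold=0.5):
    """Sketch of the claimed updates: relabel the lowest-confidence region
    when it sufficiently matches the object region derived from received
    information, then relabel the highest-confidence region when its
    matching rate with the low-confidence region meets the threshold."""
    low = min(regions, key=lambda r: r.confidence)    # extract low-confidence region
    high = max(regions, key=lambda r: r.confidence)   # extract high-confidence region
    if matching_rate(object_region, low.box) >= threshold:
        low.label = object_label
    if matching_rate(low.box, high.box) >= threshold:
        high.label = object_label
    return low, high
```

Under this reading, a label correction sourced from received information (e.g. vehicle-to-vehicle communication) first overrides the least confident identification result, and is propagated to the most confident region only when the two regions overlap strongly enough.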