US 12,254,348 B2
Information processing apparatus, information processing method, and recording medium for performing inference processing using an inference model
Masaki Takahashi, Osaka (JP); Yohei Nakata, Osaka (JP); Yasunori Ishii, Osaka (JP); and Tomoyuki Okuno, Osaka (JP)
Assigned to PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD., Osaka (JP)
Filed by Panasonic Intellectual Property Management Co., Ltd., Osaka (JP)
Filed on Dec. 29, 2022, as Appl. No. 18/090,639.
Application 18/090,639 is a continuation of application No. PCT/JP2021/019553, filed on May 24, 2021.
Claims priority of application No. 2020-119205 (JP), filed on Jul. 10, 2020.
Prior Publication US 2023/0133989 A1, May 4, 2023
Int. Cl. G06F 9/48 (2006.01)
CPC G06F 9/4881 (2013.01) 15 Claims
OG exemplary drawing
 
1. An information processing apparatus comprising:
an obtainer that obtains sensing data;
an inference processing unit that inputs the sensing data into an inference model to obtain a result of inference and information on a processing time for a plurality of subsequent tasks that follow the processing performed by the inference model;
a determiner that determines, on a basis of the information on the processing time for the plurality of subsequent tasks, a task schedule for a task processing unit that processes the plurality of subsequent tasks; and
a controller that inputs the result of the inference into the task processing unit to process the plurality of subsequent tasks according to the task schedule determined, wherein
the result of the inference is a feature value of the sensing data output from a feature classifier,
the information on the processing time for the plurality of subsequent tasks is determined by estimating the processing time for the plurality of subsequent tasks from the feature value,
the feature classifier outputs the feature value to a delay flag classifier, and
the delay flag classifier outputs delay flag information indicating a time from when the feature value is input into each of a plurality of neural networks included in the task processing unit until when processing of the plurality of subsequent tasks ends.
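The claimed data flow can be sketched as follows. This is a minimal, hypothetical illustration of the pipeline in claim 1, not the patented implementation: all function names (`extract_feature`, `estimate_delays`, `determine_schedule`, `run_tasks`), the trivial stand-in computations, and the shortest-estimated-time-first scheduling policy are assumptions introduced for explanation only.

```python
def extract_feature(sensing_data):
    # Stand-in for the feature classifier: reduces the sensing data
    # to a single feature value (here, a trivial mean).
    return sum(sensing_data) / len(sensing_data)

def estimate_delays(feature, num_tasks):
    # Stand-in for the delay flag classifier: from the feature value,
    # estimates a processing time (delay flag) for each of the
    # subsequent-task neural networks.
    return [abs(feature) * (i + 1) for i in range(num_tasks)]

def determine_schedule(delays):
    # Determiner: builds a task schedule from the estimated processing
    # times; one plausible policy is shortest estimated time first.
    return sorted(range(len(delays)), key=lambda i: delays[i])

def run_tasks(feature, schedule):
    # Controller: inputs the inference result (the feature value) into
    # each subsequent task in the order given by the schedule.
    results = []
    for task_id in schedule:
        results.append((task_id, feature * (task_id + 1)))  # placeholder task work
    return results

sensing_data = [0.2, 0.4, 0.6]
feature = extract_feature(sensing_data)
delays = estimate_delays(feature, num_tasks=3)
schedule = determine_schedule(delays)
results = run_tasks(feature, schedule)
```

The sketch only shows how the estimated per-task processing times drive the ordering of the subsequent tasks; in the claim, both the feature classifier and the delay flag classifier are parts of the inference model, and the subsequent tasks are themselves neural networks.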