US 11,941,864 B2
Image forming apparatus, determination apparatus, image forming method, and non-transitory computer readable medium storing image forming program
Yadong Pan, Tokyo (JP)
Assigned to NEC CORPORATION, Tokyo (JP)
Appl. No. 17/434,109
Filed by NEC Corporation, Tokyo (JP)
PCT Filed Mar. 1, 2019, PCT No. PCT/JP2019/007986
§ 371(c)(1), (2) Date Aug. 26, 2021,
PCT Pub. No. WO2020/178876, PCT Pub. Date Sep. 10, 2020.
Prior Publication US 2022/0139007 A1, May 5, 2022
Int. Cl. G06F 18/2431 (2023.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06T 11/00 (2006.01); G06T 11/20 (2006.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06V 40/16 (2022.01); G06V 40/20 (2022.01)
CPC G06V 10/764 (2022.01) [G06F 18/2431 (2023.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06T 11/001 (2013.01); G06T 11/203 (2013.01); G06V 10/7747 (2022.01); G06V 10/82 (2022.01); G06V 40/171 (2022.01); G06V 40/20 (2022.01); G06T 2207/20084 (2013.01); G06T 2207/20132 (2013.01); G06T 2207/30201 (2013.01)] 19 Claims
OG exemplary drawing
 
1. An image forming apparatus comprising:
at least one processor; and
at least one memory storing instructions executable by the at least one processor to:
for each of a plurality of training images of a training target, each training image including a first image area that includes the training target and a second image area that surrounds the first image area and does not include the training target:
detect, in the first image area of the training image, a plurality of predetermined key points;
select a plurality of line-draw groups in the training image, wherein each of the line-draw groups includes at least two key points of the plurality of predetermined key points;
form a texture map by drawing, for each of the line-draw groups, a line in the training image that passes through the at least two key points included in that line-draw group and that has at least one end extended to an end of the training image;
form a feature extracted image based on the texture map; and
train a pose identification neural network based on a pose of the training target in each of the training images and the feature extracted image formed for each of the training images, wherein formation of the feature extracted image based on the texture map that has been formed improves pose identification accuracy in subsequent usage of the pose identification neural network.
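
For illustration only, the following Python sketch shows one way the line-drawing and texture-map steps recited in claim 1 might be realized. The helper names (form_texture_map, form_feature_extracted_image), the use of OpenCV for rasterization, the sample key-point coordinates, the pairwise line-draw groups, and the channel-stacking used to form the feature extracted image are all assumptions made for this example; the patent does not prescribe them.

import numpy as np
import cv2  # OpenCV, used here only to rasterize lines


def form_texture_map(image_shape, key_points, line_draw_groups):
    """Draw one line per line-draw group onto a blank map.

    Each line passes through the key points of its group and is extended from
    the first key point, through the last, until it leaves the image, so at
    least one end of the line reaches the border of the training image.
    """
    h, w = image_shape[:2]
    texture_map = np.zeros((h, w), dtype=np.uint8)
    for group in line_draw_groups:
        p1 = np.asarray(key_points[group[0]], dtype=np.float64)
        p2 = np.asarray(key_points[group[-1]], dtype=np.float64)
        direction = p2 - p1
        length = np.linalg.norm(direction)
        if length < 1e-6:  # coincident key points: no line to draw
            continue
        # Pick a point far along the ray p1 -> p2; cv2.line clips the segment
        # to the image rectangle, which extends one end of the line to the
        # border of the training image.
        far_point = p1 + direction / length * (h + w) * 2.0
        pt1 = tuple(int(round(v)) for v in p1)
        pt2 = tuple(int(round(v)) for v in far_point)
        cv2.line(texture_map, pt1, pt2, 255, 1)
    return texture_map


def form_feature_extracted_image(image, texture_map):
    """One possible feature extracted image: the training image with the
    texture map appended as an extra channel (the exact combination is an
    assumption; the claim only requires it to be based on the texture map)."""
    return np.dstack([image, texture_map])


# Hypothetical usage with five detected key points and two line-draw groups.
image = np.zeros((256, 256, 3), dtype=np.uint8)
key_points = [(100, 120), (130, 118), (115, 150), (105, 180), (125, 182)]
line_draw_groups = [(0, 1), (3, 4)]
texture_map = form_texture_map(image.shape, key_points, line_draw_groups)
feature_extracted = form_feature_extracted_image(image, texture_map)  # shape (256, 256, 4)

In the claimed method, feature extracted images produced in this manner, together with the pose of the training target in each training image, would then serve as the training data for the pose identification neural network.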