US 11,727,642 B2
Image processing apparatus, image processing method for image processing apparatus, and program
Yuta Nakao, Kanagawa (JP); Nobuho Ikeda, Kanagawa (JP); and Hiroshi Ikeda, Tokyo (JP)
Assigned to SONY CORPORATION, Tokyo (JP)
Appl. No. 16/628,892
Filed by SONY CORPORATION, Tokyo (JP)
PCT Filed May 25, 2018, PCT No. PCT/JP2018/020129
§ 371(c)(1), (2) Date Jan. 6, 2020,
PCT Pub. No. WO2019/012817, PCT Pub. Date Jan. 17, 2019.
Claims priority of application No. 2017-138039 (JP), filed on Jul. 14, 2017; and application No. 2018-079061 (JP), filed on Apr. 17, 2018.
Prior Publication US 2020/0234495 A1, Jul. 23, 2020
Int. Cl. G06T 19/00 (2011.01); G06T 15/20 (2011.01); H04N 21/6587 (2011.01); A63F 13/5258 (2014.01)
CPC G06T 19/003 (2013.01) [A63F 13/5258 (2014.09); G06T 15/20 (2013.01); H04N 21/6587 (2013.01)] 13 Claims
OG exemplary drawing
 
1. An image processing apparatus, comprising:
processing circuitry configured to:
receive captured images from a plurality of imaging devices;
generate a three-dimensional model that represents an imaged imaging object in a three-dimensional space according to the captured images, parameters of the plurality of imaging devices, and triangulation,
a person being targeted as the imaging object, and
the three-dimensional model including detection points representing joints of the person and lines interconnecting the detection points;
detect an orientation of the imaging object at a reference position of the imaging object according to a face orientation and a posture of the three-dimensional model of the imaging object;
set a viewpoint in the three-dimensional space at a predetermined distance from the reference position along a direction of the detected orientation, on a basis of attribute information of the imaging object, the attribute information including a name of the person;
generate an observation image that is a virtual viewpoint image taken from the viewpoint; and
change the viewpoint by following a movement of the imaging object, wherein
in response to the three-dimensional model representing a plurality of imaging objects, the processing circuitry is configured to define lines, using a set of at least three imaging objects, each line passing through two of the at least three imaging objects and through two reference viewpoints that lie outside a shape defined by the at least three imaging objects and are oriented along that line toward the two imaging objects, and to set the viewpoint as a midpoint between the two reference viewpoints nearest to an intersection of two of the lines.
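
The model-generation step of claim 1 builds detection points for the person's joints from the captured images, the imaging devices' parameters, and triangulation. The following is a minimal sketch of that step, assuming each device's parameters are given as a 3x4 projection matrix and using the standard linear (DLT) formulation; the solver, the matrix form, and the use of NumPy are illustrative assumptions, not details taken from the patent.

```python
# Hedged sketch of the triangulation step: recover one 3-D detection point
# (a joint) from its 2-D detections in several captured images, given each
# imaging device's parameters as a 3x4 projection matrix. The linear DLT
# formulation is a standard method assumed here; the claim names no solver.
import numpy as np

def triangulate_joint(projections, pixels):
    """projections: list of 3x4 camera projection matrices P_i.
    pixels: list of (u, v) detections of the same joint in each image.
    Returns the 3-D point minimising the linear (algebraic) residual."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # u * (third row of P) - (first row of P)
        rows.append(v * P[2] - P[1])   # v * (third row of P) - (second row of P)
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                         # homogeneous solution: last right singular vector
    return X[:3] / X[3]                # de-homogenise to a 3-D detection point
```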
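The orientation-detection and viewpoint-setting steps can be pictured as below: a reference position and a facing direction are derived from the triangulated joints, the viewpoint is placed a predetermined distance from the reference position along that direction, and it is recomputed per frame so it follows the imaging object's movement. The joint names, the shoulder-line heuristic standing in for face orientation and posture, the hip-midpoint reference position, the world up-axis, and the default distance are all assumptions for illustration, not the patented implementation.

```python
# Hedged sketch of viewpoint placement for a single imaging object.
import numpy as np

def body_orientation(joints):
    """Approximate the facing direction from the shoulder line (a posture cue).

    The claim combines face orientation and posture; here the facing direction
    is assumed horizontal and perpendicular to the shoulder axis."""
    shoulder_axis = joints["right_shoulder"] - joints["left_shoulder"]
    up = np.array([0.0, 0.0, 1.0])           # assumed world up-axis
    facing = np.cross(up, shoulder_axis)     # horizontal, perpendicular to the shoulders
    return facing / np.linalg.norm(facing)

def set_viewpoint(joints, distance=2.0):
    """Place the viewpoint `distance` units from the reference position along
    the detected orientation; looking back at the person is an assumption made
    here for rendering the observation image."""
    reference = 0.5 * (joints["left_hip"] + joints["right_hip"])  # assumed reference position
    forward = body_orientation(joints)
    viewpoint = reference + distance * forward
    look_direction = -forward
    return viewpoint, look_direction

def follow(joint_frames, distance=2.0):
    """Re-run per frame so the viewpoint follows the imaging object's movement."""
    return [set_viewpoint(frame, distance) for frame in joint_frames]
```

Re-running `set_viewpoint` on each frame's joint set is one straightforward way to read the "change the viewpoint by following a movement of the imaging object" limitation.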
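For the plural-imaging-object case, claim 1 defines pairwise lines through the objects, places two reference viewpoints on each line outside the shape the objects span, and sets the viewpoint as the midpoint of the two reference viewpoints nearest an intersection of two of the lines. The sketch below is one hedged reading of that geometry: the intersection considered is the object shared by two pairwise lines, and the margin that pushes the reference viewpoints outside the shape is an arbitrary assumption.

```python
# Hedged sketch of the multi-object viewpoint rule under the assumptions above.
from itertools import combinations
import numpy as np

def reference_viewpoints(p, q, margin=1.0):
    """Two reference viewpoints on the line through p and q, each placed
    `margin` beyond one object so both lie outside the shape spanned by the
    objects, oriented along the line toward the two objects."""
    direction = (q - p) / np.linalg.norm(q - p)
    return p - margin * direction, q + margin * direction

def multi_object_viewpoint(positions, margin=1.0):
    """positions: 3-D reference positions of at least three imaging objects."""
    assert len(positions) >= 3
    pairs = list(combinations(range(len(positions)), 2))
    (a, b), (c, d) = pairs[0], pairs[1]      # two of the pairwise lines
    # Pairwise lines meet at a shared object; that object is taken here as the
    # intersection of the two lines (an assumption about the claim's geometry).
    shared = ({a, b} & {c, d}).pop()
    intersection = positions[shared]
    refs_ab = reference_viewpoints(positions[a], positions[b], margin)
    refs_cd = reference_viewpoints(positions[c], positions[d], margin)
    # On each line, keep the reference viewpoint nearest the intersection ...
    near_ab = min(refs_ab, key=lambda r: np.linalg.norm(r - intersection))
    near_cd = min(refs_cd, key=lambda r: np.linalg.norm(r - intersection))
    # ... and set the viewpoint as their midpoint.
    return 0.5 * (near_ab + near_cd)
```

With three people at the corners of a triangle, this places the viewpoint just outside the corner shared by the two chosen lines, between them; which pair of lines (and hence which intersection) is selected is left open here.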