US 12,223,607 B2
Mixed reality system, program, mobile terminal device, and method
Shuichi Kurabayashi, Tokyo (JP)
Assigned to CYGAMES, INC., Tokyo (JP)
Filed by CYGAMES, INC., Tokyo (JP)
Filed on Nov. 3, 2023, as Appl. No. 18/501,702.
Application 18/501,702 is a continuation of application No. 17/181,456, filed on Feb. 22, 2021.
Application 17/181,456 is a continuation of application No. PCT/JP2019/032969, filed on Aug. 23, 2019.
Claims priority of application No. 2018-157410 (JP), filed on Aug. 24, 2018.
Prior Publication US 2024/0071016 A1, Feb. 29, 2024
Int. Cl. G06T 19/00 (2011.01); A63F 13/213 (2014.01); A63F 13/25 (2014.01); A63F 13/525 (2014.01); A63F 13/98 (2014.01); G06F 3/01 (2006.01); G06F 3/048 (2013.01); G06T 7/73 (2017.01)
CPC G06T 19/006 (2013.01) [A63F 13/213 (2014.09); A63F 13/25 (2014.09); A63F 13/525 (2014.09); A63F 13/98 (2014.09); G06F 3/01 (2013.01); G06F 3/048 (2013.01); G06T 7/73 (2017.01); G06T 2207/30204 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A mixed reality system for displaying, on a display for displaying a first virtual object to a user present in a prescribed real space, a mixed-reality image in which an image of the first virtual object arranged in a virtual space corresponding to the prescribed real space is superimposed on a photographed image of the prescribed real space, the mixed reality system comprising a mobile terminal device having the display and a photographing device that photographs the prescribed real space, wherein:
the mixed reality system includes a plurality of feature point sets arranged in the prescribed real space, the plurality of feature point sets including identifiable information that allows identification of each of the plurality of feature point sets, and at least three feature point sets among the plurality of feature point sets being arranged so as to have predefined positional relationships; and
the mobile terminal device is configured to:
store data, obtained in advance, of the first virtual object that corresponds to a real object present in the prescribed real space and that defines the virtual space, and data of a second virtual object, in the virtual space, that does not correspond to the real object,
store arrangement position and posture information in the virtual space for each of the plurality of feature point sets arranged in the prescribed real space,
recognize each of the plurality of feature point sets photographed by the photographing device,
determine a plurality of projections using a homography matrix, a template image of a feature point set among the plurality of feature point sets, and a plurality of local features in the photographed image,
wherein the plurality of projections are selected from a group consisting of one or more rotation projections, one or more enlargement projections, one or more reduction projections, and one or more deformation projections,
determine a relative position of the mobile terminal device using the plurality of projections,
determine a viewpoint position of a virtual camera, in the virtual space, corresponding to a position and a photographing direction of the photographing device in the prescribed real space, based on the arrangement position and posture information for each of the plurality of feature point sets, the arrangement position and posture information being obtained from the identifiable information of some or all of the recognized feature point sets, and the relative position and posture information of the mobile terminal device with respect to each of the plurality of feature point sets, the relative position and posture information being determined from shapes and sizes of the plurality of feature point sets, and
determine a first depth distance between the viewpoint position of the virtual camera and the first virtual object using the plurality of feature point sets,
determine a second depth distance between the viewpoint position of the virtual camera and the second virtual object using the plurality of feature point sets,
determine whether the first virtual object is closer to the viewpoint position than the second virtual object based on the first depth distance and the second depth distance,
generate, in response to determining that the second virtual object is closer than the first virtual object and based on the first depth distance of the first virtual object, the second depth distance of the second virtual object, and the viewpoint position, a mixed-reality image in which an image of the second virtual object according to the viewpoint position is superimposed on the photographed image of the prescribed real space,
wherein a first portion of the second virtual object is superimposed on the first virtual object in the mixed-reality image in response to determining that the second virtual object is closer to the viewpoint position than the first virtual object,
wherein a second portion of the first virtual object is not displayed in the mixed-reality image based on the first depth distance of the second portion of the first virtual object being located behind the real object in the prescribed real space,
wherein the plurality of feature point sets comprise five or more feature point sets that are AR markers, and
wherein the mobile terminal device, at prescribed time intervals, recognizes each of the five or more feature point sets photographed by the photographing device to determine the viewpoint position for generating the mixed-reality image.
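The pose-determination and occlusion steps recited in the claim can be illustrated with a short sketch. The claim does not disclose a specific algorithm, so the following is only one standard way to realize those steps: recovering the camera's rotation and translation relative to a planar AR marker from the homography H (with H proportional to K[r1 r2 t] for a z = 0 marker plane, K being the camera intrinsics), which yields the relative position used to place the virtual camera, and a trivial depth comparison deciding whether the second virtual object is drawn over the first. The function names `pose_from_homography` and `occludes` are hypothetical, not from the patent.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover the marker pose (R, t) from a plane-to-image homography H,
    given camera intrinsics K.  Uses the standard planar decomposition
    H ~ K [r1 r2 t] for a marker lying in the z = 0 plane."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])  # homography scale; rotation columns are unit length
    if lam * A[2, 2] < 0:                # enforce that the marker lies in front of the camera
        lam = -lam
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)                # third rotation column from orthonormality
    t = lam * A[:, 2]                    # translation: marker origin in camera coordinates
    return np.column_stack([r1, r2, r3]), t

def occludes(first_depth, second_depth):
    """True when the second virtual object is closer to the virtual camera
    than the first, so its pixels are superimposed on the first object."""
    return second_depth < first_depth
```

In practice the homography itself would be estimated from matches between the stored template image of a marker and local features in the photographed frame (e.g. with a RANSAC-based fit), and the recovered (R, t) for each of the five or more markers would be combined with the stored arrangement position and posture information to fix the virtual-camera viewpoint; the depth test above then decides the draw order of the first and second virtual objects.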