US 12,243,172 B2
Image processing apparatus and image processing method
Masaki Fukuchi, Tokyo (JP); Kenichirou Ooi, Kanagawa (JP); and Tatsuki Kashitani, Tokyo (JP)
Assigned to SONY GROUP CORPORATION, Tokyo (JP)
Filed by SONY GROUP CORPORATION, Tokyo (JP)
Filed on Aug. 31, 2022, as Appl. No. 17/823,668.
Application 17/823,668 is a continuation of application No. 17/181,051, filed on Feb. 22, 2021, granted, now 11,468,648.
Application 17/181,051 is a continuation of application No. 15/816,500, filed on Nov. 17, 2017, granted, now 10,950,053, issued on Mar. 16, 2021.
Application 15/816,500 is a continuation of application No. 15/384,754, filed on Dec. 20, 2016, granted, now 9,842,435, issued on Dec. 12, 2017.
Application 15/384,754 is a continuation of application No. 15/162,246, filed on May 23, 2016, granted, now 9,552,677, issued on Jan. 24, 2017.
Application 15/162,246 is a continuation of application No. 14/391,874, granted, now 9,373,196, issued on Jun. 21, 2016, previously published as PCT/JP2013/002059, filed on Mar. 26, 2013.
Claims priority of application No. 2012-097714 (JP), filed on Apr. 23, 2012.
Prior Publication US 2022/0414993 A1, Dec. 29, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06F 3/01 (2006.01); G06F 18/24 (2023.01); G06T 7/73 (2017.01); G06V 10/75 (2022.01); G06V 20/20 (2022.01); H04N 7/18 (2006.01)
CPC G06T 19/006 (2013.01) [G06F 3/011 (2013.01); G06F 18/24 (2023.01); G06T 7/73 (2017.01); G06V 10/757 (2022.01); G06V 20/20 (2022.01); H04N 7/18 (2013.01); G06T 2207/10016 (2013.01)] 24 Claims
OG exemplary drawing
 
1. An information processing apparatus comprising:
circuitry configured to:
receive an image of a real space captured by an image capturing device,
detect a feature point based on the image,
determine a relative position of an object within the real space based on the image and the feature point,
generate environment information based on the image and the feature point,
determine an explored area or an un-explored area based on the environment information, and
control a display to
display a first virtual object that provides a navigation instruction that instructs a user to explore the un-explored area containing less environment information than that of the explored area, or
display a second virtual object that provides a navigation instruction that instructs a user to avoid activities outside the explored area.
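The claimed logic above can be illustrated with a minimal sketch. It assumes the "environment information" is approximated by feature-point counts accumulated in a grid over the real space: a cell is "explored" once it holds enough points, and the choice between the first virtual object (explore the un-explored area) and the second (avoid activities outside the explored area) depends on whether the user's current cell is explored. All names, the grid representation, and the threshold are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the claimed logic: classify grid cells of the real
# space as "explored" or "un-explored" from accumulated feature-point counts
# (a stand-in for the claim's "environment information"), then pick which
# virtual navigation object to display. All names and thresholds are
# illustrative assumptions.
from dataclasses import dataclass, field

EXPLORED_THRESHOLD = 10  # assumed minimum feature points for an explored cell

@dataclass
class EnvironmentMap:
    cell_size: float = 1.0                      # metres per grid cell
    counts: dict = field(default_factory=dict)  # (ix, iy) -> feature-point count

    def add_feature_point(self, x: float, y: float) -> None:
        """Accumulate environment information from a detected feature point."""
        key = (int(x // self.cell_size), int(y // self.cell_size))
        self.counts[key] = self.counts.get(key, 0) + 1

    def is_explored(self, x: float, y: float) -> bool:
        key = (int(x // self.cell_size), int(y // self.cell_size))
        return self.counts.get(key, 0) >= EXPLORED_THRESHOLD

def choose_virtual_object(env: EnvironmentMap, user_x: float, user_y: float) -> str:
    """Return which navigation instruction to render at the user's position."""
    if env.is_explored(user_x, user_y):
        # Cell already has enough environment information: warn the user
        # to avoid activities outside the explored area (second object).
        return "second: stay within the explored area"
    # Cell has less environment information than an explored cell: guide
    # the user to explore it (first object).
    return "first: explore this un-explored area"

env = EnvironmentMap()
for _ in range(12):                  # twelve feature points observed near origin
    env.add_feature_point(0.4, 0.6)  # all fall in cell (0, 0)

print(choose_virtual_object(env, 0.4, 0.6))  # user in explored cell (0, 0)
print(choose_virtual_object(env, 3.2, 0.6))  # user in un-explored cell (3, 0)
```

In a real system the map would come from a SLAM pipeline in three dimensions; the grid here merely shows how the explored/un-explored determination can gate which virtual object the display controller renders.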