US 12,329,452 B2
Ophthalmic apparatus, method for controlling ophthalmic apparatus, and computer-readable medium
Mitsuhiro Ono, Tokyo (JP); and Ritsuya Tomita, Kanagawa (JP)
Assigned to Canon Kabushiki Kaisha, Tokyo (JP)
Filed by CANON KABUSHIKI KAISHA, Tokyo (JP)
Filed on Jan. 31, 2022, as Appl. No. 17/588,367.
Application 17/588,367 is a continuation of application No. PCT/JP2020/029310, filed on Jul. 30, 2020.
Claims priority of application No. 2019-147940 (JP), filed on Aug. 9, 2019; application No. 2019-234950 (JP), filed on Dec. 25, 2019; and application No. 2020-046233 (JP), filed on Mar. 17, 2020.
Prior Publication US 2022/0151483 A1, May 19, 2022
Int. Cl. A61B 3/00 (2006.01); A61B 3/10 (2006.01); A61B 3/103 (2006.01); A61B 3/12 (2006.01); A61B 3/14 (2006.01); G06F 18/214 (2023.01); G06T 7/00 (2017.01); G06T 11/00 (2006.01)
CPC A61B 3/0083 (2013.01) [A61B 3/0025 (2013.01); A61B 3/0058 (2013.01); A61B 3/102 (2013.01); A61B 3/103 (2013.01); A61B 3/12 (2013.01); A61B 3/14 (2013.01); G06F 18/214 (2023.01); G06T 7/0014 (2013.01); G06T 11/00 (2013.01); G06T 2207/20081 (2013.01); G06T 2207/30041 (2013.01)] 18 Claims
OG exemplary drawing
 
1. An ophthalmic apparatus comprising:
a first optical head unit including an optical system arranged to irradiate a first eye to be examined with light and to detect return light from the first eye to be examined;
an information obtaining unit configured to, using a first image relating to the first eye to be examined that is obtained using the first optical head unit as input data of a learned model, the learned model having been obtained by using (1) a second image relating to a second eye to be examined, the second image having been obtained using a second optical head unit, and (2) information of a position relating to at least one of the second eye to be examined and the second optical head unit, obtain, as output data from the learned model, information of a position relating to at least one of the first eye to be examined and the first optical head unit; and
a drive controlling unit configured to control driving of at least one of a supporter arranged to support a face of a subject and the first optical head unit,
wherein the drive controlling unit is configured to control the driving of the at least one of the supporter and the first optical head unit based on the obtained information of the position to cause at least one of the first eye to be examined and the first optical head unit to move to the position,
wherein the first image relating to the first eye to be examined includes a first image relating to a fundus of the first eye to be examined,
wherein the second image relating to the second eye to be examined includes a second image relating to a fundus of the second eye to be examined,
wherein the information obtaining unit is configured to, using the first image relating to the fundus of the first eye to be examined that is obtained using the first optical head unit as input data of the learned model, the learned model having been obtained by using (a) the second image relating to the fundus of the second eye to be examined, the second image having been obtained using the second optical head unit, (b) information of a position relating to at least one of the second eye to be examined and the second optical head unit, (c) information of a position relating to a focusing optical system, and (d) information of a position relating to a coherence gate, obtain, as output data from the learned model, information of positions relating to (a) at least one of the first eye to be examined and the first optical head unit, (b) the focusing optical system, and (c) the coherence gate, and
wherein the drive controlling unit is configured to adjust an arrangement of at least one of the supporter and the first optical head unit, the focusing optical system, and the coherence gate based on the obtained information of the positions relating to (a) at least one of the first eye to be examined and the first optical head unit, (b) the focusing optical system, and (c) the coherence gate.
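The claim describes feeding a fundus image from the first optical head unit into a learned model that outputs position information for the head unit (or supporter), the focusing optical system, and the coherence gate, which the drive controlling unit then applies. The following is a minimal illustrative sketch of that flow, assuming a convolutional regression network; every name here (AlignmentNet, drive_to_positions, the five-value output layout, and all dimensions) is a hypothetical assumption for illustration, not the patent's actual implementation.

import torch
import torch.nn as nn


class AlignmentNet(nn.Module):
    """Hypothetical learned model: fundus image -> five position values
    (head-unit x/y/z offsets, focus lens position, coherence-gate position)."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 5)

    def forward(self, fundus: torch.Tensor) -> torch.Tensor:
        x = torch.flatten(self.features(fundus), 1)
        return self.head(x)


def drive_to_positions(outputs: torch.Tensor) -> dict:
    """Map the model output to drive targets; an actual apparatus would forward
    these to the stages driving the head unit, focus lens, and reference path."""
    x, y, z, focus, gate = outputs.squeeze(0).tolist()
    return {
        "head_unit_xyz_mm": (x, y, z),   # alignment of the first optical head unit
        "focus_lens_mm": focus,          # focusing optical system position
        "coherence_gate_mm": gate,       # coherence gate (reference path) position
    }


if __name__ == "__main__":
    # Weights would come from training on second-eye fundus images paired with
    # recorded head-unit, focus, and coherence-gate positions, per the claim.
    model = AlignmentNet().eval()
    fundus_image = torch.randn(1, 1, 256, 256)  # stand-in for an observed fundus image
    with torch.no_grad():
        targets = drive_to_positions(model(fundus_image))
    print(targets)

In this sketch the single network regresses all three position groups at once; the claim equally covers arrangements in which the supporter rather than the head unit is driven, since the drive controlling unit may move either toward the obtained position.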