US 11,700,365 B2
Position detection method, position detection device, and display device
Jun Yang, Markham (CA); and Guoyi Fu, Richmond Hill (CA)
Assigned to Seiko Epson Corporation, Tokyo (JP)
Filed by SEIKO EPSON CORPORATION, Tokyo (JP)
Filed on Feb. 17, 2021, as Appl. No. 17/177,303.
Claims priority of application No. 2020-024060 (JP), filed on Feb. 17, 2020.
Prior Publication US 2021/0258551 A1, Aug. 19, 2021
Int. Cl. H04N 17/00 (2006.01); H04N 9/31 (2006.01); G06F 3/03 (2006.01); G06T 7/174 (2017.01); G06T 7/194 (2017.01); H04N 23/80 (2023.01); G06V 30/142 (2022.01); G06V 30/19 (2022.01); G06V 30/32 (2022.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 10/50 (2022.01)
CPC H04N 17/002 (2013.01) [G06F 3/0304 (2013.01); G06T 7/174 (2017.01); G06T 7/194 (2017.01); G06V 10/454 (2022.01); G06V 10/507 (2022.01); G06V 10/82 (2022.01); G06V 30/1423 (2022.01); G06V 30/19173 (2022.01); G06V 30/32 (2022.01); H04N 9/3194 (2013.01); H04N 23/80 (2023.01)] 3 Claims
OG exemplary drawing
 
1. A position detection method of detecting a position on an operation surface pointed to by a pointing element, the method comprising:
irradiating the operation surface with infrared light;
obtaining a first taken image by imaging the operation surface with a first camera configured to take an image with the infrared light;
obtaining a second taken image by imaging the operation surface with a second camera different in imaging viewpoint from the first camera and configured to take an image with the infrared light;
removing a noise component from the first taken image and the second taken image based on a degree of coincidence in luminance gradient between a first background image and the first taken image, and a degree of coincidence in luminance gradient between a second background image and the second taken image, the first background image being obtained by imaging the operation surface with the first camera when the pointing element does not point to the operation surface, and the second background image being obtained by imaging the operation surface with the second camera when the pointing element does not point to the operation surface;
converting the first taken image, from which the noise component has been removed, into a first converted taken image calibrated with respect to the operation surface;
converting the second taken image, from which the noise component has been removed, into a second converted taken image calibrated with respect to the operation surface;
forming a difference image between the first converted taken image and the second converted taken image;
extracting, from the difference image, an area in which a disparity amount between the first converted taken image and the second converted taken image is within a predetermined range as a candidate area in which an image of the pointing element is included, wherein the extracting includes
calculating a variance value of pixel values of pixels constituting the difference image,
separating the difference image, based on the variance value, into a background area and a foreground area which is an area other than the background area, and
extracting the candidate area based on both a boundary between the background area and the foreground area and the area in which the disparity amount is within the predetermined range;
detecting a tip position of the pointing element from the candidate area based on a shape of the pointing element; and
determining, based on a result of the detecting, a pointing position of the pointing element on the operation surface and whether or not the pointing element is in contact with the operation surface.
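A minimal sketch of the noise-removal step of claim 1, assuming OpenCV and NumPy, Sobel gradients, and a windowed normalized dot product as the "degree of coincidence in luminance gradient" between a taken image and its background image. The function names, file names, and the coincidence threshold are illustrative assumptions, not the patent's specific implementation.

```python
import cv2
import numpy as np

def luminance_gradient(img):
    """Horizontal/vertical Sobel gradients of a grayscale (infrared) image."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    return gx, gy

def gradient_coincidence(taken, background, win=7):
    """Per-pixel degree of coincidence in luminance gradient between a taken
    image and its background image: normalized dot product of the gradient
    vectors, averaged over a local window. Values near 1 mean the local
    structure already exists in the background, i.e. it is not the pointer."""
    gx_t, gy_t = luminance_gradient(taken)
    gx_b, gy_b = luminance_gradient(background)
    dot = gx_t * gx_b + gy_t * gy_b
    mag = np.sqrt(gx_t ** 2 + gy_t ** 2) * np.sqrt(gx_b ** 2 + gy_b ** 2) + 1e-6
    coincidence = cv2.boxFilter(dot / mag, cv2.CV_32F, (win, win))
    return np.clip(coincidence, -1.0, 1.0)

def remove_noise(taken, background, thresh=0.8):
    """Suppress pixels whose gradient structure coincides with the background;
    what remains is treated as candidate foreground (e.g. a finger)."""
    coincidence = gradient_coincidence(taken, background)
    cleaned = taken.copy()
    cleaned[coincidence > thresh] = 0
    return cleaned

if __name__ == "__main__":
    # File names are placeholders for the two infrared cameras of claim 1.
    first = cv2.imread("first_taken.png", cv2.IMREAD_GRAYSCALE)
    first_bg = cv2.imread("first_background.png", cv2.IMREAD_GRAYSCALE)
    second = cv2.imread("second_taken.png", cv2.IMREAD_GRAYSCALE)
    second_bg = cv2.imread("second_background.png", cv2.IMREAD_GRAYSCALE)
    first_clean = remove_noise(first, first_bg)
    second_clean = remove_noise(second, second_bg)
```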
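A sketch of the converting and difference-forming steps, assuming the calibration of each camera with respect to the operation surface is expressed as a homography obtained in advance (for example, from projected calibration markers). The homography values and output size below are placeholders; on the surface plane the two converted images coincide, so off-surface points are what survive in the difference image.

```python
import cv2
import numpy as np

def convert_to_surface(img, homography, size):
    """Warp a cleaned taken image into a converted taken image calibrated
    with respect to the operation surface (surface-aligned coordinates)."""
    return cv2.warpPerspective(img, homography, size)

def difference_image(first_conv, second_conv):
    """Form the difference image between the two converted taken images.
    Points on the operation surface cancel; points above it (a fingertip)
    leave a residual with a disparity between the two views."""
    return cv2.absdiff(first_conv, second_conv)

# Placeholder homographies and surface size; in practice these come from a
# camera-to-surface calibration step not shown here.
H1 = np.eye(3, dtype=np.float32)
H2 = np.eye(3, dtype=np.float32)
SURFACE_SIZE = (1280, 800)  # (width, height) in surface coordinates

def build_difference(first_clean, second_clean):
    first_conv = convert_to_surface(first_clean, H1, SURFACE_SIZE)
    second_conv = convert_to_surface(second_clean, H2, SURFACE_SIZE)
    return first_conv, second_conv, difference_image(first_conv, second_conv)
```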
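A sketch of the candidate-area extraction, assuming block matching (`cv2.StereoBM_create`) for the disparity amount and Otsu's method (`cv2.threshold` with `THRESH_OTSU`) as one variance-based way to separate the difference image into background and foreground; the disparity range and these library choices are assumptions, not the claimed implementation.

```python
import cv2
import numpy as np

def candidate_area(first_conv, second_conv, diff, d_min=2, d_max=24):
    """Extract a candidate area that may contain the pointing element:
    pixels whose disparity between the converted images lies within a
    predetermined range, intersected with the foreground obtained from a
    variance-based (Otsu) split of the difference image."""
    # Disparity between the two converted taken images (block matching).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(first_conv, second_conv).astype(np.float32) / 16.0

    # Area in which the disparity amount is within the predetermined range.
    disparity_mask = ((disparity >= d_min) & (disparity <= d_max)).astype(np.uint8)

    # Separate the difference image into background and foreground at the
    # threshold that maximizes the between-class variance of pixel values.
    _, foreground = cv2.threshold(diff, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Candidate area: foreground (bounded against the background area) that
    # also satisfies the disparity condition.
    candidate = cv2.bitwise_and(foreground, foreground, mask=disparity_mask)
    return candidate, disparity
```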
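A sketch of the final detecting and determining steps, assuming the tip is taken as the topmost point of the largest candidate contour and contact is decided by comparing the disparity at the tip against a small threshold (near-zero disparity means the tip lies on the surface plane, since both images were warped to surface coordinates). The contour heuristic and the threshold value are assumptions.

```python
import cv2
import numpy as np

def detect_tip(candidate):
    """Detect a tip position of the pointing element from the candidate area
    based on its shape: take the largest contour and use its topmost point
    as a simple fingertip proxy."""
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    finger = max(contours, key=cv2.contourArea)
    tip = tuple(finger[finger[:, :, 1].argmin()][0])  # topmost contour point
    return tip  # (x, y) in operation-surface coordinates

def determine_pointing(candidate, disparity, contact_disparity=1.5):
    """Determine the pointing position and whether the pointing element is in
    contact with the operation surface: a tip with near-zero disparity lies
    on the surface plane (touch); a larger disparity means hovering."""
    tip = detect_tip(candidate)
    if tip is None:
        return None, False
    x, y = tip
    touching = abs(float(disparity[y, x])) <= contact_disparity
    return tip, touching
```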