US 12,332,432 B2
Tracking apparatus, method, and non-transitory computer readable storage medium thereof
Jyun-Jhong Lin, Taoyuan (TW); and Chun-Kai Huang, Taoyuan (TW)
Assigned to HTC Corporation, Taoyuan (TW)
Filed by HTC Corporation, Taoyuan (TW)
Filed on Jul. 6, 2022, as Appl. No. 17/810,837.
Prior Publication US 2024/0012238 A1, Jan. 11, 2024
Int. Cl. G02B 27/00 (2006.01); G02B 27/01 (2006.01); G06T 7/70 (2017.01)
CPC G02B 27/0093 (2013.01) [G02B 27/017 (2013.01); G06T 7/70 (2017.01)] 12 Claims
OG exemplary drawing
 
1. A tracking apparatus, comprising:
an image capturing device, being configured to generate a real-time image corresponding to a field of view; and
a processor, being electrically connected to the image capturing device, and being configured to perform operations comprising:
calculating first pose information corresponding to a target object based on the real-time image in response to the target object appearing in the field of view, wherein the target object is tracked by a head-mounted display; and
transmitting the first pose information to the head-mounted display to cause the head-mounted display to calculate fusion pose information of the target object according to the first pose information;
wherein the head-mounted display is further configured to receive a series of inertial measurement parameters corresponding to the target object and to calculate the fusion pose information according to the first pose information and the series of inertial measurement parameters, wherein the series of inertial measurement parameters are generated by an inertial measurement unit installed on the target object and are used to detect finer movement of the target object, and a return frequency of the series of inertial measurement parameters is greater than the return frequency of the first pose information;
wherein the processor is further configured to perform the following operations:
receiving second pose information corresponding to the target object from the head-mounted display before determining whether the target object tracked by the head-mounted display appears in the field of view, wherein the second pose information comprises pose information of the target object in a previous time period; and
determining whether the target object tracked by the head-mounted display appears in the field of view corresponding to the image capturing device comprised in the tracking apparatus based on the second pose information and the real-time image, wherein the target object is manipulated by a user using the head-mounted display, the target object is a handheld controller, a wearable controller, or a user's hand contour, and the head-mounted display and the target object correspond to the same user;
wherein the tracking apparatus further comprises:
a storage, being electrically connected to the processor, and being configured to store a spatial map;
wherein the processor is further configured to perform the following operations:
receiving a map packet from the head-mounted display; and
calibrating the spatial map based on the map packet to align the spatial map with a spatial coordinate system corresponding to the head-mounted display.
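Claim 1's fusion step combines a low-return-frequency optical pose with high-return-frequency inertial measurement parameters that capture finer movement between optical updates. A minimal sketch of such a scheme, using a simple complementary filter, is shown below; the class name, the blending constant, and the 3-D position/velocity state are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

class PoseFuser:
    """Illustrative fusion of a low-rate optical pose with high-rate IMU data.

    The optical tracker's output (the claim's "first pose information")
    arrives at a low return frequency; IMU samples arrive faster and are
    dead-reckoned between optical corrections. All names and the blending
    constant are assumptions for this sketch.
    """

    def __init__(self, alpha=0.9):
        self.position = np.zeros(3)   # fused position estimate
        self.velocity = np.zeros(3)   # velocity integrated from IMU acceleration
        self.alpha = alpha            # weight placed on the optical pose

    def on_imu(self, accel, dt):
        """High-frequency update: integrate linear acceleration over dt."""
        self.velocity += np.asarray(accel, dtype=float) * dt
        self.position += self.velocity * dt

    def on_optical(self, pose):
        """Low-frequency correction: blend toward the camera-derived pose."""
        self.position = (self.alpha * np.asarray(pose, dtype=float)
                         + (1 - self.alpha) * self.position)
        return self.position
```

In use, several `on_imu` calls would occur between each `on_optical` call, reflecting the claim's statement that the IMU return frequency exceeds that of the first pose information.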
 
6. A tracking method, being adapted for use in an electronic apparatus, wherein the electronic apparatus comprises an image capturing device and a processor, the image capturing device is configured to generate a real-time image corresponding to a field of view, and the tracking method comprises:
calculating first pose information corresponding to a target object based on the real-time image in response to the target object appearing in the field of view, wherein the target object is tracked by a head-mounted display; and
transmitting the first pose information to the head-mounted display to cause the head-mounted display to calculate fusion pose information of the target object according to the first pose information;
wherein the head-mounted display is further configured to receive a series of inertial measurement parameters corresponding to the target object and to calculate the fusion pose information according to the first pose information and the series of inertial measurement parameters, wherein the series of inertial measurement parameters are generated by an inertial measurement unit installed on the target object and are used to detect finer movement of the target object, and a return frequency of the series of inertial measurement parameters is greater than the return frequency of the first pose information;
wherein the tracking method further comprises the following steps:
receiving second pose information corresponding to the target object from the head-mounted display before determining whether the target object tracked by the head-mounted display appears in the field of view, wherein the second pose information comprises pose information of the target object in a previous time period; and
determining whether the target object tracked by the head-mounted display appears in the field of view corresponding to the image capturing device comprised in the electronic apparatus based on the second pose information and the real-time image, wherein the target object is manipulated by a user using the head-mounted display, the target object is a handheld controller, a wearable controller, or a user's hand contour, and the head-mounted display and the target object correspond to the same user;
wherein the electronic apparatus further comprises a storage, the storage is configured to store a spatial map, and the tracking method further comprises the following steps:
receiving a map packet from the head-mounted display; and
calibrating the spatial map based on the map packet to align the spatial map with a spatial coordinate system corresponding to the head-mounted display.
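The determining step in claims 1 and 6 checks, from the previously reported pose ("second pose information"), whether the target object falls inside the image capturing device's field of view. One way to sketch that check is a pinhole-camera projection test; the camera model, intrinsic matrix, and function name below are assumptions for illustration, since the patent does not specify a camera model.

```python
import numpy as np

def target_in_fov(prior_pose_cam, K, width, height):
    """Decide whether a previously reported target position falls inside
    the camera's field of view.

    prior_pose_cam: 3-vector, target position in the camera frame.
    K: 3x3 pinhole intrinsic matrix (an assumed camera model).
    width, height: image dimensions in pixels.
    """
    x, y, z = prior_pose_cam
    if z <= 0:                      # behind the image plane: not visible
        return False
    u = K[0, 0] * x / z + K[0, 2]   # project to pixel coordinates
    v = K[1, 1] * y / z + K[1, 2]
    return bool(0 <= u < width and 0 <= v < height)
```

A positive result would trigger the pose calculation from the real-time image; a negative one means the apparatus has nothing to report for that frame.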
 
11. A non-transitory computer readable storage medium, having a computer program stored therein, wherein the computer program comprises a plurality of codes and, after being loaded into an electronic apparatus, causes the electronic apparatus to execute a tracking method, the electronic apparatus comprises an image capturing device and a processor, the image capturing device is configured to generate a real-time image corresponding to a field of view, and the tracking method comprises:
calculating first pose information corresponding to a target object based on the real-time image in response to the target object appearing in the field of view, wherein the target object is tracked by a head-mounted display; and
transmitting the first pose information to the head-mounted display to cause the head-mounted display to calculate fusion pose information of the target object according to the first pose information;
wherein the head-mounted display is further configured to receive a series of inertial measurement parameters corresponding to the target object and to calculate the fusion pose information according to the first pose information and the series of inertial measurement parameters, wherein the series of inertial measurement parameters are generated by an inertial measurement unit installed on the target object and are used to detect finer movement of the target object, and a return frequency of the series of inertial measurement parameters is greater than the return frequency of the first pose information;
wherein the tracking method further comprises the following steps:
receiving second pose information corresponding to the target object from the head-mounted display before determining whether the target object tracked by the head-mounted display appears in the field of view, wherein the second pose information comprises pose information of the target object in a previous time period; and
determining whether the target object tracked by the head-mounted display appears in the field of view corresponding to the image capturing device comprised in the electronic apparatus based on the second pose information and the real-time image, wherein the target object is manipulated by a user using the head-mounted display, the target object is a handheld controller, a wearable controller, or a user's hand contour, and the head-mounted display and the target object correspond to the same user;
wherein the electronic apparatus further comprises a storage, the storage is configured to store a spatial map, and the tracking method further comprises the following steps:
receiving a map packet from the head-mounted display; and
calibrating the spatial map based on the map packet to align the spatial map with a spatial coordinate system corresponding to the head-mounted display.
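Each claim ends with calibrating the tracking apparatus's spatial map so that it aligns with the head-mounted display's coordinate system. If the map packet carried matched anchor points from both maps (an assumption; the patent does not specify the packet's contents), the alignment could be estimated as a rigid transform via the standard Kabsch/SVD solution, sketched below with illustrative names.

```python
import numpy as np

def align_maps(local_pts, hmd_pts):
    """Estimate the rigid transform (R, t) carrying the tracking apparatus's
    spatial map onto the head-mounted display's coordinate system, from
    matched anchor points assumed to arrive in the "map packet".

    Uses the Kabsch/SVD method: hmd_pts[i] ~= R @ local_pts[i] + t.
    """
    local_pts = np.asarray(local_pts, dtype=float)
    hmd_pts = np.asarray(hmd_pts, dtype=float)
    cl, ch = local_pts.mean(axis=0), hmd_pts.mean(axis=0)
    H = (local_pts - cl).T @ (hmd_pts - ch)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ch - R @ cl
    return R, t
```

Applying `(R, t)` to every point in the local spatial map would express it in the head-mounted display's coordinates, which is the effect the calibrating step describes.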