US 12,276,507 B2
Indoor navigation method, indoor navigation equipment, and storage medium
Erli Meng, Beijing (CN); and Luting Wang, Beijing (CN)
Assigned to BEIJING XIAOMI MOBILE SOFTWARE CO., LTD., Beijing (CN); and BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD., Beijing (CN)
Filed by Beijing Xiaomi Mobile Software Co., Ltd., Beijing (CN); and Beijing Xiaomi Pinecone Electronics Co., Ltd., Beijing (CN)
Filed on Dec. 21, 2021, as Appl. No. 17/645,449.
Claims priority of application No. 202110668381.0 (CN), filed on Jun. 16, 2021.
Prior Publication US 2022/0404153 A1, Dec. 22, 2022
Int. Cl. G06T 7/73 (2017.01); G01C 21/20 (2006.01); G06T 1/00 (2006.01); G06V 10/426 (2022.01); G06V 10/774 (2022.01); G06V 10/80 (2022.01); G06V 20/00 (2022.01)
CPC G01C 21/206 (2013.01) [G06T 1/0014 (2013.01); G06T 7/73 (2017.01); G06V 10/426 (2022.01); G06V 10/7747 (2022.01); G06V 10/806 (2022.01); G06V 20/36 (2022.01); G06T 2207/20072 (2013.01)] 15 Claims
OG exemplary drawing
 
1. An indoor navigation method, applied to navigation equipment, wherein the indoor navigation method comprises:
receiving an instruction for navigation, and collecting an environment image;
extracting an instruction room feature and an instruction object feature carried in the instruction, and determining a visual room feature, a visual object feature, and a view angle feature based on the environment image, wherein the instruction room feature is configured to indicate room information obtained from the instruction for navigation, the instruction object feature is configured to indicate object information obtained from the instruction for navigation, the visual room feature is configured to indicate room information obtained from the environment image, the visual object feature is configured to indicate object information obtained from the environment image, and the view angle feature is configured to reflect information carried in a view angle of the environment image;
fusing the instruction object feature and the visual object feature with a first knowledge graph representing an indoor object association relationship to obtain an object feature, and determining a room feature based on the visual room feature and the instruction room feature; and
determining a navigation decision based on the view angle feature, the room feature, and the object feature;
wherein fusing the instruction object feature and the visual object feature with the first knowledge graph representing the indoor object association relationship to obtain the object feature comprises:
extracting an object entity carried in the environment image based on the visual object feature;
constructing a second knowledge graph based on the object entity and the first knowledge graph representing the indoor object association relationship, wherein the second knowledge graph is configured to represent an association relationship between the object entity and a first object entity in the first knowledge graph that has an association relationship with the object entity;
performing multi-step graph convolutional reasoning on the first knowledge graph and the second knowledge graph respectively so as to obtain first knowledge graph reasoning information and second knowledge graph reasoning information;
fusing the first knowledge graph reasoning information with the second knowledge graph reasoning information, and updating the first knowledge graph by using the fused knowledge graph reasoning information;
performing a first feature fusing and reinforcing operation on the instruction object feature based on the second knowledge graph to obtain an enhanced instruction object feature; and
performing a second feature fusing and reinforcing operation on the updated first knowledge graph and the enhanced instruction object feature to obtain the object feature; and
wherein determining the room feature based on the visual room feature and the instruction room feature comprises:
determining a visual room category carried in each of the optional view angles based on the visual room feature, and determining an instruction room category carried in each of the optional view angles based on the instruction room feature;
determining a room confidence level of each of the optional view angles based on the visual room category, the instruction room category, and a preset room correlation matrix; and
determining the room feature based on the room confidence level of each of the optional view angles.
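The claimed fusing steps build a second knowledge graph from the object entities detected in the environment image plus their associated entities in the first knowledge graph, and then run multi-step graph convolutional reasoning over each graph. A minimal sketch of those two steps is below; the function names (`build_second_graph`, `multi_step_gcn`), the NumPy adjacency-matrix representation, and the GCN-style symmetric normalization are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def normalize_adj(adj):
    # Symmetrically normalize an adjacency matrix with self-loops,
    # as in a standard graph convolutional network (GCN).
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_step_gcn(adj, feats, weights):
    # One GCN propagation per weight matrix: H <- ReLU(A_norm @ H @ W).
    # Running several steps lets information flow along multi-hop paths,
    # which is one plausible reading of "multi-step graph convolutional
    # reasoning".
    a_norm = normalize_adj(adj)
    h = feats
    for w in weights:
        h = np.maximum(a_norm @ h @ w, 0.0)
    return h

def build_second_graph(detected, nodes, adj):
    # Induce a sub-graph over the detected object entities plus every
    # first-graph entity directly associated with one of them.
    index = {n: i for i, n in enumerate(nodes)}
    keep = {index[e] for e in detected if e in index}
    for i in list(keep):
        keep |= set(np.flatnonzero(adj[i]).tolist())
    keep = sorted(keep)
    return [nodes[i] for i in keep], adj[np.ix_(keep, keep)]
```

Under this reading, the reasoning outputs of the two graphs would then be fused (e.g., element-wise or by attention, which the claim leaves open) and written back to update the first knowledge graph.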
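The final limitation scores each optional view angle by combining its visual room category, the instruction room category, and a preset room correlation matrix. The sketch below assumes one simple realization: categories are integer indices, the correlation matrix entry `[i, j]` scores how compatible an observed category `i` is with an instructed category `j`, and the per-view scores are normalized into confidence levels. The taxonomy, matrix values, and normalization are all made up for illustration.

```python
import numpy as np

# Illustrative room categories; the patent does not fix a taxonomy.
KITCHEN, BEDROOM, BATHROOM = 0, 1, 2

# Preset room correlation matrix (values invented for illustration):
# entry [i, j] scores compatibility of observed category i with
# instructed category j.
ROOM_CORR = np.array([
    [1.0, 0.1, 0.3],
    [0.1, 1.0, 0.2],
    [0.3, 0.2, 1.0],
])

def room_confidences(visual_cats, instr_cat, corr=ROOM_CORR):
    # For each optional view angle, look up the correlation between the
    # room category seen in that view and the instructed room category,
    # then normalize across views to obtain per-view confidence levels.
    scores = corr[np.asarray(visual_cats), instr_cat]
    return scores / scores.sum()
```

For example, if three view angles show a kitchen, a bedroom, and a bathroom while the instruction names the bedroom, the bedroom view receives the highest confidence, and the resulting confidence vector would feed the room-feature determination.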