US 11,965,744 B2
Systems and methods for indoor positioning
Xiaoqiang Teng, Beijing (CN); Rongzhi Wang, Beijing (CN); Jiankuan Li, Beijing (CN); Zhiwei Ruan, Beijing (CN); Zongyue Liu, Beijing (CN); Jun Zhang, Beijing (CN); Pengfei Xu, Beijing (CN); Shenggang Bao, Beijing (CN); and Chao Liu, Beijing (CN)
Assigned to BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD., Beijing (CN)
Filed by BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD., Beijing (CN)
Filed on Nov. 10, 2020, as Appl. No. 17/093,753.
Application 17/093,753 is a continuation of application No. PCT/CN2019/089628, filed on May 31, 2019.
Claims priority of application No. 201810554631.6 (CN), filed on Jun. 1, 2018; and application No. 201810579130.3 (CN), filed on Jun. 7, 2018.
Prior Publication US 2021/0055109 A1, Feb. 25, 2021
Int. Cl. G01C 21/12 (2006.01); G01C 21/16 (2006.01); G01C 21/20 (2006.01); G01C 21/36 (2006.01); G06K 9/46 (2006.01)
CPC G01C 21/206 (2013.01) [G01C 21/12 (2013.01); G01C 21/1656 (2020.08); G01C 21/3602 (2013.01); G01C 21/3632 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for indoor positioning, implemented on a computing device having an interface, at least one storage device storing a set of instructions, and at least one processor in communication with the at least one storage device, the method comprising:
determining, by a vision positioning unit or a satellite positioning unit, a first location of a subject at a first time point;
determining, by one or more sensors including an inertial measurement unit (IMU) sensor, one or more motion parameters associated with the subject, wherein the one or more motion parameters associated with the subject at least include a heading direction;
determining, by the at least one processor, based on the first location of the subject and the one or more motion parameters associated with the subject, a second location of the subject at a second time point after the first time point;
obtaining, by a camera, surroundings information associated with the subject at the second time point, wherein the surroundings information at least includes visual information;
determining, by the at least one processor, based on the one or more motion parameters associated with the subject and pose information associated with the surroundings information at the second time point, a confidence level relating to the surroundings information associated with the subject, wherein the pose information at least reflects a real-time position and orientation of the subject that carries the camera;
determining, by the at least one processor, a target location of the subject by adjusting, at least based on the confidence level relating to the surroundings information associated with the subject and a confidence level threshold, the second location of the subject;
determining, by the at least one processor, a path from the target location of the subject to a destination of the subject;
determining, by the at least one processor, at least one arrow indicating a direction of the path from the target location to the destination; and
displaying a combination of the at least one arrow and the path from the target location to the destination on an augmented reality (AR) based real-time navigation interface.
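The positioning loop recited in claim 1 — dead reckoning from a first fix using IMU motion parameters, then adjusting the dead-reckoned estimate based on a vision-derived confidence level, then orienting a navigation arrow toward the path — can be sketched as follows. This is a minimal illustrative sketch, not the patented method: the function names (`dead_reckon`, `fuse`, `arrow_heading`), the confidence-weighted averaging, and the 0.5 threshold are all assumptions of this example; the patent does not specify how the confidence level is used to adjust the second location.

```python
import math
from dataclasses import dataclass


@dataclass
class Location:
    x: float  # metres east of a local indoor origin (assumed frame)
    y: float  # metres north of that origin


def dead_reckon(first: Location, heading_rad: float,
                speed: float, dt: float) -> Location:
    """Propagate the first location to a second time point using
    IMU-derived motion parameters (heading and speed)."""
    return Location(first.x + speed * dt * math.sin(heading_rad),
                    first.y + speed * dt * math.cos(heading_rad))


def fuse(second: Location, visual_fix: Location,
         confidence: float, threshold: float = 0.5) -> Location:
    """Adjust the dead-reckoned second location toward a vision-derived
    fix only when the confidence level clears the threshold; otherwise
    keep the IMU estimate. The linear blend is an illustrative choice."""
    if confidence < threshold:
        return second
    w = confidence
    return Location((1 - w) * second.x + w * visual_fix.x,
                    (1 - w) * second.y + w * visual_fix.y)


def arrow_heading(target: Location, waypoint: Location) -> float:
    """Compass bearing (radians, clockwise from north) for an AR arrow
    pointing from the target location toward the next path waypoint."""
    return math.atan2(waypoint.x - target.x, waypoint.y - target.y)
```

A usage example under these assumptions: starting at the origin and walking 1 m/s due north for 2 s dead-reckons to (0, 2); a vision fix at (0, 2.5) with confidence 0.8 pulls the target location to (0, 2.4), while the same fix with confidence 0.3 would be rejected and the IMU estimate kept.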