US 11,900,536 B2
Visual-inertial positional awareness for autonomous and non-autonomous tracking
Zhe Zhang, Sunnyvale, CA (US); Grace Tsai, Campbell, CA (US); and Shaoshan Liu, Fremont, CA (US)
Assigned to Trifo, Inc., Santa Clara, CA (US)
Filed by Trifo, Inc., Santa Clara, CA (US)
Filed on May 6, 2022, as Appl. No. 17/739,070.
Application 17/739,070 is a continuation of application No. 17/008,299, filed on Aug. 31, 2020, granted, now 11,328,158.
Application 17/008,299 is a continuation of application No. 16/550,143, filed on Aug. 23, 2019, granted, now 10,769,440, issued on Sep. 8, 2020.
Application 16/550,143 is a continuation of application No. 15/942,348, filed on Mar. 30, 2018, granted, now 10,395,117, issued on Aug. 27, 2019.
Application 15/942,348 is a continuation-in-part of application No. 15/250,393, filed on Aug. 29, 2016, granted, now 10,043,076, issued on Aug. 7, 2018.
Prior Publication US 2022/0262115 A1, Aug. 18, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 7/73 (2017.01); G06T 17/05 (2011.01); G06V 20/20 (2022.01); G06V 20/58 (2022.01); G06F 18/20 (2023.01)
CPC G06T 17/05 (2013.01) [G06F 18/29 (2023.01); G06T 7/74 (2017.01); G06V 20/20 (2022.01); G06V 20/58 (2022.01); G06T 2207/10016 (2013.01); G06T 2207/10028 (2013.01); G06T 2207/30244 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method for building 3D maps from a surrounding scenery, including:
receiving from a first source, first visual information of the surrounding scenery and a position where the first visual information was captured;
classifying at least one of one or more objects from the first visual information of the surrounding scenery into a set of moving objects and a set of non-moving objects;
determining a sparse 3D mapping of object feature points taken from the first visual information of the surrounding scenery from the set of non-moving objects; and
building a first 3D map of object feature points from the sparse 3D mapping of object feature points.
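The claimed steps (classify tracked objects into moving and non-moving sets, then build a sparse 3D map from feature points of the non-moving set only) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function names, the drift-based motion test, and the `MOTION_THRESHOLD` tolerance are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the claimed map-building steps. In a real
# visual-inertial pipeline, object tracks would come from the first
# visual information together with the position where it was captured.

MOTION_THRESHOLD = 0.05  # metres; assumed tolerance for "non-moving"


def classify_objects(tracks):
    """Split object tracks into moving and non-moving sets.

    `tracks` maps an object id to a list of (x, y, z) world positions
    observed over time. An object whose position drifts beyond the
    threshold is treated as moving.
    """
    moving, static = set(), set()
    for obj_id, positions in tracks.items():
        first = positions[0]
        drift = max(
            sum((a - b) ** 2 for a, b in zip(p, first)) ** 0.5
            for p in positions
        )
        (moving if drift > MOTION_THRESHOLD else static).add(obj_id)
    return moving, static


def build_sparse_map(tracks, feature_points):
    """Build a sparse 3D map keeping only feature points that
    belong to non-moving objects."""
    _, static = classify_objects(tracks)
    return {
        obj_id: pts
        for obj_id, pts in feature_points.items()
        if obj_id in static
    }
```

For example, a wall observed at essentially the same position across frames would be kept in the sparse map, while a car that translates between frames would be classified as moving and excluded.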