US 11,748,990 B2
Object ingestion and recognition systems and methods
Kamil Wnuk, Playa del Rey, CA (US); David McKinnon, San Francisco, CA (US); Jeremi Sudol, New York, NY (US); Bing Song, La Canada, CA (US); and Matheen Siddiqui, Culver City, CA (US)
Assigned to Nant Holdings IP, LLC, Culver City, CA (US)
Filed by Nant Holdings IP, LLC, Culver City, CA (US)
Filed on Jun. 1, 2022, as Appl. No. 17/830,252.
Application 17/830,252 is a continuation of application No. 17/040,000, filed on Sep. 30, 2020, granted, now 11,380,080.
Application 17/040,000 is a continuation of application No. 16/123,764, filed on Sep. 6, 2018, granted, now 10,832,075, issued on Nov. 10, 2020.
Application 16/123,764 is a continuation of application No. 15/297,053, filed on Oct. 18, 2016, granted, now 10,095,945, issued on Oct. 9, 2018.
Application 15/297,053 is a continuation of application No. 14/623,435, filed on Feb. 16, 2015, granted, now 9,501,498, issued on Nov. 22, 2016.
Claims priority of provisional application 61/940,320, filed on Feb. 14, 2014.
Prior Publication US 2022/0292804 A1, Sep. 15, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/40 (2022.01); G06T 7/13 (2017.01); G06V 20/64 (2022.01); G06T 7/60 (2017.01); G06F 16/532 (2019.01); G06F 16/583 (2019.01); G06F 16/58 (2019.01)
CPC G06V 20/46 (2022.01) [G06F 16/532 (2019.01); G06F 16/5838 (2019.01); G06F 16/5854 (2019.01); G06F 16/5866 (2019.01); G06T 7/13 (2017.01); G06T 7/60 (2013.01); G06V 20/64 (2022.01); G06T 2207/20061 (2013.01); G06T 2207/20116 (2013.01)] 24 Claims
OG exemplary drawing
 
1. An object recognition and ingestion system, comprising:
at least one non-transitory computer readable memory storing executable object recognition and ingestion software instructions; and
at least one processor coupled with the at least one non-transitory computer readable memory that, upon execution of the object recognition and ingestion software instructions, performs operations to:
obtain a digital representation of a scene, wherein the digital representation is obtained from at least one sensor and further includes image data of at least one three-dimensional object and location information;
obtain a result set of shape objects from a set of one or more candidate shape objects, wherein the result set includes at least one shape object from the set of one or more candidate shape objects that has at least one shape attribute satisfying selection criteria determined from geometrical information of the at least one three-dimensional object derived from the image data of the at least one three-dimensional object;
select at least one target shape object from the result set of shape objects based on a context and at least one point-of-view associated with the at least one three-dimensional object;
instantiate at least one three-dimensional object model of the at least one three-dimensional object from the at least one target shape object and the image data; and
store, in an object recognition database, a bundle of recognition parameters derived from the object model and location information, wherein the recognition parameters enable a computing device to recognize the at least one three-dimensional object.
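The following is a minimal, hypothetical Python sketch of the claimed ingestion flow, included only to make the sequence of operations concrete: derive selection criteria from the image data, query candidate shape objects against those criteria, select a target shape based on context and point of view, instantiate an object model from the target shape and image data, and store a bundle of recognition parameters with location information. All names used here (DigitalRepresentation, ShapeObject, query_shapes, ingest, the dict-backed database, and so on) are assumptions introduced for illustration; they do not appear in the patent, which is defined by its claims and specification.

```python
"""Hypothetical sketch of the claimed ingestion flow; all names are illustrative only."""
from dataclasses import dataclass


@dataclass
class DigitalRepresentation:
    """Sensor-derived representation of a scene: image data plus location information."""
    image_data: bytes
    location: tuple  # e.g., (latitude, longitude)


@dataclass
class ShapeObject:
    """Candidate shape object with queryable shape attributes."""
    name: str
    attributes: dict


@dataclass
class ObjectModel:
    """Three-dimensional object model instantiated from a target shape and image data."""
    shape: ShapeObject
    texture: bytes


def derive_selection_criteria(image_data: bytes) -> dict:
    # Placeholder for geometric analysis (edges, silhouette, dimensions) of the image data.
    return {"aspect_ratio": 0.5}


def query_shapes(candidates, criteria):
    # Result set: candidate shape objects whose attributes satisfy the selection criteria.
    return [s for s in candidates
            if all(s.attributes.get(k) == v for k, v in criteria.items())]


def select_target(result_set, context, point_of_view):
    # Rank by context and point of view; this stub simply prefers the first match.
    return result_set[0] if result_set else None


def derive_recognition_parameters(model: ObjectModel, location) -> dict:
    # Bundle of parameters that would let a computing device recognize the object later.
    return {"shape": model.shape.name,
            "descriptor": hash(model.texture),
            "location": location}


def ingest(representation, candidates, context, point_of_view, database):
    criteria = derive_selection_criteria(representation.image_data)
    result_set = query_shapes(candidates, criteria)
    target = select_target(result_set, context, point_of_view)
    if target is None:
        return None
    model = ObjectModel(shape=target, texture=representation.image_data)
    params = derive_recognition_parameters(model, representation.location)
    database[target.name] = params  # stand-in for the object recognition database
    return params


if __name__ == "__main__":
    shapes = [ShapeObject("cylinder", {"aspect_ratio": 0.5}),
              ShapeObject("cube", {"aspect_ratio": 1.0})]
    rep = DigitalRepresentation(image_data=b"\x00pixels", location=(34.0, -118.4))
    db = {}
    print(ingest(rep, shapes, context="retail shelf", point_of_view="frontal", database=db))
```

Running the sketch stores and prints a parameter bundle for the matching "cylinder" shape; a production system would replace the in-memory dict with a persistent object recognition database and the placeholder criteria with actual geometric analysis of the sensor data.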