US 12,230,029 B2
Wearable multimedia device and cloud computing platform with laser projection system
Imran A. Chaudhri, San Francisco, CA (US); Patrick Gates, San Francisco, CA (US); Monique Relova, South San Francisco, CA (US); Bethany Bongiorno, San Francisco, CA (US); Brian Huppi, San Francisco, CA (US); and Shahzad Chaudhri, Arlington, VA (US)
Assigned to Humane, Inc., San Francisco, CA (US)
Filed by Humane, Inc., San Francisco, CA (US)
Filed on Jun. 17, 2020, as Appl. No. 16/904,544.
Application 16/904,544 is a continuation-in-part of application No. 15/976,632, filed on May 10, 2018, granted, now 10,924,651.
Claims priority of provisional application 62/863,222, filed on Jun. 18, 2019.
Claims priority of provisional application 62/504,488, filed on May 10, 2017.
Prior Publication US 2021/0117680 A1, Apr. 22, 2021
Int. Cl. G06V 20/20 (2022.01); G06F 3/01 (2006.01); H04N 7/18 (2006.01)
CPC G06V 20/20 (2022.01) [G06F 3/017 (2013.01); H04N 7/18 (2013.01)] 17 Claims
OG exemplary drawing
 
1. A body-worn apparatus comprising:
a camera;
a depth sensor;
a laser projection system;
one or more processors;
memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
capturing, using the camera, a first set of digital images;
identifying a real-world object in the first set of digital images;
capturing, using the depth sensor, first depth data;
identifying, in the first set of digital images and the first depth data, a first gesture of a user wearing the apparatus,
wherein identifying the real-world object and the first gesture includes:
processing the first set of digital images through an object detection framework that uses a complex polygon to identify a hotspot region in the first set of digital images, wherein the hotspot region is smaller than the entire image and captures the first gesture and the real-world object while excluding all other objects in the first set of digital images;
sending, to a cloud computing platform, the hotspot region;
receiving, from the cloud computing platform, information related to the real-world object; and
projecting, with the laser projection system, at least some of the information on a surface.
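
The following is a minimal, illustrative sketch of the hotspot-region step recited in claim 1, assuming an OpenCV/NumPy pipeline and a hypothetical cloud endpoint; it is not the patented implementation, and the function names, endpoint URL, and polygon coordinates are invented for illustration. It shows how a complex (non-rectangular) polygon could mask a captured frame so that only the region containing the gesture and the real-world object is cropped and uploaded, while all other image content is excluded. Capturing frames from the device camera and projecting the returned information with the laser projection system are hardware-specific and are omitted.

    # Illustrative sketch only -- not the patented implementation.
    import cv2
    import numpy as np
    import requests

    CLOUD_ENDPOINT = "https://example.com/identify"  # hypothetical endpoint


    def extract_hotspot(frame: np.ndarray, polygon: np.ndarray) -> np.ndarray:
        """Mask the frame with a complex polygon and crop to its bounding box.

        `polygon` is an (N, 2) array of pixel coordinates enclosing the user's
        gesture and the real-world object; everything outside it is zeroed out,
        so the returned region is smaller than the full image.
        """
        pts = polygon.astype(np.int32)
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [pts], 255)
        masked = cv2.bitwise_and(frame, frame, mask=mask)
        x, y, w, h = cv2.boundingRect(pts)
        return masked[y:y + h, x:x + w]


    def identify_via_cloud(hotspot: np.ndarray) -> dict:
        """Send only the hotspot region to the cloud and return object info."""
        ok, buf = cv2.imencode(".jpg", hotspot)
        if not ok:
            raise RuntimeError("failed to encode hotspot region")
        resp = requests.post(CLOUD_ENDPOINT, files={"image": buf.tobytes()})
        resp.raise_for_status()
        return resp.json()


    if __name__ == "__main__":
        # Hypothetical example: a synthetic frame and a five-vertex polygon
        # standing in for the region around the gesture and the object.
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        polygon = np.array([[100, 100], [300, 80], [360, 240],
                            [220, 320], [90, 260]])
        hotspot = extract_hotspot(frame, polygon)
        print("hotspot shape:", hotspot.shape)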