US 12,073,577 B2
Computing a point cloud from stitched images
Simon Saito Haagen Nielsen, Beverly Hills, CA (US); John Christopher Collins, Mico, TX (US); Allan Joseph Evans, Los Angeles, CA (US); Graham Shaw, Redondo Beach, CA (US); and Vikas Gupta, San Francisco, CA (US)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on May 12, 2022, as Appl. No. 17/743,268.
Application 17/743,268 is a continuation of application No. 16/131,961, filed on Sep. 14, 2018, granted, now Pat. No. 11,348,265.
Claims priority of provisional application 62/559,213, filed on Sep. 15, 2017.
Prior Publication US 2022/0270277 A1, Aug. 25, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06K 9/00 (2022.01); B64C 39/02 (2023.01); G06T 3/4038 (2024.01); G06T 7/55 (2017.01); H04N 23/698 (2023.01); B64U 101/30 (2023.01)
CPC G06T 7/55 (2017.01) [B64C 39/024 (2013.01); G06T 3/4038 (2013.01); H04N 23/698 (2023.01); B64U 2101/30 (2023.01); B64U 2201/104 (2023.01); G06T 2207/10028 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A method performed by an apparatus of a drone, the method comprising:
simultaneously capturing a plurality of first images by a first camera sensor, capturing a plurality of second images by a second camera sensor, and capturing depth information by a third camera sensor while the drone spins around an axis of the drone, wherein the first camera sensor and the second camera sensor each have a field of view of more than 180 degrees, wherein the plurality of first images has a plurality of overlapping regions with the plurality of second images, wherein the first camera sensor is mounted on the drone a predetermined distance from the second camera sensor, and wherein the third camera sensor is mounted on the drone;
processing, by one or more processors, the plurality of first images and the plurality of second images to create a 360-degree image by stitching the plurality of first images together with the plurality of second images according to a corresponding overlapping region of the plurality of overlapping regions; and
processing, by one or more processors, the plurality of overlapping regions and the captured depth information to generate a 360-degree depth map for the 360-degree image, wherein the plurality of overlapping regions cover a 360-degree range of the 360-degree image.
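The two processing steps the claim recites, stitching images across their overlapping regions and deriving depth from sensors a known baseline apart, can be illustrated with a minimal sketch. This is not the patented implementation: the linear cross-fade blend, the pinhole stereo depth formula, and all function names and parameters here are illustrative assumptions, not taken from the specification.

```python
import numpy as np

def stitch_pair(img_a, img_b, overlap):
    """Stitch two single-channel image strips that share `overlap`
    columns, cross-fading linearly inside the shared region (a stand-in
    for the claim's stitching 'according to a corresponding overlapping
    region')."""
    h, wa = img_a.shape
    wb = img_b.shape[1]
    out = np.zeros((h, wa + wb - overlap), dtype=np.float64)
    out[:, :wa - overlap] = img_a[:, :wa - overlap]   # exclusive part of A
    out[:, wa:] = img_b[:, overlap:]                  # exclusive part of B
    # blend the overlapping columns: weight slides from A to B
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, wa - overlap:wa] = (alpha * img_a[:, wa - overlap:]
                               + (1.0 - alpha) * img_b[:, :overlap])
    return out

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo relation depth = f * B / d, usable where two
    sensors mounted a predetermined distance (baseline) apart observe
    the same overlapping region."""
    d = np.maximum(disparity_px, 1e-6)  # guard against divide-by-zero
    return focal_px * baseline_m / d

# Example: two 4x6 strips overlapping by 2 columns -> 4x10 panorama
pano = stitch_pair(np.ones((4, 6)), np.full((4, 6), 3.0), overlap=2)

# Example: 50 px disparity, 1000 px focal length, 0.1 m baseline -> 2 m
depth = depth_from_disparity(np.array([50.0]), focal_px=1000.0, baseline_m=0.1)
```

In the claimed method this pairwise blend would be applied around the full spin so the overlapping regions cover the entire 360-degree range, and the stereo-derived depth would be fused with the third sensor's captured depth information to form the 360-degree depth map.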