US 11,671,572 B2
Input parameter based image waves
Sagi Katz, Yokneam Ilit (IL); Matan Zohar, Haifa (IL); and Ilya Levin, Haifa (IL)
Assigned to Snap Inc., Santa Monica, CA (US)
Filed by Snap Inc., Santa Monica, CA (US)
Filed on Nov. 15, 2021, as Appl. No. 17/526,136.
Application 17/526,136 is a continuation of application No. 16/658,370, filed on Oct. 21, 2019, granted, now 11,178,375.
Prior Publication US 2022/0078391 A1, Mar. 10, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. H04N 13/111 (2018.01); G02B 27/01 (2006.01); H04N 13/271 (2018.01); G06T 3/00 (2006.01); G06T 7/521 (2017.01)
CPC H04N 13/111 (2018.05) [G02B 27/0176 (2013.01); G06T 3/0056 (2013.01); G06T 3/0093 (2013.01); G06T 7/521 (2017.01); H04N 13/271 (2018.05); G02B 2027/0178 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A virtual input parameter-based wave creation system comprising:
an eyewear device including a depth-capturing camera;
an image display for presenting an initial video including initial images;
an image display driver coupled to the image display to control the image display to present the initial video;
a processor coupled to the depth-capturing camera, the processor configured to:
present, via the image display, the initial video;
receive selection of input parameters to apply waves to the presented initial video;
generate, via the depth-capturing camera, a sequence of initial depth images from respective initial images in the initial video, wherein:
each of the initial depth images is associated with a time coordinate on a time (T) axis for a presentation time based on the respective initial images in the initial video;
each of the initial depth images is formed of a matrix of vertices, each vertex representing a sampled 3D location in a respective three-dimensional scene;
each vertex has a position attribute; and
the position attribute of each vertex is based on a three-dimensional location coordinate system and includes an X location coordinate on an X axis for horizontal position, a Y location coordinate on a Y axis for vertical position, and a Z location coordinate on a Z axis for a depth position;
generate, based on the associated time coordinate of each of the initial depth images and the input parameters, for each of the initial depth images, a respective warped wave image by applying a transformation function that is responsive to the input parameters to vertices of the respective initial depth image based on at least the Y and Z location coordinates, the associated time coordinate, and a parameter of the input parameters;
create a warped wave video including the sequence of the generated warped wave images; and
present, via the image display, the warped wave video.
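
A minimal sketch of the per-vertex warping recited in claim 1 is given below. The claim does not recite a specific transformation function, parameter names, or wave model; a sinusoidal Z-axis displacement driven by each vertex's Y coordinate, the depth image's time coordinate, and illustrative input parameters (amplitude, wavelength, speed) is assumed here purely for illustration.

```python
# Hypothetical sketch of the wave transformation described in claim 1.
# The actual transformation function and parameter names are not recited
# in the claim; a sinusoidal Z-displacement is assumed for illustration.
import numpy as np

def warp_depth_image(vertices, t, amplitude=0.05, wavelength=0.5, speed=1.0):
    """Apply an assumed sinusoidal wave to one initial depth image.

    vertices : (H, W, 3) array of sampled 3D locations (X, Y, Z), one per vertex.
    t        : presentation-time coordinate of this depth image on the T axis.
    amplitude, wavelength, speed : stand-ins for the selected input parameters.
    """
    warped = vertices.copy()
    y = vertices[..., 1]
    z = vertices[..., 2]
    # Displace depth as a function of Y, Z, the associated time coordinate, and
    # a parameter, mirroring the claim's "at least the Y and Z location
    # coordinates, the associated time coordinate, and a parameter of the
    # input parameters".
    phase = 2.0 * np.pi * (y / wavelength - speed * t)
    warped[..., 2] = z + amplitude * np.sin(phase)
    return warped

def warp_video(depth_images, times, **params):
    """Build the warped wave video as the sequence of warped depth images."""
    return [warp_depth_image(v, t, **params) for v, t in zip(depth_images, times)]
```

In this sketch, each depth image in the sequence is warped independently using its own time coordinate, and the resulting warped wave images are collected in order to form the warped wave video, matching the sequence structure recited in the claim.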