CPC G06T 17/205 (2013.01) [G06T 7/11 (2017.01); G06T 7/50 (2017.01); G06T 7/70 (2017.01); G06T 11/60 (2013.01); G06V 20/70 (2022.01); G06T 2200/08 (2013.01); G06T 2200/24 (2013.01); G06T 2207/20021 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20228 (2013.01)]

20 Claims

1. A method comprising:
generating, by at least one processor utilizing one or more neural networks, a three-dimensional mesh by determining displacement of vertices of a tessellation of a two-dimensional image based on pixel depth values of the two-dimensional image;
segmenting, by the at least one processor utilizing the one or more neural networks, the three-dimensional mesh into a plurality of three-dimensional object meshes corresponding to a plurality of distinct objects of the two-dimensional image;
modifying, by the at least one processor in response to a displacement input to the two-dimensional image within a graphical user interface displaying the two-dimensional image, a selected three-dimensional object mesh of the plurality of three-dimensional object meshes based on a displaced portion of the selected three-dimensional object mesh corresponding to the displacement input to the two-dimensional image; and
generating, by the at least one processor, a modified two-dimensional image comprising at least one modified portion according to the displaced portion of the selected three-dimensional object mesh.
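The first recited step, lifting a two-dimensional image into a three-dimensional mesh by displacing the vertices of a tessellation according to per-pixel depth, can be illustrated with a minimal NumPy sketch. This is not the claimed implementation: the regular triangle grid, the function names (tessellate_image, displace_by_depth), the depth_scale parameter, and the random depth map standing in for a neural depth estimate are all assumptions made for illustration only.

```python
import numpy as np

def tessellate_image(height, width, step=8):
    """Build a regular triangle tessellation over the image plane.

    Vertices sit on a (step x step) pixel grid; each grid cell is split
    into two triangles. Returns (V, 2) vertex pixel coords and (F, 3) faces.
    """
    ys = np.arange(0, height, step)
    xs = np.arange(0, width, step)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    verts2d = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32)

    rows, cols = len(ys), len(xs)
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i0 = r * cols + c        # top-left vertex of the cell
            i1 = i0 + 1              # top-right
            i2 = i0 + cols           # bottom-left
            i3 = i2 + 1              # bottom-right
            faces.append((i0, i2, i1))
            faces.append((i1, i2, i3))
    return verts2d, np.asarray(faces, dtype=np.int64)

def displace_by_depth(verts2d, depth_map, depth_scale=1.0):
    """Displace tessellation vertices into 3-D using per-pixel depth.

    Each vertex keeps its (x, y) image position and receives a z value
    sampled from the depth map at that pixel, scaled by depth_scale.
    """
    xs = verts2d[:, 0].astype(int)
    ys = verts2d[:, 1].astype(int)
    z = depth_map[ys, xs] * depth_scale
    return np.concatenate([verts2d, z[:, None]], axis=1)

if __name__ == "__main__":
    # Random values stand in for a neural depth estimate of a 64x64 image.
    depth = np.random.rand(64, 64).astype(np.float32)
    v2d, faces = tessellate_image(64, 64, step=8)
    v3d = displace_by_depth(v2d, depth, depth_scale=10.0)
    print(v3d.shape, faces.shape)  # (64, 3) vertices, (98, 3) faces
```

In such a sketch, the later recited steps would operate on these outputs: the vertices could be partitioned into per-object meshes using a 2-D segmentation mask, a user displacement applied to one object mesh, and the displaced mesh re-rendered to produce the modified two-dimensional image.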