US 12,229,897 B2
Intelligent dynamic rendering of augmented reality environment
Shailendra Singh, Thane West (IN)
Assigned to Bank of America Corporation, Charlotte, NC (US)
Filed by Bank of America Corporation, Charlotte, NC (US)
Filed on Mar. 10, 2022, as Appl. No. 17/691,203.
Prior Publication US 2023/0290076 A1, Sep. 14, 2023
Int. Cl. G06T 19/00 (2011.01); G06V 10/764 (2022.01); G06V 20/20 (2022.01); H04W 4/80 (2018.01)
CPC G06T 19/006 (2013.01) [G06V 10/764 (2022.01); G06V 20/20 (2022.01); H04W 4/80 (2018.02)] 13 Claims
OG exemplary drawing
 
1. A method for integrating an augmented reality overlay with a view of a physical environment using deep learning, the method comprising an augmented reality device:
receiving a beacon signal transmitting a text-based offer message;
filtering the beacon signal based on a user interest;
capturing a view of a physical environment;
using one or more deep learning algorithms, identifying one or more objects in the physical environment; and
rendering an augmented reality overlay comprising graphics and/or text, the overlay content based on the beacon signal and the user interest, and the overlay position based on the identified objects;
wherein the filtering comprises:
using natural language processing, extracting content from the beacon message;
using machine learning:
classifying the beacon message based on the content of the message; and
labeling the beacon message based on the user interest;
pairing the beacon and the augmented reality device; and
inputting the labeled message to a rendering engine; and
wherein the method further comprises the augmented reality device:
capturing a series of image frames from a single user vantage point at a series of timestamps, the image frames comprising a first image frame at a first timestamp and a second image frame at a second timestamp;
generating a baseline spatial frame from the first image frame;
at an event monitoring engine, detecting a shift between the first image frame captured from the single user vantage point at the first timestamp and the second image frame captured from the single user vantage point at the second timestamp, the detecting comprising:
triggering deep learning algorithms comprising a generative adversarial network (GAN) and a recurrent neural network (RNN) based at least in part on a difference between the first image frame captured from the single user vantage point at the first timestamp and the second image frame captured from the single user vantage point at the second timestamp; and
at the GAN and the RNN, extracting an image feature comprising an object in the first image frame captured from the single user vantage point at the first timestamp and an image feature comprising the object in the second image frame captured from the single user vantage point at the second timestamp; and
based on output from the GAN and the RNN:
rendering a new object in the baseline spatial frame; and
modifying the overlay position based on the new object.
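The event-monitoring steps above describe triggering deep learning (the claimed GAN/RNN pair) only when a difference between two image frames indicates a shift, then repositioning the overlay from the extracted feature. A minimal sketch of that trigger logic follows, assuming grayscale frames represented as nested lists; the GAN/RNN feature extraction is replaced by a trivial stand-in, and the function names and threshold value are hypothetical, not from the patent.

```python
# Sketch of the claimed shift-detection trigger: the expensive model pass
# (the GAN/RNN in the claim) runs only when the frame-to-frame difference
# exceeds a threshold. Frames are grayscale 2D lists of pixel intensities.

def frame_difference(frame_a, frame_b):
    """Mean absolute per-pixel difference between two same-size frames."""
    total = sum(abs(a - b)
                for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def extract_object_position(frame):
    """Stand-in for the GAN/RNN feature extraction: treat the brightest
    pixel as the object of interest and return its (row, col)."""
    _, row, col = max((value, r, c)
                      for r, frame_row in enumerate(frame)
                      for c, value in enumerate(frame_row))
    return (row, col)

def detect_shift(frame_a, frame_b, threshold=10.0):
    """Return the new overlay anchor position if a shift is detected
    between the two timestamps, else None (baseline overlay unchanged)."""
    if frame_difference(frame_a, frame_b) < threshold:
        return None
    return extract_object_position(frame_b)
```

The returned position would then drive the final claimed steps: rendering the new object in the baseline spatial frame and moving the overlay accordingly.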