| CPC G06T 19/006 (2013.01) [G06Q 30/0204 (2013.01); G06Q 30/0269 (2013.01); G06T 19/003 (2013.01); H04W 4/38 (2018.02)] | 23 Claims |

1. A method, comprising:
recording a first lux value in response to determining that application of a camera-assisted virtual experience method provides unstable results when said first lux value is sensed by a light sensor;
assessing the camera-assisted virtual experience method by determining whether a light sensor value received from the light sensor matches any previously recorded lux value of a set of previously recorded lux values, which includes the first lux value,
if the light sensor value does not match any previously recorded lux value of the set of previously recorded lux values, generating, by a computing system applying the camera-assisted virtual experience method, a virtual experience on a device, wherein the camera-assisted virtual experience method detects feature points to track in camera frames, and
if the light sensor value matches any previously recorded lux value of the set of previously recorded lux values, refraining from applying the camera-assisted virtual experience method to generate the virtual experience and generating, by the computing system applying a motion sensor-assisted virtual experience method, the virtual experience on the device,
wherein the generating of the virtual experience is based on predetermined settings and wherein the virtual experience comprises digital elements,
wherein the generating of the virtual experience comprises generating a real world image and virtual objects, and
wherein the digital elements comprise the real world image and the virtual objects;
outputting, by the computing system, the real world image for display on a display screen;
outputting, by the computing system, the virtual objects to be displayed on the display screen, wherein the virtual objects overlap the real world image to provide the virtual experience;
determining, by the computing system, an initial location of a user device to correctly place the virtual experience based on information sensed by a plurality of sensors that are in a real world location that is proximate to a location recorded in the real world image;
determining, by the computing system, movement metrics of said user device, based on information sensed by the plurality of sensors, wherein the movement metrics comprise at least one of: accelerometer readings, gyroscope readings, GPS coordinates, location and positional change derived from readings from IoT devices or stationary wireless devices; and
continuing, by the computing system, to output the movement of the digital elements and of said user device in the virtual experience in accordance with the movement metrics.
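The selection logic recited in the claim can be sketched as follows. This is a hypothetical illustration only, not the patented implementation: lux values at which the camera-assisted method gave unstable results are recorded, and each new light-sensor reading is checked against that set to choose between camera-assisted (feature-point) tracking and motion sensor-assisted tracking. The class name, method names, and the `LUX_TOLERANCE` threshold for what counts as a "match" are all assumptions introduced for illustration.

```python
# Assumed tolerance for deciding that a sensed lux value "matches"
# a previously recorded lux value (the claim does not specify one).
LUX_TOLERANCE = 5.0

class ExperienceSelector:
    """Chooses a virtual-experience method from the sensed light level."""

    def __init__(self):
        # Set of previously recorded lux values at which the
        # camera-assisted method was determined to be unstable.
        self.unstable_lux_values = set()

    def record_unstable(self, lux):
        """Record a lux value at which camera tracking gave unstable results."""
        self.unstable_lux_values.add(lux)

    def matches_recorded(self, lux):
        """Return True if the sensed lux matches any recorded lux value."""
        return any(abs(lux - v) <= LUX_TOLERANCE
                   for v in self.unstable_lux_values)

    def choose_method(self, lux):
        """Select the tracking method for the current light-sensor reading."""
        if self.matches_recorded(lux):
            # Refrain from the camera-assisted method; fall back to
            # motion sensors (accelerometer, gyroscope, GPS, etc.).
            return "motion-sensor-assisted"
        # Otherwise detect feature points to track in camera frames.
        return "camera-assisted"

selector = ExperienceSelector()
selector.record_unstable(12.0)        # first lux value: camera tracking unstable
print(selector.choose_method(11.0))   # within tolerance of a recorded value
print(selector.choose_method(250.0))  # no match: camera-assisted tracking
```

Running the example prints `motion-sensor-assisted` for the reading near a recorded value and `camera-assisted` otherwise.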