US 12,321,532 B2
System and method for an end-device modulation based on a hybrid trigger
Matthew Ryan Gilg, Billings, MT (US); and Timothy Chia-Chieh Sun, Atherton, CA (US)
Filed by Whirlwind VR, Inc., Burlingame, CA (US)
Filed on Nov. 19, 2020, as Appl. No. 16/953,069.
Application 16/953,069 is a continuation in part of application No. 16/705,846, filed on Dec. 6, 2019, granted, now 11,452,187.
Application 16/705,846 is a continuation in part of application No. 16/522,245, filed on Jul. 25, 2019, granted, now 11,294,468.
Application 16/522,245 is a continuation in part of application No. 16/387,236, filed on Apr. 17, 2019, granted, now 11,023,048.
Application 16/387,236 is a continuation in part of application No. 16/196,254, filed on Nov. 20, 2018, granted, now 10,768,704, issued on Sep. 8, 2020.
Prior Publication US 2021/0096656 A1, Apr. 1, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/02 (2006.01); A63F 13/285 (2014.01); G06F 3/01 (2006.01); G06F 13/10 (2006.01); G08B 7/06 (2006.01); A63F 13/98 (2014.01)
CPC G06F 3/0219 (2013.01) [A63F 13/285 (2014.09); G06F 3/016 (2013.01); G06F 13/102 (2013.01); G08B 7/06 (2013.01); A63F 13/98 (2014.09)] 25 Claims
OG exemplary drawing
 
1. A system for end-device modulation, said system comprising:
at least one end-device (E-D) in communication with at least a first device (D1) outputting audio/video (a/v) programming;
a processor;
a memory element coupled to the processor;
a program executable by the processor to:
provide an interface configured for end-user script input loaded onto a web-browser page, wherein the end-user script input is in any one of a basic programming language, including HTML, HTML5, or JavaScript (JS), for adjusting an aspect of an immersive light effect from the at least one E-D during real-time play of graphical content;
render the inputted script to an off-screen buffer and then visualize the inputted script as at least a two-dimensional effects digital canvas;
position a virtual representation of the E-D on the digital canvas displayed on a D1-coupled display representing a user's physical and virtual space;
apply a geo-positional transform and scaling of the virtual E-D within the digital canvas; and
control a light effect emitted from the at least one E-D corresponding to a captured region of the transformed/scaled digital canvas based on the scripted input and a combination of different triggers recognizing at least one of an a/v element or an a/v event from the a/v programming, wherein the different triggers are at least one of active-play end-user scripting, computer vision score, audio score, or Optical Character Recognition.
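The transform/capture pipeline recited in the claim (position and scale a virtual E-D on the digital canvas, then drive the light effect from the captured region) can be illustrated with a minimal, hedged sketch. The function names, the normalized device-position format, and the flat RGBA buffer standing in for an off-screen-canvas pixel capture are all illustrative assumptions, not elements of the patented implementation.

```javascript
// Hypothetical sketch of claim elements (d)-(e): map a virtual E-D's
// normalized position/scale to a capture rectangle on the digital
// canvas, then reduce that region's pixels to one light color.
// Buffer layout and helper names are assumptions for illustration.

// Geo-positional transform and scaling: the virtual E-D sits at a
// normalized center (x, y in [0,1]) with a scale given as a fraction
// of the canvas; returns the pixel rectangle to capture.
function captureRegion(canvasW, canvasH, device) {
  const w = Math.round(canvasW * device.scale);
  const h = Math.round(canvasH * device.scale);
  return {
    x: Math.round(canvasW * device.x - w / 2),
    y: Math.round(canvasH * device.y - h / 2),
    w,
    h,
  };
}

// Reduce a captured region to a single emitted light color by
// averaging its pixels. `regionPixels` is a flat [r,g,b,a,...] array,
// as an ImageData.data capture from an off-screen buffer would be.
function regionToLightColor(regionPixels) {
  let r = 0, g = 0, b = 0;
  const n = regionPixels.length / 4;
  for (let i = 0; i < regionPixels.length; i += 4) {
    r += regionPixels[i];
    g += regionPixels[i + 1];
    b += regionPixels[i + 2];
  }
  return {
    r: Math.round(r / n),
    g: Math.round(g / n),
    b: Math.round(b / n),
  };
}
```

In a browser-based embodiment such as the claim describes, the region would be read from an off-screen canvas rendering of the end-user script, and the resulting color forwarded to the E-D over whatever device link the system uses; both of those steps are outside this sketch.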