US 12,029,573 B2
System and method for associating music with brain-state data
Ariel Stephanie Garten, Toronto (CA); Christopher Allen Aimone, Toronto (CA); Trevor Coleman, Toronto (CA); Kapil Jay Mishra Vidyarthi, Toronto (CA); Locillo (Lou) Giuseppe Pino, Cambridge (CA); Michael Apollo Chabior, Oakville (CA); Paul Harrison Baranowski, Toronto (CA); Raul Rajiv Rupsingh, Brampton (CA); Madeline Ashby, Brampton (CA); Paul V. Tadich, Toronto (CA); Graeme Daniel Moffat, Toronto (CA); and Javier Arturo Moreno Camargo, Toronto (CA)
Assigned to INTERAXON INC., Toronto (CA)
Filed by INTERAXON INC., Toronto (CA)
Filed on Jun. 25, 2019, as Appl. No. 16/451,982.
Application 16/451,982 is a continuation-in-part of application No. 16/394,563, filed on Apr. 25, 2019, granted, now 11,786,163.
Application 16/394,563 is a continuation of application No. 14/693,480, filed on Apr. 22, 2015, granted, now 10,321,842, issued on Jun. 18, 2019.
Claims priority of provisional application 61/982,631, filed on Apr. 22, 2014.
Prior Publication US 2019/0387998 A1, Dec. 26, 2019
Int. Cl. A61B 5/38 (2021.01); A61B 5/00 (2006.01); A61B 5/16 (2006.01); A61B 5/316 (2021.01); A61B 5/375 (2021.01); A61M 21/00 (2006.01)
CPC A61B 5/38 (2021.01) [A61B 5/165 (2013.01); A61B 5/316 (2021.01); A61B 5/375 (2021.01); A61B 5/486 (2013.01); A61B 5/742 (2013.01); A61M 21/00 (2013.01); A61M 2021/0027 (2013.01); A61M 2205/3375 (2013.01); A61M 2205/3561 (2013.01); A61M 2205/3584 (2013.01); A61M 2205/502 (2013.01); A61M 2205/52 (2013.01); A61M 2230/06 (2013.01); A61M 2230/10 (2013.01); A61M 2230/42 (2013.01); A61M 2230/60 (2013.01); A61M 2230/65 (2013.01)] 11 Claims
OG exemplary drawing
 
1. A computer-implemented method for sharing a user experience, comprising:
receiving bio-signal data for a plurality of users from a plurality of bio-signal sensors;
receiving sound data representing sound experienced by each of the users;
determining a physiological state of each of the users based at least in part on the bio-signal data of that user;
determining an environmental state associated with each of the users based at least in part on the sound data;
using a rules engine, determining if a condition is met, based at least in part on the physiological state of each of the users and the environmental state associated with each user;
upon the condition being met, executing an associated action including generating a sensory signal; and
using transducers, outputting a sensory output to each of the users based on the sensory signal.
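The claimed method can be illustrated with a minimal sketch of the rules-engine step. This is not the patented implementation; the state labels, the `Rule` structure, and the `run_rules_engine` function are all hypothetical names chosen for illustration, assuming physiological and environmental states have already been derived from the bio-signal and sound data.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UserState:
    """Hypothetical per-user snapshot: a physiological state derived from
    bio-signal sensor data and an environmental state derived from sound data."""
    user_id: str
    physiological_state: str   # e.g. "relaxed"
    environmental_state: str   # e.g. "quiet"

@dataclass
class Rule:
    """A condition over all users' states, paired with the action to execute
    when the condition is met. The action returns a sensory signal."""
    condition: Callable[[List[UserState]], bool]
    action: Callable[[List[UserState]], str]

def run_rules_engine(states: List[UserState], rules: List[Rule]) -> List[str]:
    """Evaluate each rule against the users' states; when a condition is met,
    execute the associated action to generate a sensory signal."""
    signals = []
    for rule in rules:
        if rule.condition(states):
            signals.append(rule.action(states))
    return signals

# Example rule: if every user is relaxed in a quiet environment, generate a
# shared signal that transducers would render as sensory output to each user.
all_relaxed = Rule(
    condition=lambda s: all(u.physiological_state == "relaxed"
                            and u.environmental_state == "quiet" for u in s),
    action=lambda s: "play_shared_tone",
)

states = [UserState("u1", "relaxed", "quiet"),
          UserState("u2", "relaxed", "quiet")]
print(run_rules_engine(states, [all_relaxed]))  # ['play_shared_tone']
```

In this sketch the final transducer output step is abstracted to a string signal; a real system would route each generated signal to per-user audio or haptic hardware.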