US 11,755,172 B2
Systems and methods of generating consciousness affects using one or more non-biological inputs
James Mercs, Huntington Beach, CA (US)
Assigned to TWIIN, INC., San Luis Obispo, CA (US)
Appl. No. 16/334,747
Filed by TWIIN, INC., San Luis Obispo, CA (US)
PCT Filed Sep. 20, 2017, PCT No. PCT/US2017/052362
§ 371(c)(1), (2) Date Mar. 20, 2019,
PCT Pub. No. WO2018/057544, PCT Pub. Date Mar. 29, 2018.
Claims priority of provisional application 62/396,823, filed on Sep. 20, 2016.
Prior Publication US 2020/0004404 A1, Jan. 2, 2020
Int. Cl. G06F 3/04817 (2022.01); H04W 4/21 (2018.01); G06F 3/0482 (2013.01); H04L 51/046 (2022.01); A61B 5/16 (2006.01); A61B 5/00 (2006.01); H04L 51/52 (2022.01); H04L 67/75 (2022.01); H04L 67/50 (2022.01)
CPC G06F 3/04817 (2013.01) [A61B 5/165 (2013.01); A61B 5/7264 (2013.01); G06F 3/0482 (2013.01); H04L 51/046 (2013.01); H04L 51/52 (2022.05); H04L 67/535 (2022.05); H04L 67/75 (2022.05); H04W 4/21 (2018.02); G06F 2203/011 (2013.01)] 11 Claims
OG exemplary drawing
 
1. A method of generating a visual consciousness affect representation, said method comprising:
receiving, from memory of a client device and/or a server, one or more shares originating from one or more users and posted on a website and/or a client device application presented on one or more client devices, each of said shares containing one or more submissions;
receiving, from said memory of said client device and/or said server, a non-biological input not originating from one or more of said users, said non-biological input originating from a device or a module;
calculating, using a server and/or said client device and based on one or more of said shares and said non-biological input, a dominant category of one or more of said shares and said non-biological input, said calculating comprising:
identifying, in each of said submissions and said non-biological input, information relating to one or more consciousness input types;
extracting, from said information relating to one or more of said consciousness input types, information relating to one or more categories of each of said consciousness input types (“categories”) to generate a list identifying one or more extracted categories from each of said submissions and said non-biological input, and wherein each of said extracted categories is assigned a predetermined value;
assigning, based on an age of each of said submissions and said non-biological input, an aging index value to each of said extracted categories on said list;
determining, for each of said extracted categories from said list, a category contribution value, which equals a product of said predetermined value assigned to each of said extracted categories and said aging index value assigned to each of said extracted categories;
adding each category contribution value to arrive at a total contribution value for each said extracted category from said list; and
classifying a highest total contribution value of any of said extracted categories from said list as the dominant category of said consciousness input and said non-biological input;
determining, using said client module on said client device and/or said server module on said server and based on one or more of said shares and said non-biological input, an intensity of said dominant category of one or more of said shares and said non-biological input;
storing, in memory of said server and/or said client device, said dominant category of one or more of said shares and said non-biological input and said intensity of said dominant category;
conveying, using said client module and/or said server module, said dominant category of one or more of said shares and said intensity of said dominant category from said client device and/or said server to said website and/or said client device application presented on a plurality of said client devices; and
visually presenting, on said display interface of said plurality of client devices, one or more of said shares and said visual consciousness affect representation corresponding to one or more of said shares, wherein said consciousness affect representation appears adjacent to one or more of said shares, wherein said consciousness affect representation is based on said dominant category of one or more of said shares posted on said website and/or said client device application and said non-biological input, wherein said visual consciousness affect is chosen from a group comprising color, weather pattern, image, and animation, and wherein said visual consciousness affect representation is of a predetermined size, such that said predetermined size depends upon said calculated value obtained from said determining said intensity of said dominant category.
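The dominant-category calculation recited in the claim (predetermined values per extracted category, aging-index weighting by input age, per-category contribution sums, and classification of the highest total as the dominant category) can be sketched as follows. This is an illustrative reading, not the patented implementation: the category names, predetermined values, and the aging-index function are hypothetical assumptions introduced only for the example.

```python
# Illustrative sketch of the claim's dominant-category calculation.
# PREDETERMINED_VALUES and aging_index() are hypothetical assumptions;
# the claim does not specify their actual values or form.
from collections import defaultdict

PREDETERMINED_VALUES = {"joy": 3.0, "fear": 2.0, "calm": 1.0}  # hypothetical

def aging_index(age_hours: float) -> float:
    # Hypothetical aging index: more recent submissions weigh more heavily.
    return 1.0 / (1.0 + age_hours)

def dominant_category(extracted):
    """extracted: list of (category, age_hours) pairs, one per extracted
    category from each submission and the non-biological input."""
    totals = defaultdict(float)
    for category, age_hours in extracted:
        # category contribution value = predetermined value * aging index value
        contribution = PREDETERMINED_VALUES[category] * aging_index(age_hours)
        # add each contribution toward the category's total contribution value
        totals[category] += contribution
    # classify the highest total contribution value as the dominant category;
    # its total here stands in for the intensity used to scale the
    # visual consciousness affect representation
    dominant = max(totals, key=totals.get)
    return dominant, totals[dominant]

extracted = [("joy", 0.0), ("fear", 1.0), ("joy", 2.0), ("calm", 0.5)]
category, intensity = dominant_category(extracted)
```

In this sketch the returned total for the dominant category plays the role of the claimed intensity, which the claim uses to set the predetermined size of the visual consciousness affect representation (e.g., a color, weather pattern, image, or animation rendered adjacent to the shares).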