US 12,118,140 B2
Methods and systems for provisioning a virtual experience based on user preference
Sunder Jagannathan, Coimbatore (IN); Vivek Agarwal, Faridabad (IN); Hitesh Singla, Gurgaon (IN); and Sushant Kumar, Jamshedpur (IN)
Assigned to SY Interiors Pvt. Ltd., (IN)
Filed by Sunder Jagannathan, Coimbatore (IN); Vivek Agarwal, Faridabad (IN); Hitesh Singla, Gurgaon (IN); and Sushant Kumar, Jamshedpur (IN)
Filed on Jan. 13, 2023, as Appl. No. 18/096,576.
Application 18/096,576 is a continuation of application No. 17/359,624, filed on Jun. 27, 2021, granted, now Pat. No. 11,610,365.
Prior Publication US 2024/0241578 A1, Jul. 18, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/01 (2006.01); G06Q 50/00 (2024.01); G06T 13/40 (2011.01); G06T 19/00 (2011.01); G06V 40/16 (2022.01); G06V 40/19 (2022.01)
CPC G06F 3/013 (2013.01) [G06Q 50/01 (2013.01); G06T 13/40 (2013.01); G06T 19/00 (2013.01); G06V 40/172 (2022.01); G06V 40/19 (2022.01)] 18 Claims
OG exemplary drawing
 
1. A method of provisioning a virtual experience based on user preference, the method comprising:
receiving, using a processing device, an identity data associated with an identity of a user;
retrieving, using a storage device, a user profile data based on the identity data, wherein the retrieving of the user profile data comprises:
transmitting, using a communication device, the identity data to at least one social media server;
receiving, using the communication device, at least one social media network data and social media post data associated with the identity data from the at least one social media server; and
extracting, using the processing device, at least one facial data corresponding to at least one family member associated with the identity data;
analyzing, using the processing device, the user profile data using a machine learning model, wherein the analyzing comprises performing face recognition;
determining, using the processing device, at least one preference data based on the analyzing;
generating, using the processing device, an interactive 3D model data, wherein the generating of the interactive 3D model data comprises generating at least one avatar based on the at least one facial data and animating the at least one avatar in relation to at least one virtual utility object;
transmitting, using the communication device, the interactive 3D model data to a user device configured to present the interactive 3D model data; and
receiving, using the communication device, a reaction data from the user device, wherein the user device comprises at least one sensor configured to generate the reaction data based on a behavioral reaction of the user consuming the interactive 3D model data, wherein the generating of the interactive 3D model data comprises updating the interactive 3D model data based on the reaction data.
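The following is a minimal sketch, in Python, of the profile-retrieval, analysis, and model-generation steps recited in claim 1. It is not the patentee's implementation: the data structures, helper names (fetch_social_media, extract_facial_data, determine_preferences, generate_interactive_model), and the illustrative preference output are assumptions made only to show how the recited steps could fit together; the real social-media transport, face-recognition model, and 3D engine are stubbed out.

```python
# Minimal sketch of the pipeline in claim 1 (illustrative only; all helper
# names and return values are hypothetical, not from the patent).
from dataclasses import dataclass, field


@dataclass
class UserProfileData:
    network_data: dict                              # social media network data
    post_data: list                                 # social media post data
    facial_data: list = field(default_factory=list)  # one entry per family member


def fetch_social_media(identity_data: str) -> UserProfileData:
    """Stand-in for transmitting the identity data to a social media server
    and receiving network/post data back (hypothetical transport)."""
    return UserProfileData(network_data={"id": identity_data}, post_data=[])


def extract_facial_data(profile: UserProfileData) -> list:
    """Stand-in for extracting facial data of family members from the
    received posts; a real system would run a face detector here."""
    return [post for post in profile.post_data if post.get("has_face")]


def determine_preferences(profile: UserProfileData) -> dict:
    """Stand-in for the machine-learning analysis (including face
    recognition) that yields the preference data."""
    return {"style": "modern", "palette": "warm"}   # illustrative output only


def generate_interactive_model(facial_data: list, preferences: dict) -> dict:
    """Stand-in for generating avatars from the facial data and animating
    them in relation to a virtual utility object."""
    avatars = [{"face": f, "animation": "use_virtual_utility_object"}
               for f in facial_data]
    return {"avatars": avatars, "preferences": preferences}


def provision_virtual_experience(identity_data: str) -> dict:
    profile = fetch_social_media(identity_data)           # receive/retrieve
    profile.facial_data = extract_facial_data(profile)    # extract facial data
    preferences = determine_preferences(profile)          # analyze/determine
    return generate_interactive_model(profile.facial_data, preferences)


if __name__ == "__main__":
    model_data = provision_virtual_experience("user-123")
    print(model_data)   # would then be transmitted to the user device
```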
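A second sketch covers the closing limitation of claim 1: reaction data generated by a sensor on the user device is fed back to update the interactive 3D model data. The choice of behavioral signal (gaze dwell time) and the weight-update rule are assumptions for illustration; the claim does not specify either.

```python
# Minimal sketch of the reaction-data feedback loop in claim 1 (the sensor
# signal and update rule below are hypothetical, not from the patent).
from dataclasses import dataclass


@dataclass
class ReactionData:
    object_id: str        # the virtual utility object being viewed
    dwell_seconds: float  # assumed behavioral signal from the device sensor


def update_model(model_data: dict, reaction: ReactionData) -> dict:
    """Update the interactive 3D model data based on the reaction data:
    promote objects the user dwelt on, demote the rest."""
    weights = model_data.setdefault("object_weights", {})
    delta = 1.0 if reaction.dwell_seconds > 2.0 else -0.5
    weights[reaction.object_id] = weights.get(reaction.object_id, 0.0) + delta
    return model_data


if __name__ == "__main__":
    model = {"avatars": [], "object_weights": {}}
    # Reaction data as it might be received from the user device (illustrative).
    model = update_model(model, ReactionData("sofa-01", dwell_seconds=3.2))
    model = update_model(model, ReactionData("lamp-07", dwell_seconds=0.8))
    print(model["object_weights"])
```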