US 11,914,787 B2
Method for dynamic interaction and electronic device thereof
Kachana Raghunatha Reddy, Bangalore (IN); Vanraj Vala, Bangalore (IN); Barath Raj Kandur Raja, Bangalore (IN); Mohamed Akram Ulla Shariff, Bangalore (IN); Parameswaranath Vadackupurath Mani, Vandiperiyar (IN); Beda Prakash Meher, Sundargarh (IN); Mahender Rampelli, Hanmakonda (IN); Namitha Poojary, Bangalore (IN); Sujay Srinivasa Murthy, Bengaluru (IN); Amit Arvind Mankikar, Bangalore (IN); Balabhaskar Veerannagari, Bangalore (IN); Sreevatsa Dwaraka Bhamidipati, Bangalore (IN); and Sanjay Ghosh, Bangalore (IN)
Assigned to Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed by Samsung Electronics Co., Ltd., Suwon-si (KR)
Filed on Dec. 27, 2021, as Appl. No. 17/646,096.
Application 17/646,096 is a continuation of application No. 16/134,873, filed on Sep. 18, 2018, granted, now Pat. No. 11,209,907.
Claims priority of application No. 201741033023 (IN), filed on Sep. 18, 2017; and application No. 201741033023 (IN), filed on Sep. 6, 2018.
Prior Publication US 2022/0147153 A1, May 12, 2022
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 3/01 (2006.01); G06F 3/16 (2006.01); G06V 40/20 (2022.01)
CPC G06F 3/017 (2013.01) [G06F 3/167 (2013.01); G06V 40/20 (2022.01); G06F 2203/011 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A method for operating an electronic device, the method comprising:
detecting at least one gestural input from a user, wherein the at least one gestural input is detected at a specific region among at least one pre-defined region on the electronic device while a first action is being performed by the electronic device;
determining an emotional state of the user corresponding to the at least one gestural input and the specific region, based on an emotional model comprising a mapping relationship between at least one type of the at least one gestural input, the at least one pre-defined region on the electronic device, and at least one emotional state;
generating one or more contextual parameters based on the determined emotional state;
performing a second action related to the first action based on the one or more contextual parameters;
detecting another at least one gestural input, in response to the performed second action; and
updating the emotional model based on the another at least one gestural input.
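Claim 1 recites a detect-map-act-update loop: a gesture at a pre-defined device region is mapped to an emotional state through an emotional model, contextual parameters derived from that state drive a second action, and a follow-up gesture updates the model. The Kotlin sketch below is a minimal, hypothetical illustration of that flow only; every name in it (GestureType, DeviceRegion, EmotionalModel, handleGesture, the example parameter values) is an assumption for illustration and is not taken from the patent's specification.

// Minimal sketch of the claimed flow; all identifiers are hypothetical.
enum class GestureType { TAP, SQUEEZE, STROKE, DOUBLE_TAP }
enum class DeviceRegion { TOP_EDGE, BOTTOM_EDGE, LEFT_EDGE, RIGHT_EDGE, BACK_PANEL }
enum class EmotionalState { CALM, FRUSTRATED, EXCITED, NEUTRAL }

data class GesturalInput(val type: GestureType, val region: DeviceRegion)

// Emotional model: a mapping between gesture type, pre-defined region, and
// emotional state, kept mutable so the final "update" step can revise it.
class EmotionalModel(
    private val mapping: MutableMap<Pair<GestureType, DeviceRegion>, EmotionalState> =
        mutableMapOf(
            GestureType.SQUEEZE to DeviceRegion.RIGHT_EDGE to EmotionalState.FRUSTRATED,
            GestureType.STROKE to DeviceRegion.BACK_PANEL to EmotionalState.CALM,
            GestureType.DOUBLE_TAP to DeviceRegion.TOP_EDGE to EmotionalState.EXCITED,
        )
) {
    fun emotionalStateFor(input: GesturalInput): EmotionalState =
        mapping[input.type to input.region] ?: EmotionalState.NEUTRAL

    // A follow-up gesture after the second action revises the mapping for the
    // gesture/region pair that triggered it.
    fun update(previous: GesturalInput, feedbackState: EmotionalState) {
        mapping[previous.type to previous.region] = feedbackState
    }
}

// One pass through the claimed method.
fun handleGesture(
    model: EmotionalModel,
    firstAction: String,                  // action already being performed
    input: GesturalInput,                 // gesture detected at a pre-defined region
    detectFollowUp: () -> GesturalInput?  // another gesture detected after the second action
) {
    // Determine the emotional state from the (gesture type, region) pair via the model.
    val state = model.emotionalStateFor(input)

    // Generate contextual parameters based on the determined state (illustrative values).
    val contextualParams = when (state) {
        EmotionalState.FRUSTRATED -> mapOf("pace" to "slower", "verbosity" to "high")
        EmotionalState.EXCITED    -> mapOf("pace" to "faster", "verbosity" to "low")
        else                      -> mapOf("pace" to "normal", "verbosity" to "normal")
    }

    // Perform a second action related to the first, shaped by the contextual parameters.
    println("Adjusting '$firstAction' with parameters $contextualParams")

    // Detect another gestural input in response to the second action and update the model.
    detectFollowUp()?.let { feedback ->
        model.update(input, model.emotionalStateFor(feedback))
    }
}

fun main() {
    val model = EmotionalModel()
    handleGesture(
        model,
        firstAction = "reading a notification aloud",
        input = GesturalInput(GestureType.SQUEEZE, DeviceRegion.RIGHT_EDGE),
        detectFollowUp = { GesturalInput(GestureType.STROKE, DeviceRegion.BACK_PANEL) }
    )
}

In this reading, the model is simply a lookup table keyed on gesture type and region, and the update step overwrites the entry for the originating gesture with the state inferred from the user's follow-up gesture; the patent itself does not prescribe this particular data structure.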