US 12,236,339 B2
Control input scheme for machine learning in motion control and physics based animation
Michael Taylor, San Mateo, CA (US); and Sergey Bashkirov, Novato, CA (US)
Assigned to Sony Interactive Entertainment Inc., Tokyo (JP)
Filed by Sony Interactive Entertainment Inc., Tokyo (JP)
Filed on Nov. 22, 2019, as Appl. No. 16/693,093.
Prior Publication US 2021/0158141 A1, May 27, 2021
Int. Cl. G06N 3/08 (2023.01); G05B 13/02 (2006.01); G06F 30/27 (2020.01); G06N 3/02 (2006.01); G06T 13/00 (2011.01); G06T 13/40 (2011.01); A63F 13/57 (2014.01); B25J 9/16 (2006.01); G06N 3/006 (2023.01)
CPC G06N 3/08 (2013.01) [G05B 13/027 (2013.01); A63F 13/57 (2014.09); B25J 9/1671 (2013.01); G06F 30/27 (2020.01); G06N 3/006 (2013.01); G06N 3/02 (2013.01); G06T 13/00 (2013.01); G06T 13/40 (2013.01)] 33 Claims
OG exemplary drawing
 
1. A method for control input, comprising:
a) taking an integral of an output value from a trained Motion Decision Neural Network for one or more movable joints to generate an integrated output value;
b) generating a subsequent output value using a machine learning algorithm that includes each of a sensor value, the integrated output value, and one or more of visual information, sound information, motion information, as inputs to the trained Motion Decision Neural Network, wherein the sensor value is generated by one or more sensors and the one or more of visual information, sound information, motion information is generated by one or more other sensors, wherein the one or more sensors is a different type of sensor than the one or more other sensors; and
c) imparting movement with the one or more movable joints according to an integral of the subsequent output value.
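
The claimed control loop can be sketched in code. The following is a minimal, hypothetical illustration only, not the patented implementation: the network is stubbed with a fixed linear map, the timestep, joint count, and sensor signals are invented placeholders, and `motion_decision_nn` stands in for the trained Motion Decision Neural Network. It shows the structure of steps a) through c): the network's output is integrated, the integral is fed back as a network input alongside values from two different sensor types, and the joints are driven by the integral of the subsequent output.

```python
import numpy as np

NUM_JOINTS = 2   # assumed joint count for illustration
DT = 0.01        # assumed control timestep in seconds


def motion_decision_nn(inputs):
    """Stand-in for the trained Motion Decision Neural Network.

    A fixed, seeded linear map replaces the trained model so the
    sketch is runnable; it returns one output value per joint.
    """
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((NUM_JOINTS, inputs.size)) * 0.1
    return weights @ inputs


integrated = np.zeros(NUM_JOINTS)    # step a: running integral of NN output
joint_angles = np.zeros(NUM_JOINTS)  # joint state commanded in step c

for step in range(100):
    # Two different sensor types feed the network (step b): a placeholder
    # joint/IMU reading and a placeholder visual feature from another sensor.
    sensor_value = np.sin(0.1 * step) * np.ones(NUM_JOINTS)
    visual_info = np.cos(0.1 * step) * np.ones(NUM_JOINTS)

    # Step b: inputs include the sensor value AND the integrated output,
    # closing the loop through the integral.
    inputs = np.concatenate([sensor_value, integrated, visual_info])
    output = motion_decision_nn(inputs)

    # Steps a and c: integrate the subsequent output and impart movement
    # according to that integral, so the network effectively commands
    # rates rather than absolute joint positions.
    integrated += output * DT
    joint_angles = integrated.copy()
```

Driving the joints from the integral (rather than from the raw network output) means the network's outputs act as velocity-like commands, which tends to smooth the resulting motion; that property is the design point the claim's integration steps capture.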