US 12,481,884 B2
Reinforcement learning-based techniques for training a natural media agent
Jonathan Brandt, Santa Cruz, CA (US); Chen Fang, Sunnyvale, CA (US); Byungmoon Kim, Sunnyvale, CA (US); and Biao Jia, College Park, MD (US)
Assigned to Adobe Inc., San Jose, CA (US)
Filed by Adobe Inc., San Jose, CA (US)
Filed on Oct. 2, 2023, as Appl. No. 18/479,486.
Application 18/479,486 is a division of application No. 16/549,072, filed on Aug. 23, 2019, granted, now 11,775,817.
Prior Publication US 2024/0037398 A1, Feb. 1, 2024
Int. Cl. G06N 3/08 (2023.01); G06N 3/04 (2023.01); G09G 5/37 (2006.01)
CPC G06N 3/08 (2013.01) [G06N 3/04 (2013.01); G09G 5/37 (2013.01)] 20 Claims
OG exemplary drawing
 
8. A method comprising:
generating, based at least on processing a representation of a current working observation of a canvas in a synthetic rendering environment using a natural media agent comprising one or more deep neural networks, a representation of at least one primitive graphic action;
generating an updated state of the canvas in the synthetic rendering environment based at least on the at least one primitive graphic action;
updating an accumulated reward, accumulated over a plurality of iterations of the natural media agent, based on a difference between at least a portion of the updated state of the canvas and a current training image of a set of training images; and
updating, in response to detecting a trigger, the one or more deep neural networks using the accumulated reward, wherein the trigger comprises at least one of a designated number of iterations of the natural media agent, a number of iterations of the natural media agent that increases between episodes of iterations during which the accumulated reward is updated, or a determination that a position of a media rendering instrument in the updated state of the canvas moved more than a threshold distance from a center of an ego-centric patch of the canvas.
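The claimed method amounts to an episodic reinforcement-learning loop: the agent observes the canvas, emits a primitive graphic action, the environment renders the updated canvas, a reward is accumulated from the difference to the current training image, and the networks are updated when a trigger fires. The following is a minimal illustrative sketch of that loop only; all names (`PaintEnv`, `train_step`, the action encoding, the negative-L2 reward, the fixed-iteration trigger) are assumptions for exposition and are not identifiers or details from the patent.

```python
# Hypothetical sketch of the training loop in claim 8. Names and the
# reward/trigger choices are illustrative assumptions, not the patented method.
import numpy as np

class PaintEnv:
    """Toy synthetic rendering environment: the canvas is a 2-D grayscale grid."""
    def __init__(self, size=8):
        self.canvas = np.zeros((size, size))

    def apply(self, action):
        # A "primitive graphic action", here reduced to depositing paint
        # at (row, col) with a given intensity.
        r, c, intensity = action
        self.canvas[r, c] = np.clip(self.canvas[r, c] + intensity, 0.0, 1.0)
        return self.canvas

def reward(canvas, target):
    # Reward from the difference between the updated canvas and the
    # current training image (negative mean squared error).
    return -float(np.mean((np.asarray(canvas) - np.asarray(target)) ** 2))

def train_step(env, target, select_action, update_networks, max_iters=16):
    """One episode: accumulate reward over iterations, update on a trigger."""
    accumulated = 0.0
    for t in range(max_iters):
        obs = env.canvas.copy()        # current working observation of the canvas
        action = select_action(obs)    # agent proposes a primitive graphic action
        new_canvas = env.apply(action) # updated state of the canvas
        accumulated += reward(new_canvas, target)
        if t + 1 == max_iters:         # trigger: a designated number of iterations
            update_networks(accumulated)
    return accumulated
```

In practice `select_action` would be the deep-neural-network policy and `update_networks` a policy-gradient step; the claim also contemplates alternative triggers, such as an iteration budget that grows between episodes or the rendering instrument leaving an ego-centric canvas patch.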