US 12,346,786 B2
Data-efficient reinforcement learning for continuous control tasks
Martin Riedmiller, Balgheim (DE); Roland Hafner, Balgheim (DE); Mel Vecerik, London (GB); Timothy Paul Lillicrap, London (GB); Thomas Lampe, London (GB); Ivaylo Popov, Ruse (BG); Gabriel Barth-Maron, London (GB); and Nicolas Manfred Otto Heess, London (GB)
Assigned to DeepMind Technologies Limited, London (GB)
Filed by DeepMind Technologies Limited, London (GB)
Filed on Jul. 12, 2023, as Appl. No. 18/351,440.
Application 18/351,440 is a continuation of application No. 16/882,373, filed on May 22, 2020, granted, now 11,741,334.
Application 16/882,373 is a continuation of application No. 16/528,260, filed on Jul. 31, 2019, granted, now 10,664,725, issued on May 26, 2020.
Application 16/528,260 is a continuation of application No. PCT/IB2018/000051, filed on Jan. 31, 2018.
Claims priority of provisional application 62/452,930, filed on Jan. 31, 2017.
Prior Publication US 2024/0062035 A1, Feb. 22, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 3/00 (2023.01); G06F 18/21 (2023.01); G06F 18/214 (2023.01); G06N 3/006 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01); G06N 3/088 (2023.01)
CPC G06N 3/006 (2013.01) [G06F 18/2148 (2023.01); G06F 18/2185 (2023.01); G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 3/088 (2013.01)] 11 Claims
OG exemplary drawing
 
1. A system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises:
a plurality of workers, each of the plurality of workers having access to a shared memory configured to store current parameters of the actor neural network,
wherein each worker is configured to operate independently of each other worker,
wherein each worker is configured to communicate with and provide instructions to a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network,
wherein each worker is configured to repeatedly perform
(i) a set of updating operations comprising determining that a threshold number of writes to the shared memory have occurred since a preceding update to the values of the parameters of the actor neural network, and in response to the determination, updating current values of the parameters of the actor neural network in the shared memory, and
(ii) a set of acting operations to control the respective agent replica using the actor neural network to perform an action in the respective replica of the environment in order to generate training data for training the actor neural network,
wherein the set of acting operations comprises:
receiving a current observation characterizing a current state of the environment replica interacted with by the agent replica associated with the worker,
selecting a current action to be performed by the agent replica associated with the worker in response to the current observation using the actor neural network and in accordance with the current values of the parameters,
identifying an actual reward resulting from the agent replica performing the current action in response to the current observation, and
receiving a next observation characterizing a next state of the environment replica interacted with by the agent replica, wherein the environment replica transitioned into the next state from the current state in response to the agent replica performing the current action, and
wherein each worker performs a plurality of iterations of the set of updating operations after selecting the current action and prior to selecting a new action in response to the next observation.
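
For orientation, the sketches below show one plausible realization of the claimed system in Python. First, the actor neural network of claim 1: a function that maps an observation to a next action drawn from a continuous space of possible actions, in accordance with the current values of its parameters. This is a minimal NumPy sketch; the architecture (one hidden layer, tanh-bounded outputs) and every size and name are illustrative assumptions, not taken from the patent.

    import numpy as np

    class ActorNetwork:
        """Maps observations to next actions in a continuous action
        space, in accordance with the values of its parameters
        (claim 1). Architecture and sizes are illustrative only."""

        def __init__(self, obs_dim, action_dim, hidden_dim=64, seed=0):
            rng = np.random.default_rng(seed)
            # The "values of parameters of the actor neural network".
            self.params = {
                "w1": rng.normal(0.0, 0.1, (obs_dim, hidden_dim)),
                "b1": np.zeros(hidden_dim),
                "w2": rng.normal(0.0, 0.1, (hidden_dim, action_dim)),
                "b2": np.zeros(action_dim),
            }

        def select_action(self, observation):
            # tanh keeps every action dimension inside the continuous
            # range [-1, 1]; a real system would rescale to the
            # actuator range of the agent.
            h = np.tanh(observation @ self.params["w1"] + self.params["b1"])
            return np.tanh(h @ self.params["w2"] + self.params["b2"])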
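
Next, the shared memory that each of the plurality of workers has access to. The claim requires only that it store the current parameter values and that a worker be able to determine how many writes have occurred since its own preceding update, so the sketch keeps a monotonically increasing write counter; the class name and counter field are illustrative assumptions.

    class SharedParameterMemory:
        """Shared memory storing the current actor parameters
        (claim 1). A monotonically increasing write counter lets each
        worker determine how many writes have occurred since its own
        preceding update."""

        def __init__(self, params):
            self._params = params
            self.write_count = 0

        def read(self):
            # Hand out copies so no worker mutates shared state in place.
            return {k: v.copy() for k, v in self._params.items()}

        def write(self, params):
            self._params = params
            self.write_count += 1

In a real multi-process deployment this read/write pair would need a lock or a lock-free parameter server; the sketch assumes cooperative workers in a single process.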
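
Finally, the worker loop, interleaving the two sets of operations the claim recites: the acting operations (receive the current observation, select an action with the actor network, identify the actual reward, receive the next observation) and a plurality of iterations of the updating operations before a new action is selected. The environment-replica API (reset()/step()), the hyperparameter names, and the stubbed learner step are all assumptions for illustration; the patent's actual update rule is not reproduced here.

    class Worker:
        """One of the plurality of workers: controls its own agent
        replica in its own environment replica, independently of every
        other worker (claim 1). Illustrative sketch only."""

        def __init__(self, shared_memory, env_replica, actor,
                     write_threshold=10, updates_per_action=4):
            self.shared = shared_memory
            self.env = env_replica  # assumed API: reset() -> obs, step(a) -> (obs, reward)
            self.actor = actor
            self.replay = []        # training data: (obs, action, reward, next_obs)
            self.write_threshold = write_threshold
            self.updates_per_action = updates_per_action
            # Seeded so a single-worker demo makes progress; in the
            # claimed system, other workers' writes advance write_count.
            self.writes_at_last_update = shared_memory.write_count - write_threshold

        def acting_step(self, observation):
            """The set of acting operations recited in claim 1."""
            # Select the current action using the actor network, in
            # accordance with the current values of the parameters.
            self.actor.params = self.shared.read()
            action = self.actor.select_action(observation)
            # Perform the action, identify the actual reward, and
            # receive the next observation of the environment replica.
            next_observation, reward = self.env.step(action)
            self.replay.append((observation, action, reward, next_observation))
            return next_observation

        def updating_step(self):
            """One iteration of the set of updating operations."""
            # Determine whether the threshold number of writes to the
            # shared memory has occurred since this worker's preceding
            # update, and only then update the shared parameters.
            if self.shared.write_count - self.writes_at_last_update >= self.write_threshold:
                # Placeholder for the learner: a real worker would
                # compute new parameter values from replayed transitions.
                new_params = self.shared.read()
                self.shared.write(new_params)
                self.writes_at_last_update = self.shared.write_count

        def run(self, num_steps):
            observation = self.env.reset()
            for _ in range(num_steps):
                observation = self.acting_step(observation)
                # Per the final clause of claim 1: a plurality of
                # updating iterations after selecting the current action
                # and before selecting a new action for the next
                # observation.
                for _ in range(self.updates_per_action):
                    self.updating_step()

Running several Worker instances over one SharedParameterMemory reproduces the decoupling the claim relies on: each worker's threshold test is crossed by writes the other workers make while it is acting.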