US 11,853,677 B2
Generating integrated circuit placements using neural networks
Anna Darling Goldie, San Francisco, CA (US); Azalia Mirhoseini, Mountain View, CA (US); Ebrahim Songhori, San Jose, CA (US); Wenjie Jiang, Mountain View, CA (US); Shen Wang, Sunnyvale, CA (US); Roger David Carpenter, San Francisco, CA (US); Young-Joon Lee, San Jose, CA (US); Mustafa Nazim Yazgan, Cupertino, CA (US); Chian-min Richard Ho, Palo Alto, CA (US); Quoc V. Le, Sunnyvale, CA (US); James Laudon, Madison, WI (US); Jeffrey Adgate Dean, Palo Alto, CA (US); Kavya Srinivasa Setty, Sunnyvale, CA (US); and Omkar Pathak, Mountain View, CA (US)
Assigned to Google LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Dec. 15, 2022, as Appl. No. 18/082,392.
Application 18/082,392 is a continuation of application No. 17/555,085, filed on Dec. 17, 2021, granted, now Pat. No. 11,556,690, which is a division of application No. 17/238,128, filed on Apr. 22, 2021, granted, now Pat. No. 11,216,609, issued on Jan. 4, 2022.
Claims priority of provisional application 63/014,021, filed on Apr. 22, 2020.
Prior Publication US 2023/0117786 A1, Apr. 20, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 30/392 (2020.01); G06F 30/398 (2020.01); G06N 3/08 (2023.01)
CPC G06F 30/392 (2020.01) [G06F 30/398 (2020.01); G06N 3/08 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method of training a node placement neural network that comprises:
an encoder neural network that is configured to, at each of a plurality of time steps, receive an input representation comprising data representing a current state of a placement of a netlist of nodes on a surface of an integrated circuit chip as of the time step and process the input representation to generate an encoder output, and
a policy neural network configured to, at each of the plurality of time steps, receive an encoded representation generated from the encoder output generated by the encoder neural network and process the encoded representation to generate a score distribution over a plurality of positions on the surface of the integrated circuit chip, the method comprising:
generating a reinforcement learning training example, comprising:
obtaining training netlist data specifying a training netlist of nodes;
generating a training placement of the training netlist of nodes using the node placement neural network, and
determining a value of a reward function that measures a quality of the training placement of the training netlist of nodes, wherein the reward function comprises a plurality of terms that each measure a respective characteristic of the training placement; and
training the policy neural network on the reinforcement learning training example through reinforcement learning.
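The following Python sketch (using PyTorch) illustrates, under heavily simplified assumptions, the training setup recited in claim 1: an encoder network encodes the current state of a partial placement at each time step, a policy network produces a score distribution over positions on a discretized chip surface, a training example is generated by rolling out a placement of a toy netlist, and the policy is trained on that example through reinforcement learning (REINFORCE). The grid size, module names, state encoding, and the single toy wirelength reward (standing in for the claim's multi-term reward) are all illustrative assumptions, not taken from the patent.

import torch
import torch.nn as nn
from torch.distributions import Categorical

GRID = 8                    # assumption: surface discretized into an 8x8 grid of positions
NUM_NODES = 10              # assumption: toy netlist of 10 nodes, one placed per time step
STATE_DIM = NUM_NODES * 3   # per node: (placed flag, row, col), flattened into one vector
HIDDEN = 128

class EncoderNet(nn.Module):
    """Encodes the current state of the placement into a fixed-size vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, HIDDEN), nn.ReLU())

    def forward(self, state):
        return self.net(state)

class PolicyNet(nn.Module):
    """Maps the encoded representation to a score distribution over grid positions."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(HIDDEN, GRID * GRID)

    def forward(self, encoding, free_mask):
        logits = self.head(encoding)
        return logits.masked_fill(~free_mask, float("-inf"))  # forbid occupied positions

def make_state(positions):
    """Flattened state: (placed flag, row, col) per node, zeros for unplaced nodes."""
    state = torch.zeros(STATE_DIM)
    for i, (r, c) in enumerate(positions):
        state[i * 3: i * 3 + 3] = torch.tensor([1.0, float(r), float(c)])
    return state

def reward_fn(positions):
    """Toy single-term reward: negative Manhattan length of a chain net through the
    placed nodes (a stand-in for the multi-term reward recited in the claim)."""
    pos = torch.tensor(positions, dtype=torch.float32)
    return -(pos[1:] - pos[:-1]).abs().sum().item()

def generate_training_example(encoder, policy):
    """Rolls out one placement of the toy netlist, one node per time step, and
    returns the episode's action log-probabilities and its reward."""
    free_mask = torch.ones(GRID * GRID, dtype=torch.bool)
    positions, log_probs = [], []
    for _ in range(NUM_NODES):
        encoding = encoder(make_state(positions))
        dist = Categorical(logits=policy(encoding, free_mask))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        r, c = divmod(action.item(), GRID)
        positions.append((r, c))
        free_mask[action] = False
    return torch.stack(log_probs), reward_fn(positions)

# REINFORCE training loop over generated examples, with a running-mean baseline.
encoder, policy = EncoderNet(), PolicyNet()
opt = torch.optim.Adam(list(encoder.parameters()) + list(policy.parameters()), lr=1e-3)
baseline = 0.0
for step in range(200):
    log_probs, reward = generate_training_example(encoder, policy)
    advantage = reward - baseline
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(log_probs.sum() * advantage)      # policy-gradient objective for this episode
    opt.zero_grad()
    loss.backward()
    opt.step()

The running-mean baseline and the single wirelength term are simplifications chosen to keep the sketch self-contained; the claim itself only requires that the reward comprise a plurality of terms, each measuring a respective characteristic of the training placement.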