US 12,217,033 B2
Systems and methods for code understanding and generation
Yue Wang, Singapore (SG); Weishi Wang, Singapore (SG); Shafiq Rayhan Joty, Singapore (SG); and Chu Hong Hoi, Singapore (SG)
Assigned to Salesforce, Inc., San Francisco, CA (US)
Filed by Salesforce, Inc., San Francisco, CA (US)
Filed on Sep. 26, 2023, as Appl. No. 18/475,103.
Application 18/475,103 is a continuation of application No. 17/459,968, filed on Aug. 27, 2021, granted, now 11,782,686.
Claims priority of provisional application 63/189,857, filed on May 18, 2021.
Prior Publication US 2024/0020102 A1, Jan. 18, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 9/44 (2018.01); G06F 8/41 (2018.01); G06F 18/214 (2023.01); G06F 40/20 (2020.01); G06N 3/047 (2023.01); G06N 3/084 (2023.01)
CPC G06F 8/427 (2013.01) [G06F 18/214 (2023.01); G06F 40/20 (2020.01); G06N 3/047 (2023.01); G06N 3/084 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method for programming language (PL) generation and understanding using an encoder-decoder model, the method comprising:
generating, by a neural network model, a first predicted output in response to a bimodal input sequence comprising a PL segment and a natural language (NL) segment according to a first pre-training task, wherein the first predicted output is generated by:
masking a plurality of identifiers in the PL segment of the bimodal input sequence to generate a masked bimodal input sequence, wherein a designated mask token is used for a specific identifier of the plurality of identifiers,
encoding, by an encoder of the neural network model, the masked bimodal input sequence into a first representation, and
generating, by a decoder of the neural network model, a target sequence comprising the masked plurality of identifiers and corresponding designated mask tokens, from the first representation to generate the first predicted output,
wherein the PL segment and the NL segment belong to an unlabeled code sample;
computing a first training objective based on the first predicted output according to the first pre-training task;
generating, by the neural network model, a second predicted output in response to the bimodal input sequence according to a second pre-training task;
computing a second training objective based on the second predicted output according to the second pre-training task; and
alternately or jointly updating the neural network model based on the first training objective and the second training objective.
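The first pre-training task recited above (masking identifiers in the PL segment with designated mask tokens and having the decoder emit a target sequence pairing each mask token with the identifier it hides) can be illustrated with a minimal sketch. This is not the patented implementation; the sentinel format `<MASK0>`, the helper name `mask_identifiers`, and the whitespace-joined target layout are illustrative assumptions, and a real system would operate on tokenized sequences fed to an encoder-decoder model.

```python
import re

def mask_identifiers(pl_segment, identifiers):
    """Replace every occurrence of each given identifier in the PL segment
    with a designated mask token (one sentinel per unique identifier), and
    build the target sequence pairing each sentinel with its identifier.

    Sketch of the claimed masked-identifier pre-training task; sentinel
    naming and target layout are assumptions, not the patented format."""
    masked = pl_segment
    sentinel_for = {}
    for i, name in enumerate(identifiers):
        sentinel_for[name] = f"<MASK{i}>"
        # Word-boundary match so e.g. masking "a" does not touch "max".
        masked = re.sub(rf"\b{re.escape(name)}\b", sentinel_for[name], masked)
    # Target sequence: each designated mask token followed by the
    # identifier it replaced, as in the claim's decoder output.
    target = " ".join(f"{sentinel_for[n]} {n}" for n in identifiers)
    return masked, target

# Bimodal input: an NL segment (comment) plus a PL segment (code),
# drawn from an unlabeled code sample as the claim recites.
nl_segment = "# return the larger of two numbers"
pl_segment = "def maximum(a, b):\n    return a if a > b else b"

masked_pl, target = mask_identifiers(pl_segment, ["maximum", "a", "b"])
masked_bimodal_input = nl_segment + "\n" + masked_pl
```

In a full pipeline, `masked_bimodal_input` would be encoded into the first representation and the decoder trained to generate `target`; the second pre-training task would consume the same bimodal input under a different objective, with the model updated alternately or jointly on the two losses.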