US 12,333,426 B2
Method and apparatus for creating dialogue, and storage medium
Zhenyu Jiao, Beijing (CN); Lei Han, Beijing (CN); Hongjie Guo, Beijing (CN); Shuqi Sun, Beijing (CN); Tingting Li, Beijing (CN); and Ke Sun, Beijing (CN)
Assigned to BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD., Beijing (CN)
Filed by BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD., Beijing (CN)
Filed on Apr. 29, 2021, as Appl. No. 17/302,266.
Claims priority of application No. 202010991996.2 (CN), filed on Sep. 21, 2020.
Prior Publication US 2021/0248471 A1, Aug. 12, 2021
Int. Cl. G06F 16/35 (2025.01); G06N 3/045 (2023.01); G06N 3/08 (2023.01)
CPC G06N 3/08 (2013.01) [G06F 16/35 (2019.01); G06N 3/045 (2023.01)] 15 Claims
OG exemplary drawing
 
1. A method for generating a dialogue, performed by an electronic device, comprising:
obtaining problem information;
inputting the problem information to a small sample learning model to generate a first feature, wherein the first feature comprises a problem feature and a support set feature, the support set feature comprises a feature of a support set of the small sample learning model, and the support set comprises training data in the small sample learning model;
inputting the problem information to a deep learning (DL) model to generate a second feature, wherein the second feature comprises a low-order feature and a high-order feature, the low-order feature is adjacent to an input layer of the DL model, and the high-order feature is adjacent to a final output layer of the DL model;
combining the first feature and the second feature to generate a feature sequence;
inputting the feature sequence to a fusion model to generate dialogue information corresponding to the problem information; and
providing recommendation information for a user based on the dialogue information;
wherein combining the first feature and the second feature to generate the feature sequence comprises:
combining the first feature and the second feature in different layers by using a plurality of preset fusion operators to generate the feature sequence, wherein the preset fusion operators comprise a splice operator, an inner product operator and a bilinear feature crossed product operator;
wherein inputting the problem information to the small sample learning model to generate the first feature comprises:
performing a feature extraction on the problem information by the small sample learning model to generate the problem feature; and
obtaining a support set corresponding to the problem information by the small sample learning model according to the problem feature, and obtaining the support set feature of the support set corresponding to the problem information,
wherein obtaining the support set corresponding to the problem information by the small sample learning model according to the problem feature comprises:
obtaining a plurality of candidate support sets by the small sample learning model, and obtaining an intention feature of each candidate support set;
generating a direct score of the problem feature relative to the plurality of the candidate support sets by the small sample learning model according to the problem feature and the intention feature of each candidate support set; and
selecting a support set corresponding to the problem information from the plurality of candidate support sets according to the direct score of the problem feature relative to the plurality of candidate support sets.
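
The sketch below is not part of the claim; it is a minimal illustration of the overall flow recited in the method steps (problem information in, two feature-generating models, a combination step, and a fusion model producing dialogue information), assuming each model is an opaque callable that returns fixed-length vectors. The names generate_dialogue, few_shot_model, deep_model and fusion_model are hypothetical placeholders, not identifiers from the patent.

```python
# Minimal sketch of the claimed dialogue-generation flow, under the stated
# assumptions; every identifier below is hypothetical.
from typing import Callable
import numpy as np

def generate_dialogue(
    problem_information: str,
    few_shot_model: Callable[[str], tuple],
    deep_model: Callable[[str], tuple],
    combine: Callable[[np.ndarray, np.ndarray], np.ndarray],
    fusion_model: Callable[[np.ndarray], str],
) -> str:
    # First feature: problem feature plus support set feature from the
    # small sample (few-shot) learning model.
    problem_feature, support_set_feature = few_shot_model(problem_information)
    first_feature = np.concatenate([problem_feature, support_set_feature])

    # Second feature: low-order feature (near the input layer) and
    # high-order feature (near the final output layer) of the DL model.
    low_order, high_order = deep_model(problem_information)
    second_feature = np.concatenate([low_order, high_order])

    # Combine the two features into a feature sequence and decode it into
    # dialogue information; recommendation information for the user would
    # then be derived from this output.
    feature_sequence = combine(first_feature, second_feature)
    return fusion_model(feature_sequence)
```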
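
The next sketch illustrates one way the recited fusion operators (splice, inner product, bilinear feature crossed product) could combine the first and second features into a feature sequence, assuming both features are fixed-length vectors. The operator definitions and the bilinear weight shape are assumptions for illustration only; the claim does not fix them.

```python
# Hypothetical realizations of the three preset fusion operators.
import numpy as np

def splice(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Splice operator: concatenate the two feature vectors."""
    return np.concatenate([a, b])

def inner_product(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Inner product operator: a single similarity value between the features."""
    return np.array([np.dot(a, b)])

def bilinear_cross(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Bilinear feature crossed product: a^T W_k b for each slice k of W."""
    # w has shape (out_dim, len(a), len(b)); each slice yields one crossed value.
    return np.einsum('i,kij,j->k', a, w, b)

def combine_features(first: np.ndarray, second: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Apply the preset fusion operators and join the results into a feature sequence."""
    parts = [splice(first, second), inner_product(first, second), bilinear_cross(first, second, w)]
    return np.concatenate(parts)

# Example usage with toy dimensions.
rng = np.random.default_rng(0)
first_feature = rng.normal(size=8)   # e.g. problem feature + support set feature
second_feature = rng.normal(size=8)  # e.g. low-order + high-order DL features
w = rng.normal(size=(4, 8, 8))       # assumed learnable bilinear weights
feature_sequence = combine_features(first_feature, second_feature, w)
print(feature_sequence.shape)        # (16 + 1 + 4,) = (21,)
```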
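
Finally, a sketch of the support-set selection recited in the last wherein clause: each candidate support set yields an intention feature, the problem feature is scored against every candidate, and the highest-scoring candidate is selected. Here the intention feature is assumed to be a prototype-style mean of the candidate's example features and the direct score is assumed to be cosine similarity; both choices are illustrative assumptions, not definitions from the patent.

```python
# Hypothetical support-set scoring and selection.
import numpy as np

def intention_feature(support_set_features: np.ndarray) -> np.ndarray:
    """Intention feature of one candidate support set (assumed: mean of its examples)."""
    return support_set_features.mean(axis=0)

def direct_scores(problem_feature: np.ndarray, candidate_sets: list) -> np.ndarray:
    """Direct score of the problem feature relative to each candidate support set."""
    scores = []
    for feats in candidate_sets:
        proto = intention_feature(feats)
        scores.append(
            np.dot(problem_feature, proto)
            / (np.linalg.norm(problem_feature) * np.linalg.norm(proto) + 1e-8)
        )
    return np.array(scores)

def select_support_set(problem_feature: np.ndarray, candidate_sets: list) -> np.ndarray:
    """Select the candidate support set with the highest direct score."""
    scores = direct_scores(problem_feature, candidate_sets)
    return candidate_sets[int(np.argmax(scores))]
```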