US 12,205,028 B2
Co-disentangled series/text multi-modal representation learning for controllable generation
Yuncong Chen, Plainsboro, NJ (US); Zhengzhang Chen, Princeton Junction, NJ (US); Xuchao Zhang, Elkridge, MD (US); Wenchao Yu, Plainsboro, NJ (US); Haifeng Chen, West Windsor, NJ (US); LuAn Tang, Pennington, NJ (US); and Zexue He, La Jolla, CA (US)
Assigned to NEC Corporation, Tokyo (JP)
Filed by NEC Laboratories America, Inc., Princeton, NJ (US)
Filed on Oct. 3, 2022, as Appl. No. 17/958,597.
Claims priority of provisional application 63/253,169, filed on Oct. 7, 2021.
Claims priority of provisional application 63/308,081, filed on Feb. 9, 2022.
Prior Publication US 2023/0109729 A1, Apr. 13, 2023
Int. Cl. G06F 40/30 (2020.01); G06F 40/47 (2020.01); G06N 3/08 (2023.01)
CPC G06N 3/08 (2013.01) [G06F 40/30 (2020.01); G06F 40/47 (2020.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for multi-modal representation learning, comprising:
encoding, by a trained time series (TS) encoder, an input TS segment into a TS-shared latent representation and a TS-private latent representation; and
generating, by a trained text generator, a natural language text that explains the input TS segment, responsive to the TS-shared latent representation, the TS-private latent representation, and a text-private latent representation.
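The claimed pipeline — a time-series encoder that factors an input segment into a shared latent and a TS-private latent, and a text generator conditioned on those two latents plus a text-private latent — can be sketched as below. All class names, dimensions, linear maps, and the toy vocabulary are illustrative assumptions for exposition, not the patent's actual trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

class TSEncoder:
    """Toy stand-in for the trained TS encoder: maps a time-series
    segment to a TS-shared latent and a TS-private latent.
    (Assumed linear projections; the real encoder is learned.)"""
    def __init__(self, seg_len, d_shared, d_private):
        self.W_shared = rng.standard_normal((seg_len, d_shared))
        self.W_private = rng.standard_normal((seg_len, d_private))

    def encode(self, segment):
        z_shared = segment @ self.W_shared    # TS-shared latent
        z_private = segment @ self.W_private  # TS-private latent
        return z_shared, z_private

class TextGenerator:
    """Toy stand-in for the trained text generator: conditions on the
    TS-shared, TS-private, and text-private latents. The vocabulary
    and argmax "decoding" are placeholders for a learned decoder."""
    VOCAB = ["rising", "falling", "stable", "spiking", "anomalous"]

    def generate(self, z_shared, z_ts_private, z_text_private):
        ctx = np.concatenate([z_shared, z_ts_private, z_text_private])
        idx = int(np.abs(ctx).argmax()) % len(self.VOCAB)
        return f"The segment shows a {self.VOCAB[idx]} pattern."

segment = rng.standard_normal(16)           # input TS segment
encoder = TSEncoder(seg_len=16, d_shared=4, d_private=4)
z_shared, z_ts_private = encoder.encode(segment)
z_text_private = rng.standard_normal(4)     # text-private latent
text = TextGenerator().generate(z_shared, z_ts_private, z_text_private)
print(text)
```

The split into shared and private factors is what makes generation controllable: the shared latent carries content common to both modalities, while each private latent carries modality-specific style that can be varied independently.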