US 12,456,187 B2
Pretraining framework for neural networks
Xiaosong Wang, Rockville, MD (US); Ziyue Xu, Reston, VA (US); Lickkong Tam, Santa Clara, CA (US); Dong Yang, Pocatello, ID (US); and Daguang Xu, Potomac, MD (US)
Assigned to NVIDIA Corporation, Santa Clara, CA (US)
Filed by NVIDIA Corporation, Santa Clara, CA (US)
Filed on Jun. 30, 2021, as Appl. No. 17/364,341.
Prior Publication US 2023/0019211 A1, Jan. 19, 2023
Int. Cl. G06N 3/0455 (2023.01); G06F 40/30 (2020.01); G06N 3/045 (2023.01); G06N 3/0464 (2023.01); G06N 3/048 (2023.01); G06N 3/08 (2023.01); G06N 3/084 (2023.01); G06N 3/0895 (2023.01); G06N 3/09 (2023.01); G06T 3/4046 (2024.01); G06T 7/00 (2017.01); G06V 10/80 (2022.01); G06V 10/82 (2022.01)
CPC G06T 7/0012 (2013.01) [G06N 3/048 (2023.01); G06T 3/4046 (2013.01)] 33 Claims
OG exemplary drawing
 
1. One or more processors, comprising:
circuitry to train one or more neural networks to perform inference on one or more images based, at least in part, on an extent to which training text corresponds to one or more training images, the extent being determined by the one or more neural networks using inputs of paired text and image data and unpaired text and image data.
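The claim recites determining an "extent to which training text corresponds to one or more training images" but does not specify how that extent is computed. One common way such a correspondence measure is instantiated in text-image pretraining (an assumption for illustration, not the patented method) is a contrastive similarity score: paired text and image embeddings should score high against each other, while unpaired combinations score low. The sketch below uses hypothetical random embeddings and a CLIP-style contrastive loss to show the shape of such a computation.

```python
import numpy as np

def cosine_similarity_matrix(img_emb, txt_emb):
    """Pairwise cosine similarities between image and text embeddings.

    Row i, column j is the correspondence score between image i and text j.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return img @ txt.T

def contrastive_loss(sim, temperature=0.07):
    """Cross-entropy over similarity rows; for paired data the match
    for image i is text i (the diagonal). Unpaired combinations act as
    negatives that the loss pushes toward low similarity."""
    logits = sim / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Hypothetical embeddings: 4 image/text pairs, 8-dimensional features.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(4, 8))
txt_emb = rng.normal(size=(4, 8))

sim = cosine_similarity_matrix(img_emb, txt_emb)
loss = contrastive_loss(sim)
```

In a training loop, `loss` would be backpropagated through the encoders producing `img_emb` and `txt_emb`; the names and the diagonal-pairing convention here are illustrative assumptions, not drawn from the patent text.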