US 12,033,080 B2
Sparse recovery autoencoder
Xinnan Yu, New York, NY (US); Shanshan Wu, Austin, TX (US); Daniel Holtmann-Rice, Albany, CA (US); Dmitry Storcheus, New York, NY (US); Sanjiv Kumar, Jericho, NY (US); and Afshin Rostamizadeh, New York, NY (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by GOOGLE LLC, Mountain View, CA (US)
Filed on Jun. 14, 2019, as Appl. No. 16/442,203.
Claims priority of provisional application 62/685,418, filed on Jun. 15, 2018.
Prior Publication US 2019/0385063 A1, Dec. 19, 2019
Int. Cl. G06N 3/084 (2023.01); G06F 17/16 (2006.01); G06N 3/02 (2006.01); G06N 3/045 (2023.01)
CPC G06N 3/084 (2013.01) [G06F 17/16 (2013.01); G06N 3/02 (2013.01); G06N 3/045 (2023.01)] 20 Claims
 
1. A computer-implemented method comprising:
receiving, using at least one processor of a computing device, a dataset of sparse vectors from a requesting process, the sparse vectors having a dimension of d;
initializing an encoding matrix stored in a memory of the computing device;
selecting a subset of sparse vectors from the dataset;
modifying, using the at least one processor, the encoding matrix via machine learning to minimize reconstruction error for the subset of sparse vectors by:
generating an encoded vector of dimension k for each vector in the subset using the encoding matrix, where k<d,
decoding each of the encoded vectors using S projected subgradient steps, where S is a predetermined number that is lower than a number of steps used for convergence, and
using back propagation to adjust the encoding matrix;
generating an encoded dataset by encoding each vector in the dataset of sparse vectors using the encoding matrix; and
providing the encoded dataset to the requesting process.
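The decoding step recited in the claim (S projected subgradient steps toward an l1-minimizing solution, deliberately stopped before convergence) can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the matrix initialization, step-size schedule, dimensions, and sparsity level are all assumptions chosen for the example.

```python
import numpy as np

def decode(A, y, S=10, step=0.1):
    """Approximately solve min ||x||_1 subject to A x = y using S projected
    subgradient steps. S is intentionally smaller than the number of steps a
    full l1 solver would need, matching the truncated decoder in the claim.
    """
    # Precompute (A A^T)^{-1} once; it is reused by every projection.
    AAt_inv = np.linalg.inv(A @ A.T)

    def project(z):
        # Euclidean projection of z onto the affine set {x : A x = y}.
        return z - A.T @ (AAt_inv @ (A @ z - y))

    x = A.T @ (AAt_inv @ y)  # least-norm feasible starting point
    for t in range(S):
        # Subgradient of ||x||_1 is sign(x); decaying step size (assumed schedule).
        x = project(x - (step / np.sqrt(t + 1)) * np.sign(x))
    return x

rng = np.random.default_rng(0)
d, k, S = 100, 40, 10                        # k < d, as the claim requires
A = rng.normal(size=(k, d)) / np.sqrt(k)     # toy encoding-matrix initialization
x = np.zeros(d)
x[rng.choice(d, 5, replace=False)] = rng.normal(size=5)  # sparse input vector
y = A @ x                                    # encoded vector of dimension k
x_hat = decode(A, y, S=S)
print(x_hat.shape, np.allclose(A @ x_hat, y, atol=1e-6))
```

In the full method, this decoder would be unrolled inside the training loop so that the reconstruction error between `x_hat` and `x` can be backpropagated through the S steps to adjust the encoding matrix A; the sketch above shows only the forward decode. Each projection keeps the iterate exactly feasible (A x_hat = y), so stopping early trades l1 optimality, not measurement consistency.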