US 12,112,252 B1
Enhanced brand matching using multi-layer machine learning
Xianshun Chen, Seattle, WA (US); and Archiman Dutta, Shoreline, WA (US)
Assigned to Amazon Technologies, Inc., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on May 21, 2021, as Appl. No. 17/327,422.
Int. Cl. G06N 3/045 (2023.01); G06F 18/22 (2023.01); G06V 30/19 (2022.01)
CPC G06N 3/045 (2023.01) [G06F 18/22 (2023.01); G06V 30/19013 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising:
generating, based on a first universal image embedding and a second universal image embedding as inputs to a twin neural network, a first image vector associated with a first brand and a second image vector associated with a second brand, the first universal image embedding associated with the first brand and the second universal image embedding associated with the second brand;
generating, based on a first universal text embedding and a second universal text embedding as inputs to the twin neural network, a first text vector associated with the first brand and a second text vector associated with the second brand, the first universal text embedding associated with the first brand and the second universal text embedding associated with the second brand;
generating, based on a first universal name vector of a first product and a second universal name vector of a second product as inputs to the twin neural network, a third text vector associated with the first brand and a fourth text vector associated with the second brand, the first universal name vector associated with the first brand and the second universal name vector associated with the second brand;
generating, based on the first universal image embedding and the second universal image embedding as inputs to a difference neural network, a first difference vector indicative of a difference between the first universal image embedding and the second universal image embedding;
generating, based on the first universal text embedding and the second universal text embedding as inputs to the difference neural network, a second difference vector indicative of a difference between the first universal text embedding and the second universal text embedding;
generating, based on the first universal name vector and the second universal name vector as inputs to the difference neural network, a third difference vector indicative of a difference between the first universal name vector and the second universal name vector;
generating a concatenated vector by concatenating the first image vector, the second image vector, the first text vector, the second text vector, the third text vector, the fourth text vector, the first difference vector, the second difference vector, and the third difference vector; and
generating, based on the concatenated vector as an input to a feedforward neural network (FFN), a score between zero and one, the score indicative of a relationship between the first brand and the second brand.
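The first three generating steps of the claim apply a twin (shared-weight) neural network to each modality's universal embeddings, producing one vector per brand for images, descriptive text, and product names. The following is a minimal sketch, assuming PyTorch, of one way such a shared branch could be structured; the class name TwinEncoder, the layer sizes, and the two-layer architecture are illustrative assumptions and are not taken from the patent.

import torch
import torch.nn as nn

class TwinEncoder(nn.Module):
    """A single shared branch; applying it to both brands' embeddings realizes the twin network."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, emb_first_brand: torch.Tensor, emb_second_brand: torch.Tensor):
        # Shared weights: the same parameters encode both brands' embeddings,
        # yielding one vector per brand (e.g., the first and second image vectors).
        return self.net(emb_first_brand), self.net(emb_second_brand)

The same TwinEncoder instance would be reused across the image, text, and name inputs only if the modalities share a dimensionality; otherwise one encoder per modality is the more natural reading of the claim.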
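The fourth through sixth generating steps feed the same pairs of universal embeddings into a difference neural network that outputs a vector indicative of the pair's difference. The claim does not specify how the difference is formed; the sketch below assumes an element-wise subtraction followed by a learned projection, which is one common realization. The class name DifferenceNet is hypothetical.

import torch
import torch.nn as nn

class DifferenceNet(nn.Module):
    """Maps a pair of universal embeddings to a single difference vector."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
        )

    def forward(self, emb_first_brand: torch.Tensor, emb_second_brand: torch.Tensor):
        # Assumption: encode the element-wise subtraction of the two embeddings;
        # the claim only requires an output indicative of their difference.
        return self.net(emb_first_brand - emb_second_brand)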
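The final two steps concatenate the six twin-network vectors with the three difference vectors and pass the result through a feedforward neural network to obtain a score between zero and one. The sketch below, again assuming PyTorch, shows one way to assemble that scoring head; the class name BrandMatchScorer, the hidden width, and the use of a sigmoid to bound the output are illustrative assumptions.

import torch
import torch.nn as nn

class BrandMatchScorer(nn.Module):
    """Concatenates the twin-network vectors and difference vectors, then scores the brand pair."""

    def __init__(self, concat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(concat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, twin_vectors, difference_vectors) -> torch.Tensor:
        # Concatenate all nine vectors along the feature dimension.
        concatenated = torch.cat(list(twin_vectors) + list(difference_vectors), dim=-1)
        # A sigmoid bounds the relationship score to the interval (0, 1).
        return torch.sigmoid(self.ffn(concatenated))

In use, the nine vectors produced by the twin and difference networks for a candidate brand pair would be passed to forward(), and the resulting score interpreted as the strength of the relationship between the first brand and the second brand.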