US 12,321,973 B2
Systems and methods for user platform based recommendations
Jeremy Huang, Plano, TX (US); Michelle Emamdie, Saint Augustine, FL (US); Derek Bumpas, Allen, TX (US); Jiaxin Guo, Plano, TX (US); and Qiaochu Tang, Frisco, TX (US)
Assigned to Capital One Services, LLC, McLean, VA (US)
Filed by Capital One Services, LLC, McLean, VA (US)
Filed on May 29, 2024, as Appl. No. 18/676,962.
Application 18/676,962 is a continuation of application No. 17/213,395, filed on Mar. 26, 2021, granted, now Pat. No. 12,008,623.
Prior Publication US 2024/0320727 A1, Sep. 26, 2024
This patent is subject to a terminal disclaimer.
Int. Cl. G06Q 30/00 (2023.01); G06F 16/9535 (2019.01); G06F 16/9536 (2019.01); G06Q 30/0601 (2023.01); G06Q 30/0645 (2023.01)
CPC G06Q 30/0629 (2013.01) [G06F 16/9535 (2019.01); G06F 16/9536 (2019.01); G06Q 30/0627 (2013.01); G06Q 30/0641 (2013.01); G06Q 30/0645 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A computer-implemented method for determining user attributes, the method comprising:
accessing one or more user platforms;
identifying user-related content linked to a user via the one or more user platforms; and
extracting one or more user attributes based on the user-related content by:
receiving user images associated with the user-related content;
determining one or more image attributes of the user images, the one or more image attributes including content of the user images determined by performing image recognition on the user images;
determining context associated with the user images;
applying the one or more image attributes of the user images and the context associated with the user images to a machine-learning model, the machine-learning model being trained to output the one or more user attributes based on both the one or more image attributes of the user images and the context associated with the user images;
receiving, from the machine-learning model, the outputted one or more user attributes; and
storing the outputted one or more user attributes in association with the user for further processing.
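For illustration only, the flow recited in claim 1 can be sketched in Python. Every name below (access_platforms, identify_user_content, recognize_image, attribute_model, and the data classes) is a hypothetical stand-in rather than the patented implementation; the image-recognition and machine-learning steps are stubbed so the sketch runs end to end and simply mirrors the claimed sequence of accessing platforms, identifying content, extracting image attributes and context, applying a model, and storing the output.

```python
# Minimal sketch of the claim 1 flow; all names are hypothetical stand-ins.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class UserImage:
    """An image linked to the user on a platform, plus surrounding context."""
    url: str
    caption: str = ""   # nearby text, hashtags, etc. (the image's context)
    platform: str = ""


@dataclass
class UserProfile:
    user_id: str
    attributes: Dict[str, float] = field(default_factory=dict)


def access_platforms(user_id: str) -> List[str]:
    # Hypothetical: return the user platforms that can be accessed for this user.
    return ["photo_feed", "marketplace"]


def identify_user_content(user_id: str, platforms: List[str]) -> List[UserImage]:
    # Hypothetical: fetch user-related content (here, images) from each platform.
    return [UserImage(url="https://example.com/img1.jpg",
                      caption="weekend camping trip", platform="photo_feed")]


def recognize_image(image: UserImage) -> List[str]:
    # Stand-in for image recognition that determines the content of an image.
    # A real system would run an object-detection or classification model here.
    return ["tent", "suv", "mountains"]


def attribute_model(image_labels: List[str], context: str) -> Dict[str, float]:
    # Stand-in for the trained machine-learning model of the claim, which maps
    # image attributes plus context to user attributes (here, scored interests).
    scores: Dict[str, float] = {}
    if "suv" in image_labels or "tent" in image_labels:
        scores["outdoor_lifestyle"] = 0.9
    if "camping" in context:
        scores["outdoor_lifestyle"] = max(scores.get("outdoor_lifestyle", 0.0), 0.95)
    return scores


def extract_user_attributes(user_id: str) -> UserProfile:
    """Mirror claim 1: access platforms, identify content, determine image
    attributes and context, apply the model, and store the outputted attributes."""
    profile = UserProfile(user_id=user_id)
    platforms = access_platforms(user_id)
    for image in identify_user_content(user_id, platforms):
        labels = recognize_image(image)                 # image attributes
        context = image.caption                         # context of the image
        attributes = attribute_model(labels, context)   # model output
        # Store the outputted attributes in association with the user.
        for name, score in attributes.items():
            profile.attributes[name] = max(profile.attributes.get(name, 0.0), score)
    return profile


if __name__ == "__main__":
    print(extract_user_attributes("user-123").attributes)
```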