US 11,868,914 B2
Moderation of user-generated content
Ashutosh Kulshreshtha, Sunnyvale, CA (US); Luca de Alfaro, Mountain View, CA (US); Mitchell Slep, San Francisco, CA (US); Nicu Daniel Cornea, Santa Clara, CA (US); Sowmya Subramanian, San Francisco, CA (US); and Ethan G. Russell, Jersey City, NJ (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Sep. 12, 2022, as Appl. No. 17/942,844.
Application 17/942,844 is a continuation of application No. 16/154,377, filed on Oct. 8, 2018, granted, now 11,443,214.
Application 16/154,377 is a continuation of application No. 14/189,937, filed on Feb. 25, 2014, granted, now 10,095,980.
Application 14/189,937 is a continuation of application No. 13/098,342, filed on Apr. 29, 2011, granted, now 8,700,580, issued on Apr. 15, 2014.
Prior Publication US 2023/0169371 A1, Jun. 1, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 16/00 (2019.01); G06N 5/048 (2023.01); G09B 29/10 (2006.01); H04W 12/40 (2021.01)
CPC G06N 5/048 (2013.01) [G06F 16/00 (2019.01); G09B 29/106 (2013.01); H04W 12/40 (2021.01)] 20 Claims
OG exemplary drawing
 
1. A computing system for moderating user-generated content, the computing system comprising:
one or more processors; and
one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
obtaining a proposed user-generated content associated with a feature for inclusion in an online database;
obtaining metadata associated with a user providing the proposed user-generated content, wherein the metadata comprises information associated with previous user interactions;
accessing a reliability engine comprising one or more machine-learned models configured to determine a score indicative of a probability that unreliable information has been provided;
providing the proposed user-generated content to the one or more machine-learned models of the reliability engine;
receiving as an output of the one or more machine-learned models of the reliability engine, and in response to receipt of the proposed user-generated content, an unreliability score indicative of a probability that the user having proposed the user-generated content has provided unreliable information; and
determining to implement an action on the proposed user-generated content based on the metadata associated with the user and the unreliability score as compared to one or more thresholds.
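The following is a minimal, hypothetical sketch of the moderation flow recited in claim 1: a reliability engine wrapping a machine-learned model produces an unreliability score for proposed user-generated content, and the score, adjusted in light of metadata about the contributing user's previous interactions, is compared against one or more thresholds to choose an action. All class names, functions, threshold values, and the metadata fields below are illustrative assumptions, not the patent's actual implementation.

# Hypothetical illustration of the claimed moderation flow; names and
# thresholds are assumptions for exposition only.
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Action(Enum):
    PUBLISH = "publish"            # accept the content into the online database
    QUEUE_REVIEW = "queue_review"  # hold the content for further review
    REJECT = "reject"              # discard the proposed content


@dataclass
class UserMetadata:
    # "Information associated with previous user interactions" (assumed fields).
    prior_edits: int = 0
    prior_edits_accepted: int = 0

    @property
    def acceptance_rate(self) -> float:
        if self.prior_edits == 0:
            return 0.0
        return self.prior_edits_accepted / self.prior_edits


@dataclass
class ReliabilityEngine:
    # One or more machine-learned models mapping proposed content to a
    # probability that the contributor has provided unreliable information.
    model: Callable[[str], float]

    def unreliability_score(self, proposed_content: str) -> float:
        return self.model(proposed_content)


def moderate(proposed_content: str,
             user: UserMetadata,
             engine: ReliabilityEngine,
             reject_threshold: float = 0.9,
             review_threshold: float = 0.5) -> Action:
    """Choose an action from the unreliability score and the user metadata,
    compared against one or more thresholds (values are assumptions)."""
    score = engine.unreliability_score(proposed_content)

    # Contributors with a strong history of accepted edits get some leeway
    # (one assumed way the user metadata could influence the decision).
    if user.prior_edits >= 10 and user.acceptance_rate > 0.8:
        score *= 0.5

    if score >= reject_threshold:
        return Action.REJECT
    if score >= review_threshold:
        return Action.QUEUE_REVIEW
    return Action.PUBLISH


# Example usage with a stand-in "model" (a constant scorer).
engine = ReliabilityEngine(model=lambda content: 0.6)
user = UserMetadata(prior_edits=20, prior_edits_accepted=18)
print(moderate("Business hours: 9am-5pm", user, engine))  # Action.PUBLISH

In this sketch the threshold comparison and the metadata-based adjustment are kept separate so that either the model output or the user's history alone cannot force publication, mirroring the claim's requirement that the action depend on both the metadata and the unreliability score as compared to one or more thresholds.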