CPC H04L 51/212 (2022.05) [G06F 3/04817 (2013.01); G06F 3/04842 (2013.01); H04L 51/046 (2013.01); H04L 51/063 (2013.01); H04L 51/52 (2022.05)]

18 Claims

1. A method for identifying offensive message content comprising:
for each particular responsive message of a plurality of responsive messages received in response to an initial message:
providing, by one or more computers, content of the particular responsive message as an input to a machine learning model that has been trained to predict a likelihood that an initial message provided by a first user device includes offensive content based on processing of responsive message content from a different user device provided in response to the initial message;
processing, by one or more computers, the content of the particular responsive message through the machine learning model to generate output data indicating a likelihood that the initial message includes offensive content; and
storing, by one or more computers, the generated output data;
determining, by one or more computers and based on the output data generated for each of the plurality of responsive messages, whether the initial message likely includes offensive content; and
based on a determination, by one or more computers, that the output data generated for each of the plurality of responsive messages indicates that the initial message likely includes offensive content, performing, by one or more computers, one or more remedial operations to mitigate exposure to the offensive content, wherein performing the one or more remedial operations comprises:
adjusting, using one or more computers, a content score associated with the initial message content, wherein the adjusted content score causes the initial message content to be demoted in a list of content items.
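For a concrete, non-limiting illustration of the flow recited in claim 1, the Python sketch below runs each responsive message through a trained model, stores the per-message outputs, determines whether every output indicates that the initial message likely includes offensive content, and, if so, performs the remedial operation of adjusting the initial message's content score so it is demoted in a ranked list of content items. The model interface (a callable returning a likelihood), the 0.5 threshold, the multiplicative penalty, and all identifiers are illustrative assumptions and are not drawn from the claims.

```python
from typing import Callable, Dict, List

# Illustrative sketch only; interfaces, threshold, and penalty are assumptions,
# not limitations taken from the claim language.

def initial_message_likely_offensive(
    responsive_messages: List[str],
    model: Callable[[str], float],  # trained to predict likelihood the initial message is offensive
    threshold: float = 0.5,         # assumed decision threshold
) -> bool:
    """Process each responsive message through the model, store the outputs,
    and decide whether the initial message likely includes offensive content."""
    outputs = [model(text) for text in responsive_messages]  # stored output data
    # Per the claim, the remedial path is taken when the output generated for
    # each responsive message indicates likely offensive content.
    return all(score >= threshold for score in outputs)


def demote_initial_message(
    content_scores: Dict[str, float],
    initial_message_id: str,
    penalty: float = 0.5,           # assumed multiplicative demotion factor
) -> List[str]:
    """Remedial operation: adjust the content score associated with the initial
    message so it is demoted in a list of content items ranked by score."""
    content_scores[initial_message_id] *= penalty
    return sorted(content_scores, key=content_scores.get, reverse=True)
```

In use, a caller would pass the replies to a post into initial_message_likely_offensive and, when it returns True, call demote_initial_message before rendering the ranked feed; other remedial operations contemplated by the claims could be substituted at that point.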