US 12,073,605 B1
Attributing aspects of generated visual contents to training examples
Yair Adato, Kfar Ben Nun (IL); Michael Feinstein, Tel Aviv (IL); Nimrod Sarid, Tel Aviv (IL); Ron Mokady, Ramat Hasharon (IL); Eyal Gutflaish, Beer Sheva (IL); and Vered Horesh-Yaniv, Tel Aviv (IL)
Assigned to BRIA ARTIFICIAL INTELLIGENCE LTD., Tel Aviv (IL)
Filed by BRIA ARTIFICIAL INTELLIGENCE LTD., Tel Aviv (IL)
Filed on Nov. 7, 2023, as Appl. No. 18/387,677.
Application 18/387,677 is a continuation of application No. PCT/IL2023/051132, filed on Nov. 5, 2023.
Claims priority of provisional application 63/525,754, filed on Jul. 10, 2023.
Claims priority of provisional application 63/444,805, filed on Feb. 10, 2023.
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 10/764 (2022.01); G06V 10/774 (2022.01)
CPC G06V 10/764 (2022.01) [G06V 10/774 (2022.01)] 19 Claims
OG exemplary drawing
 
1. A non-transitory computer readable medium storing a software program comprising data and computer-implementable instructions that, when executed by at least one processor, cause the at least one processor to perform operations for attributing aspects of generated visual contents to training examples, the operations comprising:
receiving a first visual content generated using a generative model, wherein the generative model is a result of training a machine learning model using a plurality of training examples, and wherein each training example of the plurality of training examples is associated with a respective visual content;
determining one or more properties of a first aspect of the first visual content;
determining one or more properties of a second aspect of the first visual content;
for each training example of the plurality of training examples, analyzing the respective visual content to determine one or more properties of the respective visual content;
using the one or more properties of the first aspect of the first visual content and the properties of the visual contents associated with the plurality of training examples to attribute the first aspect of the first visual content to a first subgroup of at least one but not all of the plurality of training examples;
using the one or more properties of the second aspect of the first visual content and the properties of the visual contents associated with the plurality of training examples to attribute the second aspect of the first visual content to a second subgroup of at least one but not all of the plurality of training examples;
determining that the at least one visual content associated with the training examples of the first subgroup is associated with a first at least one source;
determining that the at least one visual content associated with the training examples of the second subgroup is associated with a second at least one source;
for each source of the first at least one source, updating a data-record associated with the source based on the attribution of the first aspect of the first visual content; and
for each source of the second at least one source, updating a data-record associated with the source based on the attribution of the second aspect of the first visual content,
wherein the training of the machine learning model to obtain the generative model includes a first training step and a second training step, wherein the first training step uses a third subgroup of the plurality of training examples to obtain an intermediate model, wherein the second training step uses a fourth subgroup of the plurality of training examples and uses the intermediate model for initialization to obtain the generative model, wherein the fourth subgroup differs from the third subgroup, and wherein the operations further comprise:
comparing a result associated with the first visual content and the intermediate model with a result associated with the first visual content and the generative model; and
for each training example of the fourth subgroup, determining whether to attribute the first aspect of the first visual content to the respective training example based on a result of the comparison.
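
Below is a minimal sketch, in Python, of the attribution flow recited in claim 1; it is an illustration under stated assumptions, not the patented method. It assumes the properties of an aspect and of each training example's associated visual content are embedding vectors, that attribution to a subgroup is thresholded cosine similarity, and that each source's data-record is a simple attribution counter. The names TrainingExample, attribute_aspect, and update_source_records are hypothetical.

# Hypothetical sketch of the attribution flow in claim 1 (not the patented
# method). Properties are modeled as embedding vectors; attribution is
# thresholded cosine similarity; the per-source data-record is a counter.

from dataclasses import dataclass

import numpy as np


@dataclass
class TrainingExample:
    example_id: str
    source_id: str          # provenance of the associated visual content
    properties: np.ndarray  # determined properties of that visual content


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def attribute_aspect(aspect_properties: np.ndarray,
                     examples: list[TrainingExample],
                     threshold: float = 0.9) -> list[TrainingExample]:
    # Attribute one aspect of the generated visual content to the subgroup of
    # training examples whose visual-content properties are similar enough.
    # Claim 1 requires the subgroup to hold at least one but not all examples.
    return [ex for ex in examples
            if cosine_similarity(aspect_properties, ex.properties) >= threshold]


def update_source_records(subgroup: list[TrainingExample],
                          records: dict[str, int]) -> None:
    # For each source associated with the subgroup's visual contents, update
    # its data-record; here the record is simply an attribution count.
    for source_id in {ex.source_id for ex in subgroup}:
        records[source_id] = records.get(source_id, 0) + 1


rng = np.random.default_rng(0)
examples = [TrainingExample(f"ex{i}", f"src{i % 2}", rng.normal(size=8))
            for i in range(4)]
aspect = examples[0].properties + 0.05 * rng.normal(size=8)  # near-duplicate aspect
records: dict[str, int] = {}
update_source_records(attribute_aspect(aspect, examples), records)
print(records)  # e.g. {'src0': 1}

The same flow runs once per aspect (the first aspect and the second aspect of the claim), each producing its own subgroup and its own per-source record updates.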
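
The closing wherein-clause gates attribution to the fourth subgroup on a comparison between a result associated with the first visual content and the intermediate model and the corresponding result under the final generative model. The sketch below, continuing the same assumptions, models the two results as precomputed scalar scores (for example, log-likelihoods of the generated content under each model); the margin and the fallback similarity test are illustrative choices, not taken from the patent.

# Hypothetical sketch of the intermediate-vs-final model comparison. The two
# scores are assumed precomputed; higher means the model better explains the
# generated content.

import numpy as np


def _cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def attribute_to_fourth_subgroup(aspect_properties: np.ndarray,
                                 fourth_subgroup_properties: list[np.ndarray],
                                 intermediate_score: float,
                                 generative_score: float,
                                 margin: float = 0.1,
                                 sim_threshold: float = 0.9) -> list[int]:
    # If the second training step barely changed how well the model explains
    # the content, the aspect likely predates the fourth subgroup: attribute
    # nothing. Otherwise fall back to per-example property similarity and
    # return the indices of the fourth-subgroup examples to attribute.
    if generative_score - intermediate_score < margin:
        return []
    return [i for i, props in enumerate(fourth_subgroup_properties)
            if _cos(aspect_properties, props) >= sim_threshold]

The intuition behind the gate: if fine-tuning on the fourth subgroup did not make the model explain the generated content appreciably better, that subgroup is unlikely to have contributed the aspect, so its training examples are excluded from attribution.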