US 11,989,628 B2
Machine teaching complex concepts assisted by computer vision and knowledge reasoning
Emilio Ashton Vital Brazil, Rio de Janeiro (BR); Rodrigo da Silva Ferreira, Rio de Janeiro (BR); Viviane Torres da Silva, Laranjeiras (BR); Renato Fontoura de Gusmao Cerqueira, Rio de Janeiro (BR); Raphael Melo Thiago, Rio de Janeiro (BR); Elton Figueiredo de Souza Soares, Rio de Janeiro (BR); Leonardo Guerreiro Azevedo, Rio de Janeiro (BR); Vinicius Costa Villas Boas Segura, Rio de Janeiro (BR); Ana Fucs, Rio de Janeiro (BR); Juliana Jansen Ferreira, Rio de Janeiro (BR); Joana de Noronha Ribeiro de Almeida, Lisbon (PT); Bruno Felix Carvalho, Lisbon (PT); Dario Sergio Cersosimo, Lisbon (PT); and Marco Daniel Melo Ferraz, Oeiras (PT)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by International Business Machines Corporation, Armonk, NY (US); and Petrogal Brasil S.A., Rio de Janeiro (BR)
Filed on Mar. 5, 2021, as Appl. No. 17/193,697.
Prior Publication US 2022/0284343 A1, Sep. 8, 2022
Int. Cl. G06T 7/11 (2017.01); G06F 18/21 (2023.01); G06F 18/22 (2023.01); G06N 5/04 (2023.01); G06N 20/00 (2019.01)
CPC G06N 20/00 (2019.01) [G06F 18/2178 (2023.01); G06F 18/22 (2023.01); G06N 5/04 (2013.01); G06T 7/11 (2017.01)] 19 Claims
OG exemplary drawing
 
1. A computer-implemented method comprising:
receiving a user annotated concept for a given image in a given context;
decomposing the given image into parts;
classifying the parts;
creating relationships among the parts and between the parts and the given context;
storing at least the relationships with the user annotated concept in a knowledge base;
retrieving a second image associated with the given context;
decomposing the second image into parts;
classifying the second image's parts;
determining relationships among the second image's parts and between the second image's parts and the given context;
comparing classifications of the second image's parts and the relationships among the second image's parts and between the second image's parts and the given context, with classifications of the given image's parts and the relationships among the given image's parts and between the given image's parts and the given context; and
based on the comparing and responsive to determining that the created relationships associated with the given image and the determined relationships associated with the second image are similar based on a similarity threshold, annotating the second image with the user annotated concept for the given image.
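
Claim 1 above describes a teach-then-annotate flow: decompose a teaching image into parts, classify them, relate them to each other and to the given context, store the result with the user annotated concept in a knowledge base, and then annotate a second image when its relationship structure is sufficiently similar. The sketch below illustrates that flow in Python under stated assumptions only: the `decompose`, `classify`, and `relate` helpers, the `KnowledgeBase` class, the Jaccard-style set similarity, and the 0.6 threshold are hypothetical stand-ins chosen for illustration and are not drawn from the patent's disclosure.

```python
# Illustrative sketch of the claimed teach-then-annotate flow.
# All helpers and the similarity measure are hypothetical stand-ins.
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Minimal store mapping a concept to the relationship triples observed for it."""
    entries: dict = field(default_factory=dict)

    def store(self, concept, relationships):
        self.entries[concept] = set(relationships)


def decompose(image):
    """Placeholder for a computer-vision step that splits an image into parts."""
    return image["parts"]  # toy example: parts are supplied directly


def classify(parts):
    """Placeholder part classifier; here each part already carries its label."""
    return [p["label"] for p in parts]


def relate(labels, context):
    """Create relationships among parts and between parts and the context."""
    rels = {(a, "related_to", b) for a in labels for b in labels if a < b}
    rels |= {(label, "observed_in", context) for label in labels}
    return rels


def similarity(rels_a, rels_b):
    """Jaccard overlap of relationship sets (one possible similarity measure)."""
    if not rels_a and not rels_b:
        return 1.0
    return len(rels_a & rels_b) / len(rels_a | rels_b)


def teach_and_annotate(kb, concept, image, second_image, context, threshold=0.6):
    # Teaching phase: decompose, classify, relate, and store with the concept.
    rels = relate(classify(decompose(image)), context)
    kb.store(concept, rels)

    # Inference phase: repeat for the second image and compare against the KB.
    rels_2 = relate(classify(decompose(second_image)), context)
    if similarity(kb.entries[concept], rels_2) >= threshold:
        return {"image": second_image["name"], "annotation": concept}
    return None


if __name__ == "__main__":
    kb = KnowledgeBase()
    image_a = {"name": "A", "parts": [{"label": "part_x"}, {"label": "part_y"}]}
    image_b = {"name": "B", "parts": [{"label": "part_x"}, {"label": "part_y"}]}
    print(teach_and_annotate(kb, "concept_1", image_a, image_b, "context_1"))
```

Representing relationships as (subject, predicate, object) triples keeps the comparison a simple set operation; the claimed knowledge-reasoning component could equally use a richer similarity over the stored relationships, which the claim leaves open beyond the similarity threshold.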