US 11,720,939 B2
Retail checkout terminal fresh produce identification system
Marcel Herz, Parramatta (AU); and Christopher Sampson, Parramatta (AU)
Assigned to TILITER PTY LTD, Parramatta (AU)
Filed by TILITER PTY LTD, Parramatta (AU)
Filed on Jun. 19, 2020, as Appl. No. 16/906,248.
Application 16/906,248 is a continuation of application No. PCT/AU2018/051369, filed on Dec. 20, 2018.
Claims priority of application No. 2017905146 (AU), filed on Dec. 21, 2017.
Prior Publication US 2020/0364501 A1, Nov. 19, 2020
Int. Cl. G06T 7/00 (2017.01); G06Q 30/06 (2023.01); G06T 7/194 (2017.01); G06T 7/90 (2017.01); G06T 5/50 (2006.01); G06T 7/40 (2017.01); G06F 18/214 (2023.01); G06F 18/24 (2023.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06V 10/44 (2022.01); G06V 20/52 (2022.01); G06V 20/68 (2022.01)
CPC G06Q 30/06 (2013.01) [G06F 18/214 (2023.01); G06F 18/24 (2023.01); G06T 5/50 (2013.01); G06T 7/194 (2017.01); G06T 7/40 (2013.01); G06T 7/90 (2017.01); G06V 10/454 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06V 20/52 (2022.01); G06T 2207/20081 (2013.01); G06T 2207/20084 (2013.01); G06T 2207/20221 (2013.01); G06V 20/68 (2022.01)] 26 Claims
OG exemplary drawing
 
1. A method of image categorisation, comprising:
in pre-processing:
receiving a plurality of images of backgrounds in which items are to be recognised as a background image set,
receiving a number of original images of items to be recognised in an original item image set,
masking the background of the original images in the original item image set to generate a masked item image set,
digitally augmenting the masked item image set to generate an augmented masked item image set that includes a larger number of images of masked items than the masked item image set,
superimposing each image of the augmented masked item image set on each of the images of the background image set to generate a plurality of new training images as part of a training image set, the training image set thereby providing quantitative variation to train a neural network,
extracting feature vectors from the training image set, wherein extracting the feature vectors includes calculating a colour space histogram;
generating a classification model by training the neural network on the training image set and the extracted feature vectors, wherein the classification model provides a prediction of an image's categorisation;
embedding the classification model in a processor; and
receiving an image for categorisation, wherein the processor is in communication with a Point-of-Sale (POS) system, the processor running the classification model on the received image to provide output to the POS system of a prediction of the image's categorisation.
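The pre-processing steps recited in claim 1 (masking item backgrounds, digitally augmenting the masked items, superimposing them on background images, and extracting a colour-space histogram as a feature vector) can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the brightness-threshold mask, the rotation/flip augmentations, and all function names here are hypothetical simplifications chosen only to make the data-flow concrete.

```python
import numpy as np

def mask_background(item_img, threshold=40):
    # Crude illustrative mask: pixels brighter than the threshold are
    # treated as "item", the rest as background to be discarded.
    mask = item_img.mean(axis=2) > threshold
    return item_img * mask[..., None], mask

def augment(masked_img, mask):
    # Enlarge the masked item set with simple geometric variants
    # (four 90-degree rotations plus a horizontal flip).
    variants = [(np.rot90(masked_img, k), np.rot90(mask, k)) for k in range(4)]
    variants.append((masked_img[:, ::-1], mask[:, ::-1]))
    return variants

def superimpose(masked_img, mask, background):
    # Paste the masked item pixels over a background image to
    # synthesise a new training image.
    out = background.copy()
    out[mask] = masked_img[mask]
    return out

def colour_histogram(img, bins=8):
    # Per-channel colour histogram, concatenated and normalised,
    # as a simple stand-in for the claimed feature vector.
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    return np.concatenate(hists).astype(float) / img[..., 0].size

# Tiny synthetic demonstration: one item image, three backgrounds.
rng = np.random.default_rng(0)
item = np.zeros((32, 32, 3), dtype=np.uint8)
item[8:24, 8:24] = rng.integers(100, 256, size=(16, 16, 3), dtype=np.uint8)
backgrounds = [rng.integers(0, 40, size=(32, 32, 3), dtype=np.uint8)
               for _ in range(3)]

masked, mask = mask_background(item)
training_set = [superimpose(m, mk, bg)
                for m, mk in augment(masked, mask)
                for bg in backgrounds]
features = [colour_histogram(img) for img in training_set]
print(len(training_set))  # 5 augmented variants x 3 backgrounds = 15
```

The combinatorial superimposition (every augmented item over every background) is what gives the claimed "quantitative variation": a handful of original item photographs yields a much larger training set, and the histogram features would then accompany the images into neural-network training.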