US 11,922,269 B2
Reading out optically readable codes
Benjamin Hoetter, Cologne (DE); Gregor Fischer, Overath (DE); Klaus Ruelberg, Bornheim (DE); and Dirk Schäfer, Kerpen (DE)
Assigned to BAYER AKTIENGESELLSCHAFT, Leverkusen (DE)
Filed by BAYER AKTIENGESELLSCHAFT, Leverkusen (DE)
Filed on Dec. 15, 2022, as Appl. No. 18/081,839.
Claims priority of application No. 21216812 (EP), filed on Dec. 22, 2021; application No. 21216923 (EP), filed on Dec. 22, 2021; and application No. 22156217 (EP), filed on Feb. 10, 2022.
Prior Publication US 2023/0196044 A1, Jun. 22, 2023
Int. Cl. G06K 7/14 (2006.01)
CPC G06K 7/1417 (2013.01) [G06K 7/1413 (2013.01)] 15 Claims
OG exemplary drawing
 
1. A computer-implemented method, comprising the steps of:
receiving at least one image recording of an object, wherein the object comprises an optically readable code, wherein the optically readable code is introduced into a surface of the object;
identifying the object on the basis of the at least one image recording;
reading out transformation parameters for the identified object from a database;
carrying out one or more transformations of the at least one image recording in accordance with the transformation parameters and generating a transformed image recording in the process,
wherein at least one transformation is carried out with the aid of a trained machine learning model,
wherein the trained machine learning model was trained on the basis of training data,
wherein the training data for each object of a multiplicity of objects comprise i) at least one reference image recording of an optical code introduced into a surface of the object and ii) a transformed reference image recording of the optical code,
wherein decoding the optical code in the transformed reference image recording generates fewer decoding errors than decoding the optical code in the reference image recording,
wherein the training for each object of the multiplicity of objects comprises:
inputting the at least one reference image recording into the machine learning model;
receiving a predicted transformed reference image recording from the machine learning model;
calculating a deviation between the transformed reference image recording and the predicted transformed reference image recording;
modifying parameters of the machine learning model so as to reduce the deviation; and
decoding the optically readable code imaged in the transformed image recording.
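
The training recited in the claim (inputting a reference image recording, receiving a predicted transformed reference image recording, calculating a deviation, and modifying the model parameters) corresponds to standard supervised image-to-image training. The following is a minimal PyTorch sketch of one possible implementation; the network architecture (CodeEnhancementNet), the L1 loss standing in for the "deviation", and the training_pairs data source are assumptions made for illustration, as the claim does not specify them.

```python
# Minimal training sketch; architecture, loss, and data interface are assumptions.
import torch
import torch.nn as nn

class CodeEnhancementNet(nn.Module):
    """Placeholder image-to-image model; the claim does not fix an architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train(model, training_pairs, epochs=10, lr=1e-3):
    """training_pairs yields (reference_image, transformed_reference_image)
    tensor pairs of shape (batch, 1, H, W), one set per object."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # stands in for the "deviation" in the claim
    for _ in range(epochs):
        for reference, target in training_pairs:
            # input the reference image recording into the model
            predicted = model(reference)
            # calculate the deviation between the transformed reference
            # image recording and the predicted one
            loss = loss_fn(predicted, target)
            # modify the model parameters so as to reduce the deviation
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```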
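
Read in sequence, the method steps describe a read-out pipeline: identify the object, look up object-specific transformation parameters in a database, transform the image recording (at least one transformation using the trained model), and decode the code. The sketch below illustrates that flow under stated assumptions: identify_object and decode_code are hypothetical placeholders, the parameter database is modelled as a plain dictionary keyed by object ID, and the trained model is any torch.nn.Module mapping an image tensor to a transformed image tensor.

```python
# Minimal pipeline sketch; helper functions and the database interface
# are hypothetical and only indicate the roles named in the claim.
import torch

def identify_object(image):
    """Hypothetical: identify the imaged object, e.g. by classification or
    matching against known object geometries; returns an object ID."""
    raise NotImplementedError

def decode_code(image):
    """Hypothetical: decode the optically readable code (e.g. a DataMatrix
    or QR code) in the transformed image; returns the decoded payload."""
    raise NotImplementedError

def read_out_code(image, model, parameter_db):
    # identify the object on the basis of the image recording
    object_id = identify_object(image)
    # read out the transformation parameters stored for this object
    params = parameter_db[object_id]
    # carry out classical transformations (e.g. crop, contrast adjustment)
    # according to the stored parameters, here represented as callables
    for transform in params.get("transforms", []):
        image = transform(image)
    # at least one transformation is carried out by the trained model
    with torch.no_grad():
        transformed = model(image)
    # decode the optically readable code imaged in the transformed image
    return decode_code(transformed)
```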