US 12,462,480 B2
Image processing method
Archil Tsiskaridze, Exeter (GB); and Philippe Georges Young, Exeter (GB)
Assigned to Vitaware Ltd, Exeter (GB)
Appl. No. 18/564,209
Filed by Vitaware Ltd, Exeter (GB)
PCT Filed May 24, 2022, PCT No. PCT/EP2022/064119
§ 371(c)(1), (2) Date Nov. 27, 2023,
PCT Pub. No. WO2022/248508, PCT Pub. Date Dec. 1, 2022.
Claims priority of application No. 2107492 (GB), filed on May 26, 2021.
Prior Publication US 2024/0249471 A1, Jul. 25, 2024
Int. Cl. G06T 17/00 (2006.01); A61B 6/40 (2024.01); A61B 6/51 (2024.01); A61C 13/34 (2006.01); G06V 10/26 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01)
CPC G06T 17/00 (2013.01) [A61B 6/4085 (2013.01); A61B 6/51 (2024.01); A61C 13/34 (2013.01); G06V 10/26 (2022.01); G06V 10/764 (2022.01); G06V 10/774 (2022.01); G06V 10/82 (2022.01); G06T 2210/41 (2013.01); G06V 2201/03 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method for training a machine learning inference model, wherein the trained machine learning inference model is for use in generating a representation of a virtual three-dimensional cast of an internal structure of an individual's intra-oral tissue from a computer-readable surface representation of exposed portions of the individual's intra-oral tissue, the method comprising:
obtaining first three-dimensional image data representing tissue characteristics within an intra-oral cavity for a plurality of individuals;
obtaining second three-dimensional image data representing a shape of a volume enclosed by an external surface of intra-oral tissue that is exposed within the intra-oral cavity for the plurality of individuals, wherein the second three-dimensional image data is of exposed intra-oral tissue that comprises both hard tissue and soft tissue, and wherein the first three-dimensional image data and second three-dimensional image data for each of the plurality of individuals form a co-registered pair; and
training the machine learning inference model using a training set obtained from the co-registered pairs of first three-dimensional image data and second three-dimensional image data for the plurality of individuals, wherein the first three-dimensional image data is used as a target for the training.
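The training arrangement in claim 1 can be illustrated with a minimal sketch. This is not the patented model: it assumes the surface scans (second image data) and volumetric scans (first image data) have already been voxelised onto a common grid as co-registered pairs, and it stands in a toy per-voxel linear map fitted by gradient descent where the claim would use a full inference model (e.g. a 3-D network). All function names and shapes here are illustrative.

```python
import numpy as np

def make_training_set(pairs):
    """Flatten co-registered (surface, tissue) grid pairs into X, y.

    `pairs` is a list of (surface_grid, tissue_grid) arrays of equal
    shape. Per the claim, the first 3-D image data (internal tissue
    characteristics) is used as the training target y; the second
    3-D image data (exposed-surface volume) is the input X.
    """
    X = np.stack([surface.ravel() for surface, _ in pairs])
    y = np.stack([tissue.ravel() for _, tissue in pairs])
    return X, y

def train(X, y, lr=0.05, steps=500):
    """Fit a linear voxel-to-voxel map W by gradient descent on MSE."""
    W = np.zeros((X.shape[1], y.shape[1]))
    for _ in range(steps):
        grad = X.T @ (X @ W - y) / len(X)  # d(MSE)/dW
        W -= lr * grad
    return W

# Synthetic stand-in data: 8 "individuals", 4x4x4 voxel grids.
rng = np.random.default_rng(0)
pairs = [(rng.random((4, 4, 4)), rng.random((4, 4, 4))) for _ in range(8)]
X, y = make_training_set(pairs)
W = train(X, y)
pred = X @ W  # inferred internal-structure volumes for the training inputs
```

The point of the sketch is the data flow, not the model class: each individual contributes one co-registered input/target pair, and the loss is computed against the volumetric (first) image data, exactly as the claim specifies.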