US 11,941,724 B2
Model inference method and apparatus based on graphics rendering pipeline, and storage medium
Xindong Shi, Hangzhou (CN); Shu Wang, Hangzhou (CN); Jiangzheng Wu, Hangzhou (CN); and Mingwei Lu, Hangzhou (CN)
Assigned to HUAWEI TECHNOLOGIES CO., LTD., Shenzhen (CN)
Filed by HUAWEI TECHNOLOGIES CO., LTD., Guangdong (CN)
Filed on Feb. 7, 2022, as Appl. No. 17/665,678.
Application 17/665,678 is a continuation of application No. PCT/CN2020/100759, filed on Jul. 8, 2020.
Claims priority of application No. 201910730619.0 (CN), filed on Aug. 8, 2019.
Prior Publication US 2022/0156878 A1, May 19, 2022
Int. Cl. G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06T 1/20 (2006.01); G06T 1/60 (2006.01); G06T 3/40 (2006.01); G06T 7/40 (2017.01); G06V 10/82 (2022.01)
CPC G06T 1/20 (2013.01) [G06T 1/60 (2013.01); G06T 3/40 (2013.01); G06T 7/40 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A model inference method based on a graphics rendering pipeline, comprising:
obtaining an instruction stream in a render thread;
extracting and saving texture data information from the instruction stream, wherein the texture data information comprises texture data; and
inputting the texture data information to a graphics processing unit (GPU) rendering pipeline, wherein the GPU rendering pipeline performs inference on the texture data based on a GPU model to obtain an inference result of the texture data, and the GPU model is a model running in a GPU,
wherein the method further comprises:
before inputting the texture data information to the GPU rendering pipeline, obtaining and saving an ID of a texture in an activated state; and
obtaining the inference result by using the render thread, wherein the inference result is saved in a texture data format.
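The claimed flow (capture the render thread's instruction stream, extract and save texture data information, save the ID of the activated texture, dispatch the texture data to a GPU model for inference, and return the result in texture data format) can be sketched as follows. This is a minimal, CPU-only illustration: every name here (`TextureInfo`, `extract_texture_info`, `gpu_model_infer`, `run_inference`, the instruction-stream dictionary keys) is hypothetical and not drawn from the patent or from any real graphics API, and the "GPU model" is a stand-in per-texel transform so the flow is runnable without a GPU.

```python
from dataclasses import dataclass

# Hypothetical record for saved texture data information
# (texture ID, dimensions, and the texture data itself).
@dataclass
class TextureInfo:
    texture_id: int
    width: int
    height: int
    data: list  # flattened texel values

def extract_texture_info(instruction_stream):
    """Scan the render thread's instruction stream and save
    texture data information from texture-upload instructions."""
    infos = []
    for instr in instruction_stream:
        if instr.get("op") == "upload_texture":
            infos.append(TextureInfo(instr["id"], instr["w"],
                                     instr["h"], instr["data"]))
    return infos

def gpu_model_infer(texels):
    """Stand-in for the model running in the GPU rendering pipeline:
    a trivial per-texel transform so the sketch is testable on CPU."""
    return [t * 2 for t in texels]

def run_inference(instruction_stream, active_texture_id):
    # Before dispatching to the pipeline, save the ID of the
    # texture currently in the activated state so the render
    # thread's state can be restored afterwards.
    saved_id = active_texture_id
    results = []
    for info in extract_texture_info(instruction_stream):
        out = gpu_model_infer(info.data)
        # The inference result is kept in texture data format,
        # so the render thread can consume it directly.
        results.append(TextureInfo(info.texture_id, info.width,
                                   info.height, out))
    return results, saved_id

stream = [{"op": "upload_texture", "id": 7, "w": 2, "h": 1, "data": [1, 2]}]
results, saved = run_inference(stream, active_texture_id=3)
print(saved, results[0].data)  # → 3 [2, 4]
```

In a real implementation the extraction step would hook a graphics API command stream and the inference step would run as shader passes inside the GPU rendering pipeline; this sketch only mirrors the ordering of the claimed steps.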