CPC G06N 3/08 (2013.01) [G06F 8/433 (2013.01); G06F 8/458 (2013.01); G06N 3/04 (2013.01)] | 20 Claims |
1. A system, comprising:
    a machine learning accelerator (MLA) hardware configured to perform machine-learning operations according to native instructions;
    an interpreter computing module configured to:
        generate, based on virtual instructions, machine language instructions configured to be processed by a processing hardware implementing the interpreter computing module; and
        cause the processing hardware to perform machine-learning operations according to the machine language instructions; and
    a compiler computing module associated with the MLA hardware, the compiler computing module configured to:
        receive instructions for performing an inference using a machine-learning model;
        based on the received instructions:
            generate the native instructions configured to be processed by the MLA hardware, the native instructions specifying first machine-learning operations associated with performing the inference; and
            generate the virtual instructions configured to be processed by the interpreter computing module, the virtual instructions specifying second machine-learning operations associated with performing the inference.
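The claimed compiler behavior can be illustrated as a partitioning step: operations the MLA hardware supports are emitted as native instructions, and the remainder are emitted as virtual instructions for the interpreter to lower to host machine language. The following is a minimal sketch, not the patented implementation; every name (`MLA_NATIVE_OPS`, `compile_inference`, the op strings) is a hypothetical illustration.

```python
from dataclasses import dataclass, field

# Hypothetical set of operations the MLA hardware executes natively.
MLA_NATIVE_OPS = {"matmul", "conv2d", "relu"}

@dataclass
class CompiledProgram:
    # First machine-learning operations: processed by the MLA hardware.
    native_instructions: list = field(default_factory=list)
    # Second machine-learning operations: processed by the interpreter.
    virtual_instructions: list = field(default_factory=list)

def compile_inference(model_ops):
    """Split a model's operations into native and virtual instruction streams."""
    prog = CompiledProgram()
    for op in model_ops:
        if op in MLA_NATIVE_OPS:
            prog.native_instructions.append(("MLA", op))
        else:
            # Unsupported ops fall back to the interpreter, which lowers these
            # virtual instructions to host machine language at run time.
            prog.virtual_instructions.append(("VIRT", op))
    return prog

prog = compile_inference(["conv2d", "relu", "topk", "matmul", "softmax"])
print(prog.native_instructions)   # ops routed to the MLA hardware
print(prog.virtual_instructions)  # ops routed through the interpreter
```

The split mirrors the claim's two instruction streams: one program compiled twice, with each operation assigned to whichever execution path can run it.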