CPC G06F 8/65 (2013.01); 20 Claims

1. A system, comprising:
one or more processors and one or more memories, wherein the one or more memories have stored thereon instructions, which when executed by the one or more processors, cause the one or more processors to implement a feature deployment service of a provider network, wherein the feature deployment service is configured to:
store a feature processing unit (FPU), wherein the FPU comprises:
a model to implement a data processing feature; and
compute logic to implement the data processing feature; and
deploy the FPU to a plurality of remote edge devices of a client of the feature deployment service, wherein for a given edge device of the plurality of remote edge devices, a feature-independent portion of the compute logic is configured to be executed by the edge device using a data processing abstraction application programming interface (API) of an edge FPU engine of the edge device and a feature-specific portion of the compute logic is configured to be executed by the edge device using the data processing abstraction API of the edge FPU engine, and
wherein the data processing abstraction API of the edge FPU engine and a data processing abstraction API of a local FPU engine of the feature deployment service conform to a common API specification.
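The arrangement recited in claim 1, an FPU whose model and compute logic execute against the same data processing abstraction API whether the engine is the edge FPU engine or the service's local FPU engine, could be sketched roughly as below. This is an illustrative sketch only; every class, method, and parameter name in it (DataProcessingAPI, FeatureProcessingUnit, read_input, run_model, write_output, execute) is a hypothetical assumption, not language from the claims or any actual implementation.

```python
# Hypothetical sketch of the claimed arrangement: a feature processing unit
# (FPU) bundles a model with compute logic, and both the edge FPU engine and
# the service's local FPU engine expose the same data processing abstraction
# API, so the same compute logic can run unchanged in either place.
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List


class DataProcessingAPI(ABC):
    """Common API specification shared by the edge and local FPU engines (assumed)."""

    @abstractmethod
    def read_input(self) -> List[Dict[str, Any]]:
        """Return the next batch of records from the engine's data source."""

    @abstractmethod
    def run_model(self, model: Any, records: List[Dict[str, Any]]) -> List[Any]:
        """Apply the FPU's model to a batch of records."""

    @abstractmethod
    def write_output(self, results: List[Any]) -> None:
        """Deliver results to the engine's configured destination."""


class FeatureProcessingUnit:
    """An FPU: a model plus compute logic, both expressed against the common API."""

    def __init__(self, model: Any,
                 feature_specific_logic: Callable[["DataProcessingAPI", List[Any]], None]):
        self.model = model
        self.feature_specific_logic = feature_specific_logic

    def execute(self, engine: DataProcessingAPI) -> None:
        # Feature-independent portion: the same ingest/infer/emit loop on any engine.
        records = engine.read_input()
        results = engine.run_model(self.model, records)
        engine.write_output(results)
        # Feature-specific portion: per-feature behavior, still via the common API.
        self.feature_specific_logic(engine, results)
```

Because both engines conform to the common API specification, neither the feature-independent loop nor the feature-specific logic needs to change between execution at the feature deployment service and execution on a deployed edge device.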