CPC G06F 9/5016 (2013.01) [G06F 9/4881 (2013.01); G06F 2209/485 (2013.01); G06F 2209/5018 (2013.01)]
20 Claims

1. A system comprising:
one or more processors; and
one or more non-transitory computer-readable media storing computing instructions configured to run on the one or more processors and perform:
ingesting streaming events for processing by multiple models, wherein each of the multiple models performs a respective machine-learning inferencing;
mapping each of the streaming events to a model of the multiple models; and
storing each of the streaming events in a respective queue in a respective sequence store of multiple sequence stores, such that a respective one of the multiple models retrieves (i) a respective one of the streaming events in the respective sequence store associated with the respective one of the multiple models and (ii) a respective key corresponding to the respective one of the streaming events from a leaf store, to asynchronously perform the respective machine-learning inferencing based on content of the respective one of the streaming events, wherein the multiple models run independently and in parallel on multi-tenant threads.
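The following sketch is a rough, non-authoritative illustration of the flow recited in claim 1: streaming events are ingested, mapped to one of multiple models, and enqueued in a per-model sequence store, while each model independently retrieves its events together with the corresponding key from a leaf store and performs its inferencing asynchronously and in parallel. All names here (StreamingEvent, ingest, model_worker, sequence_stores, leaf_store, start_models) are hypothetical, and Python's queue and threading modules merely stand in for the claimed sequence stores, leaf store, and multi-tenant threads; the claim does not prescribe this implementation.

```python
import queue
import threading
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical event record; field names are illustrative, not from the claim.
@dataclass
class StreamingEvent:
    event_id: str
    model_name: str   # used to map the event to one of the multiple models
    payload: dict     # content on which the model's inferencing is based

# "Leaf store": maps an event id to its corresponding key (illustrative).
leaf_store: Dict[str, str] = {}

# One "sequence store" queue per model, so each model consumes its own
# events independently of the other models.
sequence_stores: Dict[str, "queue.Queue[StreamingEvent]"] = {}

def ingest(event: StreamingEvent, key: str) -> None:
    """Ingest a streaming event: record its key in the leaf store, map the
    event to a model, and enqueue it in that model's sequence store."""
    leaf_store[event.event_id] = key
    sequence_stores.setdefault(event.model_name, queue.Queue()).put(event)

def model_worker(model_name: str, infer: Callable[[dict, str], None]) -> None:
    """Worker for one model: retrieve (i) an event from the model's own
    sequence store and (ii) the corresponding key from the leaf store, then
    run that model's inferencing asynchronously with respect to ingestion."""
    store = sequence_stores.setdefault(model_name, queue.Queue())
    while True:
        event = store.get()            # blocks until an event is available
        key = leaf_store[event.event_id]
        infer(event.payload, key)      # placeholder for machine-learning inferencing
        store.task_done()

def start_models(models: Dict[str, Callable[[dict, str], None]]) -> None:
    """Run each model on its own thread, independently and in parallel
    (a stand-in for the claim's multi-tenant threads)."""
    for name, infer in models.items():
        threading.Thread(target=model_worker, args=(name, infer), daemon=True).start()
```

In this sketch, the per-model queue is what lets each model proceed at its own pace: ingestion never waits on inferencing, and one slow model does not block another.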