US 11,861,472 B2
Machine learning model abstraction layer for runtime efficiency
Rex Shang, Los Altos, CA (US); Dianhuan Lin, Sunnyvale, CA (US); Changsha Ma, Campbell, CA (US); Douglas A. Koch, Santa Clara, CA (US); Shashank Gupta, San Jose, CA (US); Parnit Sainion, Morgan Hill, CA (US); Visvanathan Thothathri, Santa Clara, CA (US); Narinder Paul, Sunnyvale, CA (US); and Howie Xu, Palo Alto, CA (US)
Assigned to Zscaler, Inc., San Jose, CA (US)
Filed by Zscaler, Inc., San Jose, CA (US)
Filed on Sep. 29, 2022, as Appl. No. 17/956,088.
Application 17/956,088 is a continuation of application No. 17/024,762, filed on Sep. 18, 2020, granted, now 11,475,368.
Application 17/024,762 is a continuation in part of application No. 16/377,129, filed on Apr. 5, 2019, granted, now 11,669,779.
Application 17/024,762 is a continuation in part of application No. 16/902,759, filed on Jun. 16, 2020, granted, now 11,755,726.
Prior Publication US 2023/0018188 A1, Jan. 19, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 21/00 (2013.01); G06N 20/00 (2019.01); G06F 16/901 (2019.01); G06F 18/214 (2023.01)
CPC G06N 20/00 (2019.01) [G06F 16/9027 (2019.01); G06F 18/214 (2023.01)] 18 Claims
OG exemplary drawing
 
1. A non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a node in a cloud-based system to perform steps of:
receiving a trained machine learning model that has been processed with training information removed therefrom, wherein the training information is utilized in training of the trained machine learning model, and wherein the training information is not relevant to runtime, including features not used in the trained machine learning model;
monitoring traffic, inline at the node, including processing the traffic with the trained machine learning model;
obtaining a verdict on the traffic based on the trained machine learning model; and
performing an action on the traffic based on the verdict.
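The claimed flow can be illustrated with a minimal sketch: a trained model is slimmed by removing training-only information (including features the model does not use), and the resulting runtime model is applied inline to score traffic, produce a verdict, and select an action. All names here (`strip_training_info`, `score`, `verdict_and_action`, the dict layout of the model) are hypothetical illustrations, not taken from the patent or any Zscaler implementation.

```python
# Hypothetical sketch of the claimed steps. The "trained model" is modeled
# as a dict carrying weights, training-only metadata, and a list of the
# features actually used at runtime.

def strip_training_info(trained_model):
    """Return a runtime copy with training information removed,
    including weights for features not used by the model."""
    used = set(trained_model["used_features"])
    # Keep only what inference needs; drop training metadata, statistics,
    # and any features that do not contribute to the runtime verdict.
    return {
        "weights": {f: w for f, w in trained_model["weights"].items() if f in used},
        "bias": trained_model["bias"],
    }

def score(runtime_model, features):
    """Linear scoring over only the retained features."""
    s = runtime_model["bias"]
    for f, w in runtime_model["weights"].items():
        s += w * features.get(f, 0.0)
    return s

def verdict_and_action(runtime_model, traffic_features, threshold=0.5):
    """Obtain a verdict on the traffic inline and map it to an action."""
    verdict = "malicious" if score(runtime_model, traffic_features) > threshold else "benign"
    action = "block" if verdict == "malicious" else "allow"
    return verdict, action
```

Under these assumptions, a node would call `strip_training_info` once on receipt of the model, then invoke `verdict_and_action` per monitored flow, so runtime memory and compute scale with only the features the model actually uses.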