US 12,132,791 B2
Communication protocol, and a method thereof for accelerating artificial intelligence processing tasks
Moshe Tanach, Bet Herut (IL); Yossi Kasus, Haifa (IL); Lior Khermosh, Givatayim (IL); and Udi Sivan, Zikhron Yaakov (IL)
Assigned to NEUREALITY LTD., Caesarea (IL)
Filed by NeuReality Ltd., Caesarea (IL)
Filed on Dec. 22, 2022, as Appl. No. 18/145,516.
Application 18/145,516 is a continuation of application No. 17/387,536, filed on Jul. 28, 2021, granted, now 11,570,257.
Claims priority of provisional application 63/070,054, filed on Aug. 25, 2020.
Prior Publication US 2023/0130964 A1, Apr. 27, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. H04L 67/148 (2022.01); G06F 9/48 (2006.01); G06F 9/50 (2006.01); G06F 15/173 (2006.01); H04L 67/133 (2022.01)
CPC H04L 67/148 (2013.01) [G06F 9/4806 (2013.01); G06F 9/505 (2013.01); G06F 15/17331 (2013.01); H04L 67/133 (2022.05)] 11 Claims
OG exemplary drawing
 
1. A method for communicating artificial intelligence (AI) tasks for server chaining, comprising:
establishing a first connection between an AI client and a first AI server;
encapsulating a request to process an AI task in at least one request data frame compliant with a communication protocol;
transporting the at least one request data frame over a network using a transport protocol over the first connection to the first AI server, wherein the first AI server spans the AI task over at least one second AI server, wherein the transport protocol provisions transport characteristics of the AI task and the transport protocol is different from the communication protocol, wherein the AI task includes processing of a single compute graph, thereby allowing spanning of the processing of the compute graph over one or more AI servers;
establishing a second connection between the first AI server and the at least one second AI server;
transporting the at least one request data frame using the transport protocol over the second connection;
defining a plurality of queues to support messages exchanged between the AI client and the first AI server; and
defining a plurality of queues to support messages exchanged between the first AI server and each of the at least one second AI server, wherein each of the plurality of queues is allowed to differentiate among users, flows, AI tasks, and service priorities.
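As one illustration only (the claim recites a method, not an implementation), the chaining and per-class queueing steps above might be sketched as follows. All names here (RequestDataFrame, AIServer, the frame fields, and the queue keying) are hypothetical and chosen for readability; the patent does not specify a frame layout or queue structure.

```python
from dataclasses import dataclass
from queue import Queue

# Hypothetical request data frame encapsulating an AI task request;
# the claim only requires that it be compliant with the communication protocol.
@dataclass
class RequestDataFrame:
    task_id: int
    user: str       # queues may differentiate users...
    flow: str       # ...flows...
    priority: int   # ...and service priorities
    payload: bytes  # serialized single compute graph

class AIServer:
    """Sketch of an AI server that may span a task to chained downstream servers."""
    def __init__(self, name, downstream=None):
        self.name = name
        self.downstream = downstream or []
        # One queue per (user, flow, priority) so traffic classes stay separate.
        self.queues = {}

    def enqueue(self, frame):
        key = (frame.user, frame.flow, frame.priority)
        self.queues.setdefault(key, Queue()).put(frame)
        # "Span" the task: forward the same request frame over the second
        # connection to each chained second AI server.
        for server in self.downstream:
            server.enqueue(frame)

# Usage: AI client -> first AI server -> second AI server chain.
second = AIServer("second")
first = AIServer("first", downstream=[second])
frame = RequestDataFrame(task_id=1, user="alice", flow="vision",
                         priority=0, payload=b"compute-graph")
first.enqueue(frame)
```

In this sketch the keyed-queue dictionary stands in for the claimed pluralities of queues on each connection, and the recursive `enqueue` forwarding stands in for spanning the compute graph over one or more further AI servers; a real system would carry the frames over a transport protocol (e.g. an RDMA-style transport) rather than in-process calls.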