US 12,355,882 B2
Machine learning based system for optimized central processing unit (CPU) utilization in data transformation
Elvis Nyamwange, Little Elm, TX (US); Sailesh Vezzu, Hillsborough, NJ (US); Amer Ali, Jersey City, NJ (US); Rahul Shashidhar Phadnis, Charlotte, NC (US); Rahul Yaksh, Austin, TX (US); Hari Vuppala, Concord, NC (US); Pratap Dande, Saint Johns, FL (US); Brian Neal Jacobson, Los Angeles, CA (US); and Erik Dahl, Newark, DE (US)
Assigned to BANK OF AMERICA CORPORATION, Charlotte, NC (US)
Filed by BANK OF AMERICA CORPORATION, Charlotte, NC (US)
Filed on Jun. 13, 2023, as Appl. No. 18/209,022.
Prior Publication US 2024/0421994 A1, Dec. 19, 2024
Int. Cl. H04L 9/32 (2006.01); G06N 20/00 (2019.01)
CPC H04L 9/32 (2013.01) [G06N 20/00 (2019.01)] 20 Claims
OG exemplary drawing
 
17. A method for a machine learning based system for optimized CPU utilization in data transformation, the method comprising:
receiving a new data segment;
retrieving characteristics of the new data segment;
determining, using a trained machine learning model, an encryption algorithm and a compression algorithm for implementation on the new data segment based on at least the characteristics of the new data segment;
determining, using the trained machine learning model, an order of implementation associated with the implementation of the encryption algorithm and the compression algorithm; and
implementing the encryption algorithm and the compression algorithm on the new data segment in the determined order of implementation,
wherein the encryption algorithm is determined based on a minimum number of CPU cycles needed to encrypt the new data segment,
wherein the compression algorithm is determined based on a minimum number of CPU cycles needed to compress the new data segment, and
wherein the order of implementation is determined based on a minimum number of CPU cycles needed to compress and encrypt the new data segment.
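The claimed method can be sketched as a small pipeline: extract characteristics from a data segment, let a model choose the encryption algorithm, the compression algorithm, and their order, then apply them. The patent does not disclose concrete algorithms, features, or model internals, so everything below is an illustrative assumption: the XOR "cipher" is a non-secure placeholder for a real encryption routine, the heuristic in `model_select` stands in for the trained machine learning model, and the candidate lists are hypothetical.

```python
import zlib
import lzma

def xor_cipher(data: bytes, key: int = 0x5A) -> bytes:
    # Placeholder "encryption": a self-inverse XOR stream.
    # NOT cryptographically secure; stands in for a real cipher.
    return bytes(b ^ key for b in data)

# Hypothetical candidate pools the model chooses from.
COMPRESSORS = {"zlib": zlib.compress, "lzma": lzma.compress}
ENCRYPTORS = {"xor": xor_cipher}

def segment_characteristics(segment: bytes) -> dict:
    # Minimal feature set: size plus a crude byte-diversity
    # proxy for entropy (more distinct bytes -> less compressible).
    return {"size": len(segment), "distinct_bytes": len(set(segment))}

def model_select(chars: dict) -> tuple:
    # Stand-in for the trained ML model: picks the combination
    # assumed to need the fewest CPU cycles. Here, a fixed heuristic
    # favors the cheaper compressor for low-diversity data and
    # compresses before encrypting (smaller input to the cipher).
    comp = "zlib" if chars["distinct_bytes"] < 64 else "lzma"
    return "xor", comp, "compress_then_encrypt"

def transform(segment: bytes) -> bytes:
    # Steps of claim 17: characteristics -> model decision -> apply
    # encryption and compression in the determined order.
    chars = segment_characteristics(segment)
    enc_name, comp_name, order = model_select(chars)
    encrypt = ENCRYPTORS[enc_name]
    compress = COMPRESSORS[comp_name]
    if order == "compress_then_encrypt":
        return encrypt(compress(segment))
    return compress(encrypt(segment))
```

Compress-then-encrypt is the usual default in practice, since ciphertext is high-entropy and compresses poorly; the claim's point is that the model weighs the CPU-cycle cost of each ordering per segment rather than fixing one.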