US 11,943,294 B1
Storage medium and compression for object stores
Christoph Bartenstein, Seattle, WA (US); Brad E Marshall, Bainbridge Island, WA (US); and Andrew Kent Warfield, Vancouver (CA)
Assigned to Amazon Technologies, Inc., Seattle, WA (US)
Filed by Amazon Technologies, Inc., Seattle, WA (US)
Filed on Sep. 30, 2020, as Appl. No. 17/039,938.
Int. Cl. H04L 67/1097 (2022.01); G06N 20/00 (2019.01); H04L 67/5651 (2022.01); H04L 67/75 (2022.01); H04L 69/04 (2022.01)
CPC H04L 67/1097 (2013.01) [G06N 20/00 (2019.01); H04L 67/5651 (2022.05); H04L 67/75 (2022.05); H04L 69/04 (2013.01)] 19 Claims
OG exemplary drawing
 
1. A system, comprising:
a plurality of computing devices, respectively comprising at least one processor and a memory, the plurality of computing devices configured to implement an object-based data storage system of a cloud-based provider network that stores a plurality of objects, wherein the object-based data storage system is configured to:
monitor access to the plurality of objects to determine one or more characteristics of a first object of the plurality of objects stored in the object-based data storage system of the cloud-based provider network;
generate a compression decision for the first object using a machine learning model, wherein to generate the compression decision, the machine learning model:
accepts as input the one or more determined characteristics of the first object to incorporate a prediction of future access to the first object stored in the object-based data storage system of the cloud-based provider network as a basis for the compression decision; and
wherein the compression decision generated by the machine learning model includes both an indication to compress the first object and a compression algorithm to be performed on the first object;
generate a compressed version of the first object according to the indication in the compression decision to perform compression on the first object using the compression algorithm in the compression decision generated by the machine learning model; and
select a different location to store the compressed version of the first object that provides different access performance than a current location of the first object.