US 11,934,672 B2
Cached workload management for a multi-tenant host
Kushal S. Patel, Pune (IN); Ankur Srivastava, Pune (IN); Subhojit Roy, Pune (IN); and Sarvesh S. Patel, Pune (IN)
Assigned to International Business Machines Corporation, Armonk, NY (US)
Filed by International Business Machines Corporation, Armonk, NY (US)
Filed on Aug. 26, 2021, as Appl. No. 17/412,361.
Prior Publication US 2023/0060575 A1, Mar. 2, 2023
Int. Cl. G06F 12/00 (2006.01); G06F 3/06 (2006.01)
CPC G06F 3/0635 (2013.01) [G06F 3/0611 (2013.01); G06F 3/0644 (2013.01); G06F 3/0659 (2013.01); G06F 3/067 (2013.01)] 20 Claims
OG exemplary drawing
 
15. A computer system for improving cached workload management, the computer system comprising one or more processors, one or more computer readable tangible storage devices, and program instructions stored on at least one of the one or more computer readable tangible storage devices for execution by at least one of the one or more processors, the program instructions executable to:
obtain, by a host in a system comprising the host and a storage system, information about classes of applications accessing the storage system;
determine, by the host, input/output queues dedicated to respective ones of the classes;
send to the storage system, by the host, the information about the classes;
based on the information about the classes, divide, by the storage system, a dynamic random-access memory (DRAM) cache in the storage system into multiple cache partitions, wherein the multiple cache partitions are dedicated to the respective ones of the classes;
create, by the host, the input/output queues and set bit flags for respective ones of the input/output queues;
pump, by the host, inputs/outputs coming from the respective ones of the classes to the respective ones of the input/output queues;
direct, by the storage system, the input/output queues to respective ones of the multiple cache partitions of the DRAM cache in the storage system;
poll, by the host, a hit/miss status of a respective one of the input/output queues;
in response to a respective one of the input/output queues being mapped as cache_disabled, bypass, by the storage system, the multiple cache partitions of the DRAM cache in the storage system for the respective one of the input/output queues.
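
The host-side flow recited in claim 15 (obtaining class information, creating one input/output queue per class with a per-queue bit flag, and pumping each class's I/Os into its dedicated queue) can be illustrated with a minimal Python sketch. All names here (IOQueue, Host, the example class labels, and the cache_enabled field) are hypothetical stand-ins for the host logic the claim describes and are not taken from the patent's implementation.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class IOQueue:
    """Host-side submission queue dedicated to one application class."""
    app_class: str
    cache_enabled: bool          # per-queue bit flag set by the host
    pending: deque = field(default_factory=deque)

    def submit(self, io_request):
        # "Pump" an I/O from an application of this class into its queue.
        self.pending.append(io_request)

class Host:
    def __init__(self, class_cache_flags):
        # Obtain information about the application classes and create one
        # input/output queue per class, setting its bit flag.
        self.queues = {
            app_class: IOQueue(app_class, cache_enabled=flag)
            for app_class, flag in class_cache_flags.items()
        }

    def class_info(self):
        # Information about the classes that the host sends to the storage
        # system so it can divide its DRAM cache into matching partitions.
        return {c: q.cache_enabled for c, q in self.queues.items()}

    def pump(self, app_class, io_request):
        # Route each incoming I/O to the queue dedicated to its class.
        self.queues[app_class].submit(io_request)

if __name__ == "__main__":
    host = Host({"latency_sensitive": True,   # cached class
                 "background": False})        # mapped as cache_disabled
    host.pump("latency_sensitive", ("read", 0x1000, 4096))
    host.pump("background", ("write", 0x8000, 65536))
    print(host.class_info())   # {'latency_sensitive': True, 'background': False}
```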
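On the storage side, the claim divides the DRAM cache into per-class partitions, directs each queue to its partition, exposes a hit/miss status the host can poll, and bypasses the cache for any queue mapped as cache_disabled. The sketch below illustrates that idea under two assumptions not stated in the claim: the cache is split equally among cache-enabled classes, and each partition is a simple LRU map. The names CachePartition, StorageSystem, and poll_hit_miss are invented for illustration.

```python
from collections import OrderedDict

class CachePartition:
    """One slice of the DRAM cache dedicated to an application class,
    modeled as a small LRU map from logical block address to data."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, lba, backend):
        if lba in self.blocks:               # cache hit
            self.hits += 1
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        self.misses += 1                     # cache miss: fill from the backend
        data = backend(lba)
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        return data

class StorageSystem:
    def __init__(self, total_cache_blocks, class_info):
        # Divide the DRAM cache into partitions dedicated to the cache-enabled
        # classes reported by the host (equal split assumed for illustration).
        cached = [c for c, enabled in class_info.items() if enabled]
        share = total_cache_blocks // max(len(cached), 1)
        self.partitions = {c: CachePartition(share) for c in cached}
        self.class_info = class_info

    def handle(self, app_class, lba, backend):
        # Direct the I/O to the partition of its class, or bypass the DRAM
        # cache entirely when the queue is mapped as cache_disabled.
        if not self.class_info.get(app_class, False):
            return backend(lba)              # cache bypass path
        return self.partitions[app_class].read(lba, backend)

def poll_hit_miss(storage, app_class):
    """Hit ratio the host could poll for one class's queue/partition."""
    part = storage.partitions.get(app_class)
    if part is None:
        return None                          # cache_disabled queue: no cache stats
    total = part.hits + part.misses
    return part.hits / total if total else 0.0

if __name__ == "__main__":
    backend = lambda lba: b"\x00" * 4096     # stand-in for the backing media
    storage = StorageSystem(1024, {"latency_sensitive": True, "background": False})
    storage.handle("latency_sensitive", 0x10, backend)
    storage.handle("latency_sensitive", 0x10, backend)   # second access hits
    storage.handle("background", 0x20, backend)           # bypasses the cache
    print(poll_hit_miss(storage, "latency_sensitive"))    # 0.5
```

In such a sketch, the polled hit ratio is what the host could use when deciding how a class's queue should be flagged; a class whose partition rarely hits might, for example, be mapped as cache_disabled so its I/Os take the bypass path of the final limitation.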