US 11,868,264 B2
Sector cache for compression
Abhishek R. Appu, El Dorado Hills, CA (US); Altug Koker, El Dorado Hills, CA (US); Joydeep Ray, Folsom, CA (US); David Puffer, Tempe, AZ (US); Prasoonkumar Surti, Folsom, CA (US); Lakshminarayanan Striramassarma, El Dorado Hills, CA (US); Vasanth Ranganathan, El Dorado Hills, CA (US); Kiran C. Veernapu, Bangalore (IN); Balaji Vembu, Folsom, CA (US); and Pattabhiraman K, Bangalore (IN)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Feb. 13, 2023, as Appl. No. 18/168,157.
Application 18/168,157 is a continuation of application No. 17/400,415, filed on Aug. 12, 2021, granted, now 11,593,269.
Application 17/400,415 is a continuation of application No. 17/191,473, filed on Mar. 3, 2021, granted, now 11,586,548.
Application 17/191,473 is a continuation of application No. 17/026,264, filed on Sep. 20, 2020, granted, now 11,263,141, issued on Mar. 1, 2022.
Application 17/026,264 is a continuation of application No. 16/702,073, filed on Dec. 3, 2019, granted, now 10,783,084.
Application 16/702,073 is a continuation of application No. 15/477,058, filed on Apr. 1, 2017, granted, now 10,503,652, issued on Dec. 10, 2019.
Prior Publication US 2023/0259458 A1, Aug. 17, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 12/0877 (2016.01); G06F 12/0802 (2016.01); G06F 12/0855 (2016.01); G06F 12/0806 (2016.01); G06F 12/0846 (2016.01); G06F 12/0868 (2016.01); G06T 1/60 (2006.01); G06F 12/126 (2016.01); G06F 12/0893 (2016.01)
CPC G06F 12/0877 (2013.01) [G06F 12/0802 (2013.01); G06F 12/0806 (2013.01); G06F 12/0848 (2013.01); G06F 12/0855 (2013.01); G06F 12/0868 (2013.01); G06F 12/126 (2013.01); G06T 1/60 (2013.01); G06F 12/0893 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A general-purpose graphics processor comprising:
a memory interface;
a cache memory communicatively coupled with the memory interface;
a processing resource communicatively coupled with the memory interface and the cache memory, the processing resource to perform a general-purpose compute operation; and
circuitry communicatively coupled with the cache memory and the memory interface, the circuitry to:
compress compute data at cache sector granularity, wherein the cache sector granularity is a sub-block granularity, the compression of the compute data including compression of multiple cache lines of the compute data associated with a sector before a write of the compressed compute data associated with the sector via the memory interface,
in association with a read of the compressed compute data associated with the multiple cache lines via the memory interface, decompress the compressed compute data to generate decompressed compute data, and
provide the decompressed compute data to the processing resource for performance of the general-purpose compute operation.
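The claim describes a write path that compresses all cache lines belonging to a sector as a single unit before the sector is written out through the memory interface, and a read path that decompresses the sector before handing the data back to the processing resource. The sketch below is a minimal software illustration of that sector-granularity round trip, not the patented circuitry: the cache-line size, the number of lines per sector, the structure and function names, and the toy run-length codec are all assumptions chosen for clarity, since the patent does not specify a particular compression scheme.

```c
/*
 * Hypothetical illustration only: line/sector sizes, names, and the
 * run-length codec are assumptions, not taken from the patent.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define CACHE_LINE_BYTES  64              /* assumed cache line size */
#define LINES_PER_SECTOR  4               /* assumed sector width    */
#define SECTOR_BYTES (CACHE_LINE_BYTES * LINES_PER_SECTOR)

/* One cache sector: several contiguous cache lines handled as a unit. */
typedef struct {
    uint8_t lines[LINES_PER_SECTOR][CACHE_LINE_BYTES];
} cache_sector_t;

/* Toy byte-level run-length encoder standing in for the (unspecified)
 * hardware codec; returns the compressed size in bytes. */
static size_t rle_compress(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t value = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == value && run < 255)
            run++;
        dst[out++] = (uint8_t)run;
        dst[out++] = value;
        i += run;
    }
    return out;
}

static size_t rle_decompress(const uint8_t *src, size_t n, uint8_t *dst)
{
    size_t out = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        memset(dst + out, src[i + 1], src[i]);
        out += src[i];
    }
    return out;
}

/* Write path: compress all cache lines of the sector as one block before
 * the sector is written out through the memory interface. */
static size_t sector_writeback(const cache_sector_t *sector, uint8_t *mem)
{
    return rle_compress((const uint8_t *)sector->lines, SECTOR_BYTES, mem);
}

/* Read path: decompress the whole sector so the processing resource sees
 * plain, uncompressed cache lines again. */
static void sector_fill(const uint8_t *mem, size_t comp_bytes,
                        cache_sector_t *sector)
{
    rle_decompress(mem, comp_bytes, (uint8_t *)sector->lines);
}

int main(void)
{
    cache_sector_t sector = {0};
    uint8_t backing[2 * SECTOR_BYTES];    /* worst-case RLE expansion */

    memset(sector.lines[0], 0xAB, CACHE_LINE_BYTES);  /* some compute data */

    size_t comp = sector_writeback(&sector, backing);
    printf("sector: %d B raw -> %zu B compressed\n", SECTOR_BYTES, comp);

    cache_sector_t restored;
    sector_fill(backing, comp, &restored);
    printf("round trip %s\n",
           memcmp(&sector, &restored, sizeof sector) == 0 ? "ok" : "FAILED");
    return 0;
}
```

Operating on whole sectors rather than individual cache lines gives the codec a larger, contiguous block to work with, which is what allows multiple cache lines of compute data to be compressed together before a single write via the memory interface, as the claim recites.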