The data access patterns of modern workloads are increasingly non-uniform, which makes it hard to design a memory hierarchy with rigid design parameters that performs well across a wide range of workloads. This dissertation proposes and evaluates a novel architecture for the on-chip memory hierarchy, called the Amoeba Cache, that dynamically adapts to the requirements of the application. We propose a design that supports a variable number of cache blocks, each of a different granularity. Compared to a fixed-granularity cache, the Amoeba Cache improves cache utilization to 90% to 99% for most applications, reduces the miss rate by up to 73% at the L1 level and up to 88% at the LLC level, and reduces miss bandwidth by up to 84% at the L1 and 92% at the LLC. The Amoeba Cache also reduces on-chip memory hierarchy energy by as much as 36% and improves performance by as much as 50%.
Copyright is held by the author.
The author granted permission for the file to be printed and for the text to be copied and pasted.
Thesis advisor: Shriraman, Arrvindh