This paper introduces ReLMXEL, a multi-agent reinforcement learning framework embedded within the memory controller to dynamically optimize its parameters for reduced latency and energy consumption. ReLMXEL uses reward decomposition and detailed memory behavior metrics to guide the RL agents' decision-making process. Experiments across various workloads show consistent performance improvements over baseline memory controller configurations, with the added benefit of explainable control decisions.
Achieve significant latency and energy savings in memory systems with an RL-based controller that also provides insights into *why* its decisions are optimal.
Reducing latency and energy consumption is critical to improving the efficiency of memory systems in modern computing. This work introduces ReLMXEL (Reinforcement Learning for Memory Controller with Explainable Energy and Latency Optimization), an explainable multi-agent online reinforcement learning framework that dynamically optimizes memory controller parameters using reward decomposition. ReLMXEL operates within the memory controller, leveraging detailed memory behavior metrics to guide decision-making. Experimental evaluations across diverse workloads demonstrate consistent performance gains over baseline configurations, with refinements driven by workload-specific memory access behavior. By incorporating explainability into the learning process, ReLMXEL not only enhances performance but also increases the transparency of control decisions, paving the way for more accountable and adaptive memory system designs.
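The abstract's central idea, reward decomposition that keeps latency and energy contributions separate so control decisions remain explainable, can be illustrated with a minimal sketch. This is not the paper's implementation: the action set, state abstraction, learner (a single epsilon-greedy bandit standing in for the multi-agent setup), and all names below are hypothetical assumptions for illustration only.

```python
import random

# Hypothetical memory-controller settings the agent chooses among
# (e.g. scheduling-policy aggressiveness levels); not from the paper.
ACTIONS = ["conservative", "balanced", "aggressive"]

def decomposed_reward(latency_ns, energy_nj, w_lat=0.5, w_en=0.5):
    """Reward decomposition: keep per-objective terms separate so each
    component can later be reported to explain a decision."""
    return {"latency": w_lat * (-latency_ns), "energy": w_en * (-energy_nj)}

class BanditAgent:
    """Tiny epsilon-greedy online learner over a single state, a
    stand-in for the paper's multi-agent RL framework."""
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = {a: 0.0 for a in ACTIONS}  # total-reward estimates
        # Per-component estimates retained for explainability.
        self.parts = {a: {"latency": 0.0, "energy": 0.0} for a in ACTIONS}
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        return max(self.q, key=self.q.get)  # exploit

    def update(self, action, reward_parts):
        total = sum(reward_parts.values())
        self.q[action] += self.alpha * (total - self.q[action])
        for k, v in reward_parts.items():
            self.parts[action][k] += self.alpha * (v - self.parts[action][k])

    def explain(self, action):
        # Per-component estimates expose the latency/energy trade-off
        # behind a choice, rather than a single opaque scalar.
        return dict(self.parts[action])
```

Because each objective's contribution is tracked separately, `explain("balanced")` can report roughly how much of an action's value comes from latency savings versus energy savings, which is the kind of transparency the abstract describes at a much smaller scale.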