Nezha, a distributed key-value store, addresses the I/O overhead caused by overlapping persistence operations between consensus protocols and storage engines. It achieves this by integrating key-value separation with Raft, redesigning the persistence strategy at the operation level, and incorporating leveled garbage collection. Experiments show that Nezha improves throughput by 460.2% for put, 12.5% for get, and 72.6% for scan operations compared to existing approaches.
Nezha shatters I/O bottlenecks in distributed key-value stores by decoupling key-value persistence within Raft, yielding up to 4.6x throughput gains.
Distributed key-value stores are widely adopted to support elastic big data applications, leveraging purpose-built consensus algorithms like Raft to ensure data consistency. However, through systematic analysis, we reveal a critical performance issue in such consistent stores: overlapping persistence operations between consensus protocols and underlying storage engines result in significant I/O overhead. To address this issue, we present Nezha, a prototype distributed storage system that integrates key-value separation with Raft to provide scalable throughput under strong consistency guarantees. Nezha redesigns the persistence strategy at the operation level and incorporates leveled garbage collection, significantly improving read and write performance while preserving Raft's safety properties. Experimental results demonstrate that, on average, Nezha achieves throughput improvements of 460.2%, 12.5%, and 72.6% for put, get, and scan operations, respectively.
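To make the core idea concrete, below is a minimal sketch of key-value separation, the general technique (popularized by WiscKey-style designs) that Nezha integrates with Raft: values are appended to a standalone value log, while the index retains only small (offset, length) pointers, so the sorted index stays compact and value bytes are persisted once. All class and method names here are hypothetical illustrations, not Nezha's actual implementation.

```python
import os
import tempfile

class KVSeparatedStore:
    """Toy key-value store with key-value separation (illustrative only)."""

    def __init__(self, vlog_path):
        self.index = {}                      # key -> (offset, length) pointer
        self.vlog = open(vlog_path, "a+b")   # append-only value log on disk

    def put(self, key, value):
        data = value.encode()
        self.vlog.seek(0, os.SEEK_END)
        offset = self.vlog.tell()
        self.vlog.write(data)
        self.vlog.flush()                    # value bytes hit the log exactly once
        self.index[key] = (offset, len(data))

    def get(self, key):
        offset, length = self.index[key]
        self.vlog.seek(offset)
        return self.vlog.read(length).decode()

store = KVSeparatedStore(tempfile.mktemp())
store.put("alpha", "first value")
store.put("beta", "second value")
print(store.get("alpha"))   # -> first value
```

In a replicated setting, keeping only pointers in the index is what lets a system avoid writing full values in both the consensus log and the storage engine; a real design must also garbage-collect stale value-log entries, which is where Nezha's leveled garbage collection comes in.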