Fast RDMA-based Ordered Key-Value Store using Remote Learned Cache
Research question: how to leverage RDMA for ordered key-value stores.
Current approaches: cache the (tree-based) index at clients to reduce RDMA operations. Limitations: (1) the tree-based index can be large, so the cache suffers unavoidable capacity misses; (2) traversing the cached tree aggravates random memory accesses and further increases end-to-end latency; (3) updates to the tree-based index may recursively invalidate cache entries and cause false invalidations due to path sharing.
Idea: leverage ML models as the (client-side) RDMA-based cache for the (server-side) tree-based index.
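A minimal sketch of the learned-cache idea (an illustration, not the paper's implementation): the client fits a simple model from keys to positions in the server's sorted key array, with a max-error bound, so a lookup becomes one bounded read instead of a tree traversal. The linear model and the slice standing in for an RDMA READ are assumptions made for this example.

```python
import bisect

class LearnedCache:
    """Client-side model predicting a key's position in a sorted array."""

    def __init__(self, keys):
        # Fit position ~ slope * key + intercept by least squares over
        # (key, index) pairs, then record the maximum prediction error.
        n = len(keys)
        mean_k = sum(keys) / n
        mean_i = (n - 1) / 2
        num = sum((k - mean_k) * (i - mean_i) for i, k in enumerate(keys))
        den = sum((k - mean_k) ** 2 for k in keys) or 1.0
        self.slope = num / den
        self.intercept = mean_i - self.slope * mean_k
        self.err = max(abs(i - (self.slope * k + self.intercept))
                       for i, k in enumerate(keys))
        self.n = n

    def predict_range(self, key):
        # Return [lo, hi): the slice a single bounded read must cover.
        pos = self.slope * key + self.intercept
        lo = max(0, int(pos - self.err))
        hi = min(self.n, int(pos + self.err) + 2)
        return lo, hi

def lookup(cache, server_keys, key):
    lo, hi = cache.predict_range(key)
    chunk = server_keys[lo:hi]       # stands in for one RDMA READ
    j = bisect.bisect_left(chunk, key)
    if j < len(chunk) and chunk[j] == key:
        return lo + j                # position of the key on the server
    return None
```

Because the model carries an error bound, a prediction never misses a key that is present; the cost of model inaccuracy is only a larger read range.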
Challenges: Although using ML models as the index is efficient (a few floating-point/integer operations per lookup) and cheap (a small memory footprint) for static workloads (e.g., gets), it is also notoriously slow (frequent model retraining) and costly (keeping data in sorted order) for dynamic workloads (e.g., inserts) => hybrid architecture that handles static and dynamic workloads separately.
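One common way to realize such a hybrid, sketched here as an assumption rather than the paper's exact mechanism: the model-indexed part stays an immutable sorted snapshot, inserts land in a small dynamic buffer, and a background merge periodically folds the buffer in and retrains. The class and method names are hypothetical; the snapshot search uses plain binary search where the learned model would sit.

```python
import bisect

class HybridStore:
    """Illustrative hybrid: static sorted snapshot + dynamic delta buffer."""

    def __init__(self, sorted_keys):
        self.snapshot = list(sorted_keys)  # static part, model-indexed
        self.delta = set()                 # dynamic part: recent inserts

    def insert(self, key):
        self.delta.add(key)                # no retraining on the hot path

    def get(self, key):
        if key in self.delta:              # check the dynamic part first
            return True
        # Binary search stands in for the learned-model lookup here.
        i = bisect.bisect_left(self.snapshot, key)
        return i < len(self.snapshot) and self.snapshot[i] == key

    def merge(self):
        # Periodic compaction: fold the delta into the snapshot; in the
        # hybrid design this is where the model would be retrained.
        self.snapshot = sorted(set(self.snapshot) | self.delta)
        self.delta = set()
```

This split keeps gets fast on the large static portion while confining the expensive work (re-sorting, retraining) to infrequent merges off the critical path.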