DiSC at SIGMOD 2024

Last week at #SIGMOD2024, #DiSC participated with two research papers in the main conference, two workshop papers at #DBTest, and one invited talk at the 20th edition of #DAMON.

Papon presented our work (joint with Taishan Chen and Shuo Zhang) on “CAVE: Concurrency-Aware Graph Processing on SSDs”, which proposes new variations of popular graph traversal algorithms designed to better exploit the parallelism that modern SSDs natively support. We apply our techniques to single-node out-of-core systems and show up to two orders of magnitude performance improvement over the state of the art. The core idea is to identify subgraphs that can be traversed in parallel (for DFS-style traversals) or “fronts of nodes” that can all be accessed in parallel (for BFS-style traversals). Throughout this work, it is crucial to carefully tune the system to use the device’s parallelism without overburdening it with too many concurrent accesses, as we have shown previously.
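To give a flavor of the BFS-style variant, here is a minimal Python sketch of the idea (not CAVE’s actual implementation): all nodes in the current frontier are fetched from storage concurrently, and the worker count is exactly the knob that must be tuned to the SSD’s internal parallelism. The function read_adjacency_list stands in for a blocking SSD read and, like num_workers, is purely illustrative.

```python
# Minimal sketch of a concurrency-aware BFS in the spirit of CAVE.
# `read_adjacency_list` stands in for one blocking SSD read per node;
# it and `num_workers` are illustrative, not CAVE's actual API.
from concurrent.futures import ThreadPoolExecutor

def concurrent_bfs(source, read_adjacency_list, num_workers=32):
    visited = {source}
    frontier = [source]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        while frontier:
            # Issue one I/O per frontier node; the SSD can serve
            # up to `num_workers` of them in parallel.
            neighbor_lists = pool.map(read_adjacency_list, frontier)
            next_frontier = []
            for neighbors in neighbor_lists:
                for v in neighbors:
                    if v not in visited:
                        visited.add(v)
                        next_frontier.append(v)
            frontier = next_frontier
    return visited
```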

Zichen presented our work (joint with Xiao Hu) on “NOCAP: Near-Optimal Correlation-Aware Partitioning Joins”, which proposes a new disk-based row-oriented join algorithm that uses the join correlation to calculate the optimal partitioning strategy to minimize the I/O cost (also taking into account the potential asymmetry between reads and writes on the underlying SSD-based storage). The core idea of this work is to partition tuples using the join correlation so as to group together rows with a similar number of join matches. We prove that this is the optimal strategy, and we further show that it outperforms the state of the art (Dynamic Hybrid Hash Join) for any memory budget! Finally, we develop a practical algorithm that operates under an arbitrarily small memory budget while still outperforming the state of the art. The paper has more easter eggs, like a simple skew-aware hash-based join partitioning strategy that we term “Rounded Hashing”, which avoids repartitioning and offers better join performance when skew creates partitions larger than the available buffer space.
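The following Python sketch conveys only the partitioning intuition; NOCAP itself derives the strategy analytically from correlation statistics and an I/O cost model, and all names below are illustrative. Build-side keys are ranked by their number of join matches and cut into groups of similar fan-out.

```python
# Illustrative correlation-aware partitioning: group build-side keys
# with a similar number of join matches into the same partition.
# In NOCAP the match counts come from a join-correlation statistic;
# here we compute them exactly for simplicity.
from collections import Counter

def correlation_aware_partitions(build_keys, probe_keys, num_partitions):
    matches = Counter(probe_keys)  # join matches per build-side key
    ranked = sorted(build_keys, key=lambda k: matches[k])
    size = max(1, -(-len(ranked) // num_partitions))  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]
```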

Zichen also presented our work (joint with Arpita and Subhadeep) on “KVBench: A Key-Value Benchmarking Suite” at #DBTest, a novel synthetic workload generator that fills the gap between classical key-value workload generators and more complex real-life workloads. KVBench supports a wide range of operations, including point queries, range queries, inserts, updates, deletes, and range deletes; among these, inserts, queries, and updates can be customized with different distributions. In this paper and its artifacts, we share the benchmarking tool that we have been using in a series of papers on key-value stores.
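As a taste of what such a generator does, here is an illustrative Python mini-generator (not KVBench’s actual interface; op_mix and key_sampler are hypothetical names): it interleaves operations according to a configurable mix, and keys for non-insert operations are drawn from a pluggable distribution.

```python
# Toy workload generator in the spirit of KVBench: a configurable mix
# of inserts, point queries, updates, and deletes, with keys for
# existing-key operations drawn from a pluggable distribution.
import random

def generate_workload(num_ops, op_mix, key_sampler):
    ops, kinds, weights = [], list(op_mix), list(op_mix.values())
    next_key = 0
    for _ in range(num_ops):
        kind = random.choices(kinds, weights=weights)[0]
        if kind == "insert":
            ops.append(("insert", next_key, random.randbytes(64)))
            next_key += 1
        elif next_key > 0:  # target an already-inserted key
            key = key_sampler(next_key)
            value = (random.randbytes(64),) if kind == "update" else ()
            ops.append((kind, key, *value))
    return ops

# Example: 60% inserts, 30% point queries, 10% updates, uniform keys.
workload = generate_workload(
    10_000,
    {"insert": 0.6, "point_query": 0.3, "update": 0.1},
    key_sampler=lambda n: random.randrange(n),
)
```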

Aneesh presented our work (joint with Andy and Jinqi) on “Benchmarking Learned and LSM Indexes on Data Sortedness” at #DBTest, where we experiment with two update-friendly learned indexes (ALEX and LIPP) and a classical write-optimized index (the LSM-tree) under different levels of data pre-sortedness during ingestion. For the last few years, we have been studying the impact of data sortedness on indexes (e.g., “Indexing for Near-Sorted Data”); in this work, we treated the three benchmarked approaches as black-box systems and set out to figure out how well they adapt to sortedness. Needless to say, there is room for improvement in all systems, and we found a surprising new result: the relative performance of two approaches can vary dramatically with the level of sortedness. For example, LIPP can be anywhere between 4x faster and 1.9x slower than ALEX.
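To make the setup concrete, here is a simplified Python sketch of how a near-sorted ingestion stream can be generated, loosely following the (K, L)-sortedness notion from our earlier work: roughly K% of the entries are out of order, each displaced by at most L% of the collection size. The parameter names and the exact displacement scheme here are illustrative.

```python
# Generate a near-sorted stream: displace roughly k_percent of the
# entries, each by at most l_percent of the collection size.
import random

def near_sorted(n, k_percent, l_percent):
    data = list(range(n))
    max_disp = max(1, n * l_percent // 100)
    for _ in range(n * k_percent // 100):
        i = random.randrange(n)
        j = min(n - 1, i + random.randrange(1, max_disp + 1))
        data[i], data[j] = data[j], data[i]
    return data

fully_sorted = near_sorted(1_000_000, 0, 0)
mildly_scrambled = near_sorted(1_000_000, 5, 5)
```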

Lastly, during #DAMON, I presented a fresh thinking talk with the title “Effortless Locality Through On-the-fly Data Transformation”. The talk summarized our exciting new research project on software/hardware co-design, in which we build an on-the-fly data transformation engine that sits between the CPU and the memory to ensure that only the ideal data layout (and only the needed data elements) is propagated through the cache hierarchy to the CPU. For relational data systems, this makes the ideal hybrid layout available to each query, greatly simplifying the data system’s software complexity while offering ideal data locality at all times. For other data-intensive applications (e.g., managing tensors), this ensures effortless locality on selective data accesses (accessing slices or fibers of tensors). This talk is based on a research project we started with Renato Mancuso that has led to a series of papers over the past few years: “Relational Memory: Native In-Memory Accesses on Rows and Columns”, “Hardware Data Re-organization Engine for Real-Time Systems”, “Transparent Data Transformation”, “Relational Fabric: Transparent Data Transformation”, “Software-Shaped Platforms”, and “On-the-fly Data Transformation in Action”.
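Since the transformation engine itself is hardware, the best I can offer inline is a software analogy in Python of what it buys: given row-major data and the set of columns a query touches, a dense projection is materialized so that the cache hierarchy never sees unused bytes. In the real system this gather happens transparently during the memory fetch, with no software-visible copy; the function below is purely illustrative.

```python
# Software analogy of on-the-fly data transformation: materialize a
# dense view of only the columns a query needs, so no unused bytes
# travel through the cache hierarchy. In hardware, this gather is
# performed by the engine during the fetch itself.
def project_on_the_fly(rows, needed_columns):
    return [tuple(row[c] for c in needed_columns) for row in rows]

rows = [(i, i * 2, i * 3, i * 4) for i in range(1_000)]  # row-major table
dense_view = project_on_the_fly(rows, needed_columns=(0, 2))
```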
