Apache Hudi

Apache Hudi (Hadoop Upserts Deletes and Incrementals) is an open-source transactional data lake framework that enables streaming ingestion, upserts, and incremental processing on large datasets stored in data lakes. Originally developed at Uber, Hudi is now widely adopted by organizations such as Walmart and Disney for managing rapidly changing data at scale.

How It Works

Hudi enables atomic upserts and incremental data processing on cloud object stores by maintaining a timeline of commits and table metadata alongside the data files. It supports two primary table types:

  • Copy-on-Write (CoW): Updates are applied at write time; optimized for read-heavy workloads.

  • Merge-on-Read (MoR): Updates are merged at read time; better for write-heavy or frequently changing data.

Each table type supports different query modes:

  • CoW: Read-Optimized, Incremental

  • MoR: Read-Optimized, Incremental, Real-Time
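The trade-off between the two table types can be illustrated with a toy sketch in plain Python. This models the semantics conceptually (merge at write time vs. merge at read time); it is not Hudi's actual implementation or API, and the class and method names are illustrative only.

```python
# Toy model of Hudi's two table types. Records are key -> value pairs.

class CowTable:
    """Copy-on-Write: each upsert rewrites merged state, so reads are cheap."""
    def __init__(self):
        self.base = {}          # always fully merged

    def upsert(self, records):
        self.base.update(records)   # merge work paid at write time

    def read(self):
        return dict(self.base)      # no merging needed at read time


class MorTable:
    """Merge-on-Read: upserts append to a delta log; reads merge base + deltas."""
    def __init__(self):
        self.base = {}
        self.delta_log = []     # batches of pending changes, appended cheaply

    def upsert(self, records):
        self.delta_log.append(dict(records))   # fast write: just append

    def read(self):
        merged = dict(self.base)    # merge work paid at read time
        for batch in self.delta_log:
            merged.update(batch)    # later batches win for the same key
        return merged

    def compact(self):
        # Periodic compaction folds the delta log back into the base files.
        self.base = self.read()
        self.delta_log.clear()
```

Both tables return the same merged result; they differ only in when the merge cost is paid, which is why CoW suits read-heavy workloads and MoR suits write-heavy ones.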

Key Features

  • ACID-compliant transactions on data lakes

  • Built-in support for upserts and deletes

  • Efficient incremental processing and data compaction

  • Support for schema evolution

  • Compatibility with Apache Hive, Presto, Trino, and Spark

  • Configurable indexing strategies for performance optimization
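Hudi's upsert and delete support rests on deduplicating rows by a record key and honoring delete markers in a write batch. The following is a minimal sketch of that idea in plain Python; the `DELETE` sentinel and `apply_batch` helper are hypothetical stand-ins, not Hudi APIs.

```python
# Toy sketch of record-key upsert/delete semantics.
DELETE = object()  # sentinel standing in for a delete marker

def apply_batch(table, batch):
    """Apply a write batch (key -> record, or key -> DELETE) to a table dict."""
    for key, record in batch.items():
        if record is DELETE:
            table.pop(key, None)   # delete removes the row if present
        else:
            table[key] = record    # upsert inserts or overwrites by key
    return table

# Usage: the second batch updates u1, deletes u2, and inserts u3.
state = apply_batch({}, {"u1": {"city": "SF"}, "u2": {"city": "NY"}})
state = apply_batch(state, {"u1": {"city": "LA"}, "u2": DELETE, "u3": {"city": "TX"}})
```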

What Is Supported

  • Reading Hudi tables via AWS Glue and Apache Hive

  • Copy-on-Write (CoW) and Merge-on-Read (MoR) table types

  • Partitioned and non-partitioned table support

  • Query modes: Read-Optimized and Incremental (the Real-Time mode on MoR tables is listed under unsupported features below)

  • Upserts, inserts, deletes

  • Time travel and schema evolution
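An incremental query returns only the records that changed after a given commit instant, instead of rescanning the whole table. The sketch below models that idea in plain Python; the commit-instant timestamps and the `incremental_read` helper are illustrative assumptions, not Hudi's API.

```python
# Toy sketch of incremental-query semantics over an ordered commit timeline.

def incremental_read(commits, begin_instant):
    """commits: list of (instant_time, records) ordered by instant_time.
    Returns the latest state of every key changed after begin_instant."""
    changed = {}
    for instant, records in commits:
        if instant > begin_instant:
            changed.update(records)   # later commits win for the same key
    return changed

# A timeline of three commits; u1 is written twice.
commits = [
    ("20240101010101", {"u1": {"city": "SF"}}),
    ("20240102010101", {"u2": {"city": "NY"}}),
    ("20240103010101", {"u1": {"city": "LA"}}),
]

# Consuming only changes after the first commit yields u2's insert and
# u1's latest version, skipping data that was already processed.
incremental_read(commits, "20240101010101")
```

A downstream job can persist the last instant it consumed as a checkpoint and pass it as `begin_instant` on the next run, which is what makes incremental pipelines cheap relative to full scans.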

Unsupported Features in e6data

  • Real-time view querying (MoR) is currently not supported

  • Metadata syncing to non-Hive-compatible metastores may require manual setup

  • Fine-grained time travel features are limited compared to Delta and Iceberg
