Iceberg is a table format for large, slow-moving tabular data

Iceberg has moved! It has been donated to the Apache Software Foundation.

Please use the new Apache mailing lists, site, and repository.

Iceberg is a new table format for storing large, slow-moving tabular data. It is designed to improve on the de facto standard table layout built into Hive, Presto, and Spark.


Iceberg is under active development at Netflix.

The core Java library that tracks table snapshots and metadata is complete, but still evolving. Current work is focused on integrating Iceberg into Spark and Presto.

The Iceberg format specification is being actively updated and is open for comment. Until the specification is complete and released, it carries no compatibility guarantees. The spec is currently evolving as the Java reference implementation changes.

Java API javadocs are available for the 0.3.0 (latest) release.


We welcome collaboration on both the Iceberg library and specification. The draft spec is open for comments.

For other discussion, please use the Iceberg mailing list or open issues on the Iceberg GitHub page.


Iceberg is built using Gradle 4.4.

Iceberg table support is organized in library modules:

  • iceberg-common contains utility classes used in other modules
  • iceberg-api contains the public Iceberg API
  • iceberg-core contains implementations of the Iceberg API and support for Avro data files; this is what processing engines should depend on
  • iceberg-parquet is an optional module for working with tables backed by Parquet files
  • iceberg-orc is an optional module for working with tables backed by ORC files (experimental)
  • iceberg-hive is an implementation of Iceberg tables backed by the Hive metastore, using the metastore Thrift client

Iceberg also has modules for adding Iceberg support to processing engines:

  • iceberg-spark is an implementation of Spark's Datasource V2 API for Iceberg (use iceberg-runtime for a shaded version)
  • iceberg-data is a client library used to read Iceberg tables from JVM applications
  • iceberg-pig is an implementation of Pig's LoadFunc API for Iceberg
  • iceberg-presto-runtime generates a shaded runtime jar that is used by Presto to integrate with Iceberg tables
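
A processing engine would typically pull in iceberg-core plus the optional modules it needs. The Maven coordinates below are an assumption based on the com.netflix.iceberg package name and the 0.3.0 release noted above; check the published artifacts before using them.

```groovy
// build.gradle of a hypothetical consumer (coordinates are assumptions,
// inferred from the com.netflix.iceberg package and the 0.3.0 release)
dependencies {
    compile 'com.netflix.iceberg:iceberg-core:0.3.0'
    compile 'com.netflix.iceberg:iceberg-parquet:0.3.0'  // optional: Parquet-backed tables
}
```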


Iceberg's Spark integration is compatible with the following Spark versions:

Iceberg version | Spark version
--------------- | -------------
0.2.0+          | 2.3.0
0.3.0+          | 2.3.2

About Iceberg


Iceberg tracks individual data files in a table instead of directories. This allows writers to create data files in-place and only adds files to the table in an explicit commit.

Table state is maintained in metadata files. All changes to table state create a new metadata file and replace the old metadata with an atomic operation. The table metadata file tracks the table schema, partitioning config, other properties, and snapshots of the table contents. Each snapshot is a complete set of data files in the table at some point in time. Snapshots are listed in the metadata file, but the files in a snapshot are stored in separate manifest files.

The atomic transitions from one table metadata file to the next provide snapshot isolation. Readers use the snapshot that was current when they load the table metadata and are not affected by changes until they refresh and pick up a new metadata location.
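The commit protocol described above can be modeled as a compare-and-swap over the current metadata pointer. The sketch below is a simplified illustration of that idea only; the class and method names are invented and do not match the real Iceberg API.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Simplified model of Iceberg's commit protocol: writers build new,
// immutable table metadata and atomically swap the table's pointer to it.
// (Illustrative only; these names are not the real Iceberg classes.)
class TableMetadata {
    final long snapshotId;
    final List<String> dataFiles;
    TableMetadata(long snapshotId, List<String> dataFiles) {
        this.snapshotId = snapshotId;
        this.dataFiles = dataFiles;
    }
}

public class AtomicCommitDemo {
    // The single mutable cell: a pointer to the current metadata.
    private final AtomicReference<TableMetadata> current =
        new AtomicReference<>(new TableMetadata(0, List.of()));

    // Readers pin the metadata once; later commits cannot affect them
    // until they refresh. This is the snapshot isolation guarantee.
    TableMetadata load() {
        return current.get();
    }

    // A commit succeeds only if the base metadata is still current;
    // a concurrent commit makes the base stale and forces a retry.
    boolean commit(TableMetadata base, TableMetadata updated) {
        return current.compareAndSet(base, updated);
    }

    public static void main(String[] args) {
        AtomicCommitDemo table = new AtomicCommitDemo();
        TableMetadata base = table.load();
        TableMetadata v1 = new TableMetadata(1, List.of("file-a.parquet"));
        System.out.println(table.commit(base, v1)); // true: base was current
        TableMetadata v2 = new TableMetadata(2, List.of("file-b.parquet"));
        System.out.println(table.commit(base, v2)); // false: base is stale
    }
}
```

A real table store replaces the AtomicReference with an atomic rename or a metastore update, but the retry-on-stale-base structure is the same.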

Data files in snapshots are stored in one or more manifest files that contain a row for each data file in the table, its partition data, and its metrics. A snapshot is the union of all files in its manifests. Manifest files can be shared between snapshots to avoid rewriting metadata that is slow-changing.
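The snapshot/manifest relationship can be sketched with two small classes. This is a toy model with invented names, not the real Iceberg classes; it only illustrates that a snapshot is the union of its manifests and that unchanged manifests are reused across snapshots.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy model (invented names): a manifest lists data files, and a
// snapshot is a list of manifests.
class Manifest {
    final List<String> dataFiles;
    Manifest(List<String> dataFiles) { this.dataFiles = dataFiles; }
}

class Snapshot {
    final List<Manifest> manifests;
    Snapshot(List<Manifest> manifests) { this.manifests = manifests; }

    // A snapshot's contents are the union of the files in its manifests.
    Set<String> dataFiles() {
        Set<String> files = new LinkedHashSet<>();
        for (Manifest m : manifests) {
            files.addAll(m.dataFiles);
        }
        return files;
    }
}

public class ManifestDemo {
    public static void main(String[] args) {
        Manifest existing = new Manifest(List.of("a.parquet", "b.parquet"));
        Snapshot s1 = new Snapshot(List.of(existing));

        // An append writes one new manifest and shares the old one
        // unchanged, instead of rewriting metadata for the whole table.
        Manifest appended = new Manifest(List.of("c.parquet"));
        Snapshot s2 = new Snapshot(List.of(existing, appended));

        System.out.println(s1.dataFiles()); // [a.parquet, b.parquet]
        System.out.println(s2.dataFiles()); // [a.parquet, b.parquet, c.parquet]
    }
}
```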

Design benefits

This design addresses specific problems with the Hive layout: file listing is no longer used to plan jobs, and files are created in place without renaming.

This also provides improved guarantees and performance:

  • Snapshot isolation: Readers always use a consistent snapshot of the table, without needing to hold a lock. All table updates are atomic.
  • O(1) RPCs to plan: Instead of listing O(n) directories in a table to plan a job, reading a snapshot requires O(1) RPC calls.
  • Distributed planning: File pruning and predicate push-down are distributed to jobs, removing the metastore as a bottleneck.
  • Version history and rollback: Table snapshots are kept as history and tables can roll back if a job produces bad data.
  • Finer granularity partitioning: Distributed planning and O(1) RPC calls remove the current barriers to finer-grained partitioning.
  • Safe file-level operations: By supporting atomic changes, Iceberg enables new use cases, like safely compacting small files and safely appending late data to tables.
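The version-history-and-rollback benefit can be sketched as follows. This is an invented, in-memory model, not the real Iceberg API: every commit appends a snapshot to the table's history, and rollback simply points "current" back at an older, still-retained snapshot.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of version history and rollback (invented names): snapshots
// are immutable and retained, so rolling back is just moving a pointer.
public class RollbackDemo {
    static class Snap {
        final long id;
        final List<String> files;
        Snap(long id, List<String> files) { this.id = id; this.files = files; }
    }

    private final List<Snap> history = new ArrayList<>();
    private Snap current;

    void commit(Snap snap) {
        history.add(snap);
        current = snap;
    }

    // Roll back by id; the bad snapshot stays in history but is no
    // longer what readers see.
    void rollbackTo(long snapshotId) {
        for (Snap s : history) {
            if (s.id == snapshotId) {
                current = s;
                return;
            }
        }
        throw new IllegalArgumentException("unknown snapshot " + snapshotId);
    }

    Snap current() { return current; }

    public static void main(String[] args) {
        RollbackDemo table = new RollbackDemo();
        table.commit(new Snap(1, List.of("good.parquet")));
        table.commit(new Snap(2, List.of("good.parquet", "bad.parquet")));
        table.rollbackTo(1); // the bad job's output is no longer visible
        System.out.println(table.current().files); // [good.parquet]
    }
}
```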

Why a new table format?

There are several problems with the current Hive table layout:

  • There is no specification. Implementations don’t handle all cases consistently. For example, Hive and Spark use different, incompatible hash functions for bucketing. Hive uses a locking scheme to make cross-partition changes safe, but no other implementations use it.
  • The metastore only tracks partitions. Files within partitions are discovered by listing partition paths. Listing partitions to plan a read is expensive, especially when using S3. This also makes atomic changes to a table’s contents impossible. Netflix has developed custom Metastore extensions to swap partition locations, but these are slow because it is expensive to make thousands of updates in a database transaction.
  • Operations depend on file rename. Most output committers depend on rename operations to implement guarantees and reduce the amount of time tables only have partial data from a write. But rename is not a metadata-only operation in S3 and will copy data. The new S3 committers that use multipart upload make this better, but can’t entirely solve the problem and put a lot of load on the S3 index during job commit.

Table data is tracked in both a central metastore, for partitions, and the file system, for files. The central metastore can be a scale bottleneck, and the file system doesn't (and shouldn't) provide transactions to isolate concurrent reads and writes. The current table layout cannot be patched to fix its major problems.

Other design goals

In addition to changes in how table contents are tracked, Iceberg's design improves a few other areas:

  • Schema evolution: Columns are tracked by ID to support add/drop/rename.
  • Reliable types: Iceberg uses a core set of types, tested to work consistently across all of the supported data formats.
  • Metrics: The format includes cost-based optimization metrics stored with data files for better job planning.
  • Invisible partitioning: Partitioning is built into Iceberg as table configuration, so queries can be planned efficiently without extra partition predicates.
  • Unmodified partition data: The Hive layout stores partition data escaped in strings. Iceberg stores partition data without modification.
  • Portable spec: Tables are not tied to Java. Iceberg has a clear specification for other implementations.
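
The ID-based column tracking behind schema evolution can be illustrated with a small sketch. This is an invented model, not the real Iceberg API: the schema maps a stable field ID to a name, while data files reference columns by ID, so renames only change the mapping and a re-added column can never collide with a dropped column's old data.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of ID-based column tracking (invented names): field IDs are
// stable and never reused, so add/drop/rename are safe.
public class SchemaEvolutionDemo {
    private final Map<Integer, String> columnsById = new HashMap<>();
    private int nextId = 1;

    // Each new column gets a fresh ID, even if its name was used before.
    int addColumn(String name) {
        int id = nextId++;
        columnsById.put(id, name);
        return id;
    }

    void renameColumn(int id, String newName) { columnsById.put(id, newName); }

    void dropColumn(int id) { columnsById.remove(id); }

    // Data files store values by field ID; name lookups go through the schema.
    String nameOf(int id) { return columnsById.get(id); }

    public static void main(String[] args) {
        SchemaEvolutionDemo schema = new SchemaEvolutionDemo();
        int tsId = schema.addColumn("event_time");
        schema.renameColumn(tsId, "ts");     // old data files still resolve by ID
        System.out.println(schema.nameOf(tsId)); // ts

        int tmpId = schema.addColumn("tmp");
        schema.dropColumn(tmpId);
        int reAdded = schema.addColumn("tmp");
        System.out.println(reAdded != tmpId); // true: re-added column gets a fresh ID
    }
}
```

By contrast, a name-based layout would silently resurrect the dropped column's old values when a column with the same name is re-added.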