Delta Lake is a storage layer that brings scalable, ACID transactions to Apache Spark and other big-data engines.
See the Delta Lake Documentation for details.
See the Quick Start Guide to get started with Scala, Java and Python.
Delta Lake is published to the Maven Central Repository and can be used by adding a dependency to your POM file:
```xml
<dependency>
  <groupId>io.delta</groupId>
  <artifactId>delta-core_2.11</artifactId>
  <version>0.1.0</version>
</dependency>
```
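If you build with SBT rather than Maven, the same artifact can be declared in `build.sbt` (assuming your project's `scalaVersion` is 2.11, since `%%` appends the Scala binary version to the artifact name):

```scala
// build.sbt: the Maven coordinate above, expressed as an SBT dependency.
libraryDependencies += "io.delta" %% "delta-core" % "0.1.0"
```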
Compatibility with Apache Spark Versions
Delta Lake currently requires Apache Spark 2.4.2. Earlier versions of Spark are missing SPARK-27453, which breaks the `partitionBy` clause of the DataFrameWriter API.
The only stable, public APIs currently provided by Delta Lake are through the `DataFrameReader`/`Writer` (i.e. `spark.read`, `df.write`, `spark.readStream`, and `df.writeStream`). Options to these APIs will remain stable within a major release of Delta Lake (e.g. 1.x.x).
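For example, a round trip through these stable APIs looks like the following (a minimal sketch assuming a `SparkSession` named `spark`, as in `spark-shell`; the path is illustrative):

```scala
// Write a small DataFrame as a Delta table, then read it back.
val data = spark.range(0, 5)
data.write.format("delta").save("/tmp/delta-table")

val df = spark.read.format("delta").load("/tmp/delta-table")
df.show()
```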
All other interfaces in this library are considered internal and are subject to change across minor/patch releases.
Data Storage Compatibility
Delta Lake guarantees backward compatibility for all Delta Lake tables (i.e. newer versions of Delta Lake will always be able to read tables written by older versions of Delta Lake). However, we reserve the right to break forward compatibility as new features are introduced to the transaction protocol (i.e. an older version of Delta Lake may not be able to read a table produced by a newer version).
Breaking changes in the protocol are indicated by incrementing the minimum reader/writer version in the `protocol` action.
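As an illustration, the entry recording such a change is itself just a JSON action in a delta file; a protocol action looks roughly like this (the version numbers are illustrative):

```json
{"protocol": {"minReaderVersion": 1, "minWriterVersion": 2}}
```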
Delta Lake Core is compiled using SBT.
To compile, run `build/sbt compile`.
To generate artifacts, run `build/sbt package`.
To execute tests, run `build/sbt test`.
Refer to SBT docs for more commands.
Delta Lake works by storing a transaction log alongside the data files in a table. Entries in the log, called delta files, are stored as atomic collections of actions in the `_delta_log` directory at the root of a table. Entries in the log are encoded using JSON and are named as zero-padded contiguous integers:
```
/table/_delta_log/00000000000000000000.json
/table/_delta_log/00000000000000000001.json
/table/_delta_log/00000000000000000002.json
```
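This naming scheme keeps delta files lexically sorted by version. A minimal sketch of how such a name can be derived (`deltaFileName` is a hypothetical helper, not part of the Delta Lake API):

```scala
// Hypothetical helper: render a commit version as a 20-digit, zero-padded
// file name so that lexical order matches version order.
def deltaFileName(version: Long): String = f"$version%020d.json"

deltaFileName(2) // returns "00000000000000000002.json"
```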
To avoid needing to read the entire transaction log every time a table is loaded, Delta Lake also occasionally creates a checkpoint, which contains the entire state of the table at the given version. Checkpoints are encoded using Parquet and must only be written after the accompanying delta files have been written.
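Because log entries are plain JSON, they can be inspected directly with Spark for exploration (a sketch assuming a `SparkSession` named `spark`; the stable DataFrame APIs above remain the supported way to read table data):

```scala
// Peek at the actions recorded in the first commit of a table.
// The path is illustrative.
val firstCommit = spark.read.json("/table/_delta_log/00000000000000000000.json")
firstCommit.printSchema()
firstCommit.show(truncate = false)
```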
Requirements for Underlying Storage Systems
Delta Lake ACID guarantees are predicated on the atomicity and durability guarantees of the storage system. Specifically, we require the storage system to provide the following:
- Atomic visibility: There must be a way for a file to be made visible in its entirety or to not be visible at all.
- Mutual exclusion: Only one writer must be able to create (or rename) a file at the final destination.
- Consistent listing: Once a file has been written in a directory, all future listings for that directory must return that file.
Currently, only HDFS supports all these guarantees out of the box. We are looking to provide all these guarantees on other storage systems by plugging in custom implementations of the LogStore API. If you are interested in adding these guarantees for your storage system, you can start a discussion in the community mailing group.
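To make the contract above concrete, here is a hedged sketch of the kind of interface a storage plugin needs to satisfy; the actual LogStore API in Delta Lake may differ in names, types, and signatures:

```scala
// A sketch of a log-store contract, not the real Delta Lake LogStore trait.
trait LogStoreSketch {
  /** Atomic visibility: return the complete contents of a file; a partially
    * written file must never be observable. */
  def read(path: String): Seq[String]

  /** Mutual exclusion: write all actions to `path` atomically, failing if
    * the file already exists so that only one writer can create it. */
  def write(path: String, actions: Iterator[String]): Unit

  /** Consistent listing: list files at or after `path` in its directory;
    * a file that has been written must always appear in the listing. */
  def listFrom(path: String): Iterator[String]
}
```

The start marker taken by `listFrom` corresponds to the partial-listing optimization described next.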
As an optimization, storage systems can also allow partial listing of a directory, given a start marker. Delta Lake can use this ability to efficiently discover the latest version of a table, without listing all of the files in the transaction log.
We welcome contributions to Delta Lake. We use GitHub Pull Requests for accepting changes. You will be prompted to sign a contributor license agreement before your change can be accepted.
There are two channels of communication within the Delta Lake community.