[SPARK-43421][SS] Implement Changelog based Checkpointing for RocksDB State Store Provider #41099

Closed · wants to merge 42 commits

Changes shown are from 9 of the 42 commits.

Commits
All 42 commits are by chaoqin-li1123.

da54bb5  May 4, 2023   initial implementation
5bda37f  May 4, 2023   add conf
8bc8552  May 4, 2023   remove unused import
be7846b  May 4, 2023   test correctness with changelog checkpointing
0512806  May 8, 2023   add unit test and fix bug
974a47e  May 8, 2023   clean up
2d81dc2  May 9, 2023   address comment
9df1340  May 9, 2023   fix build
1b0f94c  May 9, 2023   respect minDeltasForSnapshot in changelog checkpointing
bf30cf1  May 9, 2023   fix checkpoint interval bug and address comment
19b8355  May 10, 2023  address comments
4ff442f  May 12, 2023  clean up conf for changelog checkpointing
b3cc436  May 12, 2023  enable streaming aggregation suite to run with rocksdb
0ef2fc9  May 12, 2023  Merge branch 'master' of github.com:chaoqin-li1123/spark into changelog
e59d43f  May 14, 2023  clean up
1e46adc  May 14, 2023  add doc
5f53f49  May 15, 2023  address comments
5910fb7  May 15, 2023  address comments
0ee93c1  May 18, 2023  Merge branch 'master' of github.com:chaoqin-li1123/spark into changelog
7c65cac  May 19, 2023  add comments
b2ead71  May 19, 2023  address comments
36d9ae2  May 20, 2023  address comments
4aa2605  May 20, 2023  simplify
ff4cff9  May 20, 2023  add doc and comments
82e7168  May 22, 2023  add backward compatibility integration test
4109c29  May 23, 2023  comment out tests
573e0e9  May 23, 2023  comment out tests
fc8c1bd  May 23, 2023  comment out tests
6480621  May 24, 2023  move tests around to pass ci
590f21c  May 24, 2023  move tests around to pass ci
bb58556  May 25, 2023  move tests around to pass ci
14d7b91  May 25, 2023  move tests around to pass ci
99f5e0a  May 25, 2023  fix nits
b1d3809  May 25, 2023  improve doc
050f214  May 26, 2023  fix test nits
f723840  May 26, 2023  use NextIterator
da7aa99  May 26, 2023  make rocksdb state store suite use sqlconf in shared spark session
4f9b0a7  May 31, 2023  address testing comments
7d52ed5  May 31, 2023  Merge branch 'master' of github.com:chaoqin-li1123/spark into changelog
91d0075  May 31, 2023  add after each
5732fbd  May 31, 2023  fix test failure
6cb6d0b  May 31, 2023  Merge branch 'master' of github.com:chaoqin-li1123/spark into changelog
Files changed
@@ -1940,6 +1940,15 @@ object SQLConf {
// 5 is the default table format version for RocksDB 6.20.3.
.createWithDefault(5)

val STATE_STORE_ROCKSDB_CHANGE_CHECKPOINTING_ENABLED =
buildConf("spark.sql.streaming.stateStore.rocksdb.enableChangelogCheckpointing")
.internal()
.doc("Enable RocksDB state store to checkpoint a version of the state" +
" by uploading the changelog.")
.version("3.4.1")
.booleanConf
.createWithDefault(false)

val STREAMING_AGGREGATION_STATE_FORMAT_VERSION =
buildConf("spark.sql.streaming.aggregation.stateFormatVersion")
.internal()
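As a usage illustration (not part of the diff), here is a minimal sketch of turning the new flag on, assuming a job that already uses the RocksDB state store provider; the session setup is standard Spark boilerplate, not code from this PR:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: enable changelog checkpointing for the RocksDB state store.
// The conf key is the one added above; the provider-class setting is the
// standard way to select the RocksDB provider.
val spark = SparkSession.builder()
  .appName("changelog-checkpointing-example")
  .config("spark.sql.streaming.stateStore.providerClass",
    "org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider")
  .config("spark.sql.streaming.stateStore.rocksdb.enableChangelogCheckpointing", "true")
  .getOrCreate()
```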
@@ -56,6 +56,14 @@ class RocksDB(
hadoopConf: Configuration = new Configuration,
loggingId: String = "") extends Logging {

case class RocksDBCheckpoint(checkpointDir: File, version: Long, numKeys: Long) {

Review comment (Contributor): Is a case class the preferred pattern for something like this? cc @HeartSaVioR

def close(): Unit = {
silentDeleteRecursively(checkpointDir, s"Free up local checkpoint of snapshot $version")
}
}

@volatile private var latestCheckpoint: Option[RocksDBCheckpoint] = None

RocksDBLoader.loadLibrary()

// Java wrapper objects linking to native RocksDB objects
@@ -109,13 +117,15 @@
private val nativeStats = dbOptions.statistics()

private val workingDir = createTempDir("workingDir")
private val fileManager = new RocksDBFileManager(
dfsRootDir, createTempDir("fileManager"), hadoopConf, loggingId = loggingId)
private val fileManager = new RocksDBFileManager(dfsRootDir, createTempDir("fileManager"),
hadoopConf, conf.compressionCodec, loggingId = loggingId)
private val byteArrayPair = new ByteArrayPair()
private val commitLatencyMs = new mutable.HashMap[String, Long]()
private val acquireLock = new Object

@volatile private var db: NativeRocksDB = _
@volatile private var changelogWriter: Option[StateStoreChangelogWriter] = None
private val enableChangelogCheckpointing: Boolean = conf.enableChangelogCheckpointing
@volatile private var loadedVersion = -1L // -1 = nothing valid is loaded
@volatile private var numKeysOnLoadedVersion = 0L
@volatile private var numKeysOnWritingVersion = 0L
@@ -129,14 +139,15 @@
* Note that this will copy all the necessary file from DFS to local disk as needed,
* and possibly restart the native RocksDB instance.
*/
def load(version: Long): RocksDB = {
def load(version: Long, readOnly: Boolean = false): RocksDB = {
assert(version >= 0)
acquire()
logInfo(s"Loading $version")
try {
if (loadedVersion != version) {
closeDB()
val metadata = fileManager.loadCheckpointFromDfs(version, workingDir)
val latestSnapshotVersion = fileManager.getLatestSnapshotVersion(version)
val metadata = fileManager.loadCheckpointFromDfs(latestSnapshotVersion, workingDir)
openDB()

val numKeys = if (!conf.trackTotalNumberOfRows) {
@@ -150,8 +161,27 @@
metadata.numKeys
}
numKeysOnWritingVersion = numKeys
numKeysOnLoadedVersion = numKeys

// Replay change log from the last snapshot to the loaded version.
// This will be a no-op if changelog checkpointing is disabled.
for (v <- latestSnapshotVersion + 1 to version) {
var changelogReader: StateStoreChangelogReader = null
try {
changelogReader = fileManager.getChangelogReader(v)
while (changelogReader.hasNext) {
val byteArrayPair = changelogReader.next()
if (byteArrayPair.value != null) {
put(byteArrayPair.key, byteArrayPair.value)
} else {
remove(byteArrayPair.key)
}
}
} finally {
if (changelogReader != null) changelogReader.close()
}
}
// After changelog replay the numKeysOnWritingVersion will be updated to
// the correct number of keys in the loaded version.
numKeysOnLoadedVersion = numKeysOnWritingVersion
loadedVersion = version
fileManagerMetrics = fileManager.latestLoadCheckpointMetrics
}
@@ -164,6 +194,9 @@
loadedVersion = -1 // invalidate loaded data
throw t
}
if (enableChangelogCheckpointing && !readOnly) {
changelogWriter = Some(fileManager.getChangeLogWriter(version + 1))
}
this
}
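
To make the replay loop in load() concrete, the following is a self-contained sketch of the recovery scheme it implements: start from the newest snapshot at or below the requested version, then apply each changelog in order, treating a missing value as a delete. The types are simplified stand-ins, not the PR's actual RocksDB or StateStoreChangelogReader classes:

```scala
// Simplified model of changelog-based recovery. `snapshots` maps a snapshot
// version to its full key/value state; `changelogs` maps each version to the
// records written while committing it, with None playing the role of the
// null value that marks a delete.
def restore(
    target: Long,
    snapshots: Map[Long, Map[String, String]],
    changelogs: Map[Long, Seq[(String, Option[String])]]): Map[String, String] = {
  // Newest snapshot at or below the target version.
  val snapshotVersion = snapshots.keys.filter(_ <= target).max
  var state = snapshots(snapshotVersion)
  // Replay every changelog between the snapshot and the target, in order.
  for (v <- snapshotVersion + 1 to target; (key, value) <- changelogs(v)) {
    state = value match {
      case Some(newValue) => state + (key -> newValue) // put
      case None => state - key // delete
    }
  }
  state
}
```

For example, with snapshots at versions 0 and 10 and a target of 13, this loads snapshot 10 and replays changelogs 11 through 13, which is exactly the window load() iterates over with `latestSnapshotVersion + 1 to version`.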

@@ -187,6 +220,7 @@
}
}
db.put(writeOptions, key, value)
changelogWriter.foreach(_.put(key, value))
}

/**
@@ -201,6 +235,7 @@
}
}
db.delete(writeOptions, key)
changelogWriter.foreach(_.delete(key))
}
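
The one-line additions to the put and delete paths above establish a write-through pattern: each mutation is applied to the local RocksDB instance and, if a changelog writer is open for the in-flight version, mirrored into the changelog. A minimal sketch of that pattern with an invented sink interface, standing in for the PR's StateStoreChangelogWriter:

```scala
// Hypothetical sink interface used only for this illustration.
trait ChangelogSink {
  def put(key: Array[Byte], value: Array[Byte]): Unit
  def delete(key: Array[Byte]): Unit
}

// Write-through wrapper: updates hit local state and the optional changelog.
class WriteThroughStore(local: scala.collection.mutable.Map[String, Array[Byte]]) {
  @volatile var changelog: Option[ChangelogSink] = None

  def put(key: Array[Byte], value: Array[Byte]): Unit = {
    local(new String(key)) = value        // apply to the local store
    changelog.foreach(_.put(key, value))  // mirror into the changelog, if open
  }

  def delete(key: Array[Byte]): Unit = {
    local.remove(new String(key))
    changelog.foreach(_.delete(key))
  }
}
```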

/**
@@ -286,44 +321,48 @@
*/
def commit(): Long = {
val newVersion = loadedVersion + 1
val checkpointDir = createTempDir("checkpoint")
var rocksDBBackgroundThreadPaused = false
try {
// Make sure the directory does not exist. Native RocksDB fails if the directory to
// checkpoint exists.
Utils.deleteRecursively(checkpointDir)

logInfo(s"Flushing updates for $newVersion")
val flushTimeMs = timeTakenMs { db.flush(flushOptions) }

val compactTimeMs = if (conf.compactOnCommit) {
logInfo("Compacting")
timeTakenMs { db.compactRange() }
} else 0

logInfo("Pausing background work")
val pauseTimeMs = timeTakenMs {
db.pauseBackgroundWork() // To avoid files being changed while committing
rocksDBBackgroundThreadPaused = true
}

logInfo(s"Creating checkpoint for $newVersion in $checkpointDir")
val checkpointTimeMs = timeTakenMs {
val cp = Checkpoint.create(db)
cp.createCheckpoint(checkpointDir.toString)
var flushTimeMs = 0L
var checkpointTimeMs = 0L
if (shouldCreateSnapshot()) {
flushTimeMs = timeTakenMs { db.flush(flushOptions) }
checkpointTimeMs = timeTakenMs {
val checkpointDir = createTempDir("checkpoint")
// Make sure the directory does not exist. Native RocksDB fails if the directory to
// checkpoint exists.
Utils.deleteRecursively(checkpointDir)
val cp = Checkpoint.create(db)
cp.createCheckpoint(checkpointDir.toString)
synchronized {
latestCheckpoint.foreach(_.close())
latestCheckpoint = Some(
RocksDBCheckpoint(checkpointDir, newVersion, numKeysOnWritingVersion))
}
}
}

logInfo(s"Syncing checkpoint for $newVersion to DFS")
val fileSyncTimeMs = timeTakenMs {
fileManager.saveCheckpointToDfs(checkpointDir, newVersion, numKeysOnWritingVersion)
if (enableChangelogCheckpointing) {
try {
assert(changelogWriter.isDefined)
changelogWriter.foreach(_.commit())
} finally {
changelogWriter = None
}
} else {
assert(changelogWriter.isEmpty)
uploadSnapshot()
}
}

numKeysOnLoadedVersion = numKeysOnWritingVersion
loadedVersion = newVersion
fileManagerMetrics = fileManager.latestSaveCheckpointMetrics
commitLatencyMs ++= Map(
"flush" -> flushTimeMs,
"compact" -> compactTimeMs,
"pause" -> pauseTimeMs,
"checkpoint" -> checkpointTimeMs,
"fileSync" -> fileSyncTimeMs
)
@@ -334,25 +373,59 @@
loadedVersion = -1 // invalidate loaded version
throw t
} finally {
if (rocksDBBackgroundThreadPaused) db.continueBackgroundWork()
silentDeleteRecursively(checkpointDir, s"committing $newVersion")
// reset resources as either 1) we already pushed the changes and it has been committed or
// 2) commit has failed and the current version is "invalidated".
release()
}
}

private def shouldCreateSnapshot(): Boolean = {
if (enableChangelogCheckpointing) {
assert(changelogWriter.isDefined)
val newVersion = loadedVersion + 1
newVersion - fileManager.getLastUploadedSnapshotVersion() >= conf.minDeltasForSnapshot ||
changelogWriter.get.size > 1000

Review comment (Contributor): 1000 entries feels relatively small to trigger a snapshot upload. Consider removing the limit, or making it very large, such as 1 million.

Reply (chaoqin-li1123, Author): This does not trigger a snapshot upload; it simply flushes and creates a local checkpoint. I can increase it to 10K, which guarantees that the changelog replay between every two snapshots is < 50K records.

} else true
}
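
As a worked example of this predicate: with minDeltasForSnapshot = 10 and the last uploaded snapshot at version 20, commits 21 through 29 write only changelog files, and committing version 30 satisfies 30 - 20 >= 10, so a new local checkpoint is created for later upload. A hedged restatement with the thresholds as plain parameters (the 1000-entry cap is the hardcoded value above):

```scala
// Illustrative restatement of the snapshot-cadence decision.
def shouldSnapshot(
    newVersion: Long,
    lastUploadedSnapshotVersion: Long,
    changelogSize: Long,
    minDeltasForSnapshot: Int,
    maxChangelogSize: Long = 1000L): Boolean = {
  newVersion - lastUploadedSnapshotVersion >= minDeltasForSnapshot ||
    changelogSize > maxChangelogSize
}

// shouldSnapshot(30, 20, 500, 10)  == true   (version gap reached)
// shouldSnapshot(25, 20, 500, 10)  == false  (neither threshold hit)
// shouldSnapshot(25, 20, 1500, 10) == true   (changelog too large)
```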

private def uploadSnapshot(): Unit = {
val localCheckpoint = synchronized {
val checkpoint = latestCheckpoint
latestCheckpoint = None
checkpoint
}
localCheckpoint match {
case Some(RocksDBCheckpoint(localDir, version, numKeys)) =>
try {
val uploadTime = timeTakenMs {
fileManager.saveCheckpointToDfs(localDir, version, numKeys)
fileManagerMetrics = fileManager.latestSaveCheckpointMetrics
}
logInfo(s"Upload snapshot of version $version, time taken: $uploadTime ms")
} finally {
localCheckpoint.foreach(_.close())
}
case _ =>
}
}

/**
* Drop uncommitted changes, and roll back to previous version.
*/
def rollback(): Unit = {
numKeysOnWritingVersion = numKeysOnLoadedVersion
loadedVersion = -1L
changelogWriter.foreach(_.abort())
// Make sure changelogWriter gets recreated next time.
changelogWriter = None
release()
logInfo(s"Rolled back to $loadedVersion")
}

def cleanup(): Unit = {
if (enableChangelogCheckpointing) {
uploadSnapshot()
}
val cleanupTime = timeTakenMs {
fileManager.deleteOldVersions(conf.minVersionsToRetain)
}
@@ -369,6 +442,7 @@
flushOptions.close()
dbOptions.close()
dbLogger.close()
latestCheckpoint.foreach(_.close())
silentDeleteRecursively(localRootDir, "closing RocksDB")
} catch {
case e: Exception =>
@@ -550,7 +624,9 @@ class ByteArrayPair(var key: Array[Byte] = null, var value: Array[Byte] = null)
*/
case class RocksDBConf(
minVersionsToRetain: Int,
minDeltasForSnapshot: Int,
compactOnCommit: Boolean,
enableChangelogCheckpointing: Boolean,
blockSizeKB: Long,
blockCacheSizeMB: Long,
lockAcquireTimeoutMs: Long,
@@ -563,7 +639,8 @@ case class RocksDBConf(
boundedMemoryUsage: Boolean,
totalMemoryUsageMB: Long,
writeBufferCacheRatio: Double,
highPriorityPoolRatio: Double)
highPriorityPoolRatio: Double,
compressionCodec: String)

object RocksDBConf {
/** Common prefix of all confs in SQLConf that affects RocksDB */
@@ -585,6 +662,8 @@

// Configuration that specifies whether to compact the RocksDB data every time data is committed
private val COMPACT_ON_COMMIT_CONF = SQLConfEntry("compactOnCommit", "false")
private val ENABLE_CHANGELOG_CHECKPOINTING_CONF = SQLConfEntry(
"enableChangelogCheckpointing", "false")
private val BLOCK_SIZE_KB_CONF = SQLConfEntry("blockSizeKB", "4")
private val BLOCK_CACHE_SIZE_MB_CONF = SQLConfEntry("blockCacheSizeMB", "8")
// See SPARK-42794 for details.
@@ -705,7 +784,9 @@ object RocksDBConf {

RocksDBConf(
storeConf.minVersionsToRetain,
storeConf.minDeltasForSnapshot,
getBooleanConf(COMPACT_ON_COMMIT_CONF),
getBooleanConf(ENABLE_CHANGELOG_CHECKPOINTING_CONF),
getPositiveLongConf(BLOCK_SIZE_KB_CONF),
getPositiveLongConf(BLOCK_CACHE_SIZE_MB_CONF),
getPositiveLongConf(LOCK_ACQUIRE_TIMEOUT_MS_CONF),
Expand All @@ -718,7 +799,8 @@ object RocksDBConf {
getBooleanConf(BOUNDED_MEMORY_USAGE_CONF),
getLongConf(MAX_MEMORY_USAGE_MB_CONF),
getRatioConf(WRITE_BUFFER_CACHE_RATIO_CONF),
getRatioConf(HIGH_PRIORITY_POOL_RATIO_CONF))
getRatioConf(HIGH_PRIORITY_POOL_RATIO_CONF),
storeConf.compressionCodec)
}

def apply(): RocksDBConf = apply(new StateStoreConf())