
[Snap 2828] Serialize the write ops on Column Table #1362

Merged: 18 commits merged into master on Jul 27, 2019
Conversation

@suranjan (Contributor) commented Jul 19, 2019

Changes proposed in this pull request

Take a write lock on the table for Insert, PutInto, Update, and Delete operations.
Release the lock on completion of the operation. This serializes the write operations to avoid inconsistency.
Handle the smart connector case, where a stored procedure is used to take and release the lock for a write operation on the table.
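The pattern described above (acquire the table's write lock before the operation, always release it on completion) can be sketched in isolation. This is a minimal illustration, not the actual SnappySession code: `TableWriteLocks` and `withWriteLock` are hypothetical names, and a `ReentrantReadWriteLock` per table stands in for the real region lock.

```scala
import java.util.concurrent.locks.ReentrantReadWriteLock
import scala.collection.concurrent.TrieMap

// Hypothetical sketch: one write lock per table, held for the duration
// of a write op (Insert/PutInto/Update/Delete) and released in finally.
object TableWriteLocks {
  private val locks = TrieMap.empty[String, ReentrantReadWriteLock]

  private def lockFor(table: String): ReentrantReadWriteLock =
    locks.getOrElseUpdate(table, new ReentrantReadWriteLock())

  def withWriteLock[T](table: String)(op: => T): T = {
    val wl = lockFor(table).writeLock()
    wl.lock()            // serializes concurrent writers on this table
    try op
    finally wl.unlock()  // always released, even if op throws
  }
}
```

A caller would wrap each write in `TableWriteLocks.withWriteLock("APP.COL_TABLE") { /* do the write */ }`, so the lock release cannot be skipped on an exception path.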

Patch testing

Precheckin, hydra tests, and manual tests.

ReleaseNotes.txt changes

(Does this change require an entry in ReleaseNotes.txt? If yes, has it been added to it?)

Other PRs

TIBCOSoftware/snappy-store#488

(Does this change require changes in other projects, such as store, spark, spark-jobserver, or aqp? Add links to the related PRs in the other subprojects.)

@suranjan suranjan changed the title Snap 2828 [Snap 2828] Serialize the write ops on Column Table Jul 22, 2019
@suranjan suranjan requested review from sumwale and kneeraj July 22, 2019 05:41

@kneeraj left a comment


Minor comment and clarifications. Will approve after I get the response.

@@ -370,6 +370,7 @@ trait SplitClusterDUnitTestObject extends Logging {
// val connectionURL = "jdbc:snappydata://localhost:" + locatorClientPort + "/"
val connectionURL = s"localhost:$locatorClientPort"
logInfo(s"Starting spark job using spark://$hostName:7077, connectionURL=$connectionURL")


This looks like an inadvertent change. Please revert if not intended.

logDebug(s" Going to take lock on server for table ${table}," +
s" current Thread ${Thread.currentThread().getId}")
val ps = conn.prepareCall(s"VALUES sys.ACQUIRE_REGION_LOCK(?)")
ps.setString(1, "BULKWRITE_" + table)

Do you want to make this a constant?
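A minimal sketch of the suggested change, extracting the string literal into a named constant; `LockConstants` and `BulkWriteLockPrefix` are hypothetical names, since the review only asks for "some constant":

```scala
// Hypothetical constant for the lock-name prefix flagged in review,
// so "BULKWRITE_" is defined in exactly one place.
object LockConstants {
  val BulkWriteLockPrefix: String = "BULKWRITE_"
}

// The call site would then become:
//   ps.setString(1, LockConstants.BulkWriteLockPrefix + table)
```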

JdbcExtendedUtils.getTableWithSchema(table, conn = null, Some(sqlContext.sparkSession))
val lock = grabLock(table, schemaName, defaultConnectionProps)

if (lock.isInstanceOf[RegionLock]) lock.asInstanceOf[RegionLock].lock()

For uniformity, I think you can move the lock acquisition inside the grabLock method itself, even for embedded mode. Cosmetic change, not insisting.
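The suggestion could look roughly like this sketch, where grabLock returns an already-acquired lock in every mode, so callers never need to pattern-match on the concrete lock type (as the `isInstanceOf[RegionLock]` check above does). `TableLock`, `LocalLock`, and `LockManager` are hypothetical names standing in for the real types:

```scala
import java.util.concurrent.locks.ReentrantLock

// Hypothetical sketch: the lock handed back by grabLock is already
// held, so the caller only ever needs to unlock it when done.
sealed trait TableLock {
  def unlock(): Unit
  def held: Boolean
}

final class LocalLock extends TableLock {
  private val inner = new ReentrantLock()
  def acquire(): Unit = inner.lock()
  def unlock(): Unit = inner.unlock()
  def held: Boolean = inner.isHeldByCurrentThread
}

object LockManager {
  def grabLock(table: String): TableLock = {
    val l = new LocalLock // embedded mode in this sketch
    l.acquire()           // acquisition moved inside grabLock, per the review
    l
  }
}
```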

Conflicts:
	core/src/main/scala/org/apache/spark/sql/SnappySession.scala
	store

@kneeraj left a comment


Suranjan explained that the while loop won't hang: each iteration either returns true or throws an exception, and in either case the loop ends.
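That termination argument can be sketched abstractly: every pass of the loop either gets true back (and the loop exits) or the attempt throws (and the exception propagates out of the loop). `LockRetry` and `acquireWithRetry` below are hypothetical names, not the actual code under review:

```scala
// Hypothetical sketch of the retry loop discussed above: tryAcquire
// either eventually returns true (loop exits normally) or throws
// (exception propagates, loop also ends), so the loop cannot hang.
object LockRetry {
  def acquireWithRetry(tryAcquire: () => Boolean): Unit = {
    var acquired = false
    while (!acquired) {
      acquired = tryAcquire()
    }
  }
}
```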

@suranjan suranjan merged commit f54ad3e into master Jul 27, 2019
@sumwale sumwale deleted the SNAP-2828 branch July 27, 2019 09:56