[SUPPORT] MultiWriter w/ DynamoDB - Unable to acquire lock, lock object null #4456
Comments
@zhedoubushishi : Can you take a look at this issue please? Feel free to create a Jira and work on adding more documentation around DynamoDB locks. Or we can also think about writing a blog that covers it end to end.
Can you provide the code you used? And how did you create the DynamoDB table?
@nochimow : a gentle reminder to respond to the above question. The above commenter is a Hudi committer who added the DynamoDB lock provider, so he should be able to help in your case.
Hi there, my code basically reads some avro files into a dataframe and then writes this dataframe into a Hudi table, with "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.ComplexKeyGenerator" among the write options. My DynamoDB table is a simple one with just the partition_key field as a string. Is there any recommendation on how the DynamoDB table structure has to be?
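For context, a minimal sketch of what such a write might look like with the multi-writer lock options enabled. The input path, field names, lock table name, and region below are assumptions for illustration, not the values from the original report:

```python
# Illustrative sketch only: paths, field names, table names, and region are assumptions.
hudi_options = {
    "hoodie.table.name": "my_table",                           # assumed Hudi table name
    "hoodie.datasource.write.recordkey.field": "id",           # assumed record key field
    "hoodie.datasource.write.partitionpath.field": "dt",       # assumed partition path field
    "hoodie.datasource.write.precombine.field": "ts",          # assumed precombine field
    "hoodie.datasource.write.keygenerator.class": "org.apache.hudi.keygen.ComplexKeyGenerator",
    # Multi-writer settings from the concurrency control docs:
    "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
    "hoodie.cleaner.policy.failed.writes": "LAZY",
    "hoodie.write.lock.provider": "org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider",
    "hoodie.write.lock.dynamodb.table": "hudi_locks",          # assumed lock table name
    "hoodie.write.lock.dynamodb.partition_key": "my_table",    # assumed lock partition key value
    "hoodie.write.lock.dynamodb.region": "us-east-1",          # assumed region
    "hoodie.write.lock.dynamodb.billing_mode": "PAY_PER_REQUEST",
}

# Read the avro input and append it to the Hudi table.
df = spark.read.format("avro").load("s3://my-bucket/input/")   # assumed input path
(df.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3://my-bucket/hudi/my_table/"))                    # assumed base path
```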
@zhedoubushishi : When you get a chance, can you please follow up?
Sorry for the late reply, if you set
I am facing the same issue. I created a DynamoDB table with a single column as per the below config, where the DynamoDB table and partition key are passed when writing the data.
@zhedoubushishi : Can you follow up here please and help unblock the user?
I couldn't reproduce this issue; this is the config I used:
I didn't create the DynamoDB table in advance.
Since I had to roll back to Hudi 0.9 due to the Redshift Spectrum incompatibility, I cannot track this issue anymore. @mainamit, can you follow up on this issue since you also face the same problem?
@mainamit : let us know if you were able to get it working. Feel free to close out the GitHub issue. If you are still facing the issue, do ping with more details. Wenning should be able to assist you.
@mainamit : do you have any updates for us?
@nsivabalan I am also using 0.9 now and have worked around this by loading the data sequentially. If you have a working example of this it would be great, but I cannot test with 0.10 due to constraints on my end.
@mainamit : the only difference I see between your config and what @zhedoubushishi provided is .option("hoodie.write.lock.dynamodb.endpoint_url", "dynamodb.us-west-2.amazonaws.com"). Can you set the right value for endpoint_url and give it a try, please?
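For reference, a minimal sketch of setting that option (the region below is an assumption; the endpoint_url should match the region where the lock table lives):

```python
# Illustrative only: adjust region/endpoint to wherever the DynamoDB lock table lives.
endpoint_opts = {
    "hoodie.write.lock.dynamodb.region": "us-west-2",
    "hoodie.write.lock.dynamodb.endpoint_url": "dynamodb.us-west-2.amazonaws.com",
}
# Merge into the write options, e.g.: df.write.format("hudi").options(**endpoint_opts)...
```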
Thanks! Closing this for now. Please reach out if you are looking for any more assistance.
I'm facing this exception regularly but at varying time intervals. This is with Hudi v0.11, EMR 6.6, Spark 3.2.0. Here's a link to details in the Hudi Slack: https://apache-hudi.slack.com/archives/C4P8Y739U/p1658319412980229?thread_ts=1658319412.980229&cid=C4P8Y739U
Hello @nsivabalan @zhedoubushishi , I am facing the same exception [Unable to acquire lock, lock object null].
Hello,
I'm currently trying the multi-writer feature using the DynamoDB lock.
I followed all the steps documented at https://hudi.apache.org/docs/concurrency_control/ and also set the hoodie.write.lock.dynamodb.billing_mode=PAY_PER_REQUEST config, thanks to @bhasudha's advice on Slack.
After that I ended up with the following error: org.apache.hudi.exception.HoodieLockException: Unable to acquire lock, lock object null
This error happens when trying to write to an existing Hudi table.
Since the documentation has no details on how the DynamoDB table must be created, I created a simple DynamoDB table with a String field as the partition key.
On Slack, there are other users with the same problem, including AWS Glue users.
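For reference, a minimal sketch of pre-creating a lock table the way the DynamoDB lock provider expects it. The table name, region, and the partition key attribute name "key" are assumptions to verify against the Hudi version in use (when Hudi creates the table itself, the docs describe a single string hash key):

```python
# Illustrative sketch only: the table name, region, and the "key" attribute name
# are assumptions to verify against your Hudi version.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")   # assumed region

dynamodb.create_table(
    TableName="hudi_locks",                                    # assumed lock table name
    KeySchema=[{"AttributeName": "key", "KeyType": "HASH"}],   # single string partition key
    AttributeDefinitions=[{"AttributeName": "key", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",                             # matches the billing_mode config above
)
```

As @zhedoubushishi mentions in the thread, the lock provider can also create this table itself if it does not exist in advance.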
Environment Description
Stacktrace
Caused by: org.apache.hudi.exception.HoodieException: Unable to acquire lock, lock object null
at org.apache.hudi.internal.DataSourceInternalWriterHelper.commit(DataSourceInternalWriterHelper.java:86)
at org.apache.hudi.spark3.internal.HoodieDataSourceInternalBatchWrite.commit(HoodieDataSourceInternalBatchWrite.java:93)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:371)
... 69 more
Caused by: org.apache.hudi.exception.HoodieLockException: Unable to acquire lock, lock object null
at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:82)
at org.apache.hudi.client.transaction.TransactionManager.beginTransaction(TransactionManager.java:64)
at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:186)
at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:171)
at org.apache.hudi.internal.DataSourceInternalWriterHelper.commit(DataSourceInternalWriterHelper.java:83)
... 71 more