ERROR TorrentBroadcast: Store broadcast broadcast_5 fail, remove all pieces of the broadcast #14198
Comments
Hi @alilafzi, thanks for reporting this. I'm having trouble reproducing this error. Could you share more information about your setup and the specific steps (for example, a script) to reproduce this issue? I tried a few things, including building the session with SparkSession.builder, but haven't encountered any issues.
Since this didn't fail in our tests, there could be many causes related to your environment and use case. We need the following information, plus as much additional detail as you can provide:

- Java Version: No response
- Java Home Directory: No response
- Setup and installation: No response
- Operating System and Version: No response
- Link to your project (if available): No response
- Additional Information: No response
Thank you for your responses. I am also using SparkSession.builder, spark-submitting the script together with a Scala jar to create the Spark session and run the Python script. The script itself is as simple as the one in the Steps To Reproduce section below. Some setup information: Spark 3.5.0 and Spark NLP 5.2.2, submitted via spark-submit.
Since you are using Scala, I can offer this starter project: https://github.com/maziyarpanahi/spark-nlp-starter?tab=readme-ov-file#spark-submit

I am also interested to know what would happen if you do:
I am getting the same error after doing that.

Also, I am not sure how the starter project you recommended applies to my case, because I am using a Python script; I would appreciate further clarification. Additionally, the Spark version there is 3.1.1 while mine is 3.5.0.
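For reference, the spark-submit approach from the starter project carries over to Python applications: you pass a .py file instead of an application jar. A minimal sketch of such an invocation — the master URL, script name, and the serializer settings (which anticipate the fix in the next comment) are illustrative assumptions, not values taken from this thread:

```sh
spark-submit \
  --master spark://spark-master:7077 \
  --packages com.johnsnowlabs.nlp:spark-nlp_2.12:5.2.2 \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.kryoserializer.buffer.max=2000M \
  load_t5.py
```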
Hi @alilafzi, I believe I have found the root cause of your issue and how to replicate it. Spark NLP requires the KryoSerializer to be used as the serializer for Spark. When using sparknlp.start(), this is set automatically. As you are manually creating a Spark session, we need to set some configs manually:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("Spark NLP T5")
    .master("spark://spark-master:7077")  # change to your address
    .config("spark.driver.memory", "16G")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config("spark.kryoserializer.buffer.max", "2000M")
    .config("spark.driver.maxResultSize", "0")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:5.2.2")
    .getOrCreate()
)
```

Could you try this and see if it works for you? I have identified that some of our docs are inconsistent (the serializer setting seems to be missing) and will fix this!
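A quick way to confirm the setting took effect on a running session — a minimal check, assuming the spark variable from the snippet above:

```python
# Confirm the session is actually using Kryo; "<not set>" means the
# config was not applied and Spark falls back to the JavaSerializer.
print(spark.conf.get("spark.serializer", "<not set>"))
# expected: org.apache.spark.serializer.KryoSerializer
```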
Adding the KryoSerializer to the config completely resolved the issue. Thank you very much for your help. I greatly appreciate your consideration.
Is there an existing issue for this?
Who can help?
No response
What are you working on?
I am working on the T5 Question Generation model (https://sparknlp.org/2022/07/05/t5_question_generation_small_en_3_0.html).
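For context, a minimal sketch of how such a model is typically wired into a Spark NLP pipeline once it loads, using the standard DocumentAssembler + T5Transformer API. Here model_path, the column names, and the example sentence are illustrative assumptions, and an active SparkSession named spark is assumed:

```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import T5Transformer

# Convert raw text into Spark NLP's document annotation format.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Load the T5 question-generation model from a local path
# (the step that fails in this issue).
t5 = T5Transformer.load(model_path) \
    .setInputCols(["document"]) \
    .setOutputCol("question")

pipeline = Pipeline(stages=[document_assembler, t5])

data = spark.createDataFrame(
    [["Apache Spark is an engine for large-scale data processing."]]
).toDF("text")

pipeline.fit(data).transform(data) \
    .select("question.result") \
    .show(truncate=False)
```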
Current Behavior
When I try to load the model, I get the following error:

ERROR TorrentBroadcast: Store broadcast broadcast_5 fail, remove all pieces of the broadcast
Expected Behavior
I am able to load T5 on a system with Spark 3.3.1 and SparkNLP 4.4.2 without any problem, but on my current system with Spark 3.5.0 and SparkNLP 5.2.2, I am facing the above issue.
Steps To Reproduce
```python
from sparknlp.annotator import T5Transformer

# "model" is the local path to the downloaded t5_question_generation_small model
T5_qg = T5Transformer.load(model)
```

Where model is the path to a file I already downloaded from the aforementioned link and am now trying to load from disk.
Spark NLP version and Apache Spark
sparknlp_ver="5.2.2"
spark_version="3.5.0"
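Both versions can be confirmed at runtime — a small sketch, assuming an active SparkSession named spark:

```python
import sparknlp

print(sparknlp.version())  # Spark NLP version, e.g. 5.2.2
print(spark.version)       # Apache Spark version, e.g. 3.5.0
```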
Type of Spark Application
spark-submit
Java Version
No response
Java Home Directory
No response
Setup and installation
No response
Operating System and Version
No response
Link to your project (if available)
No response
Additional Information
No response