fix: Add the error handling for Langchain transformer #2137
Conversation
Hey @sherylZhaoCode 👋! We use semantic commit messages to streamline the release process. Examples of commit messages with semantic prefixes:
To test your commit locally, please follow our guide on building from source.
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
Codecov report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #2137      +/-   ##
==========================================
- Coverage   86.26%   86.12%   -0.15%
==========================================
  Files         312      312
  Lines       16477    16483       +6
  Branches     1460     1462       +2
==========================================
- Hits        14214    14196      -18
- Misses       2263     2287      +24
==========================================

☔ View full report in Codecov by Sentry.
result = error_messages[type(e)].format(e)
return result
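The diff above formats a per-exception message template instead of letting one bad row fail the whole job. A minimal, self-contained sketch of that pattern follows; the exception class, `error_messages` contents, and `safe_run` helper are illustrative names, not the actual SynapseML code.

```python
class RateLimitError(Exception):
    """Stand-in for a service rate-limit error (illustrative only)."""


# Map known exception types to readable message templates.
error_messages = {
    ValueError: "Invalid input: {}",
    RateLimitError: "Rate limited by the service: {}",
}


def safe_run(fn, value):
    """Run fn(value); return (result, error) where exactly one is non-empty."""
    try:
        return fn(value), ""
    except tuple(error_messages) as e:
        # Look up the template for the concrete exception type.
        return "", error_messages[type(e)].format(e)
```

In a Spark UDF, the `(result, error)` pair would map naturally onto the response column and the error column discussed below.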
To retain parity with the other cognitive services, can we make a separate error column and put errors there if they exist? This requires a new setter, setErrorCol.
Also, later we'll want to automatically retry rate-limit errors.
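Automatic retry of rate-limit errors usually means exponential backoff. A sketch of that idea under stated assumptions (`with_retries` and `is_retryable` are hypothetical helpers, not the eventual SynapseML implementation):

```python
import random
import time


def with_retries(fn, max_retries=3, base_delay=1.0, is_retryable=lambda e: True):
    """Call fn(); on a retryable error, back off exponentially and retry.

    A sketch only: the real transformer would classify rate-limit errors
    (e.g. HTTP 429) via is_retryable instead of retrying everything.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as e:
            if attempt == max_retries or not is_retryable(e):
                raise
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```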
Sure. So in this case, what do you think I should put in the response column? Something like "error"? And for the valid cases, what should be in the error column? Just empty?
In the latest commit, I added an extra column with the default name 'errorCol'.
The error column will be empty in the case where everything is valid.
The response column will be empty in the case of an error.
Let me know if any of the above is undesirable behavior.
Perfect!
One comment about where to put these errors so people can easily see them in their analysis and choose to filter them with Spark SQL.
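With errors in their own column, filtering failed rows becomes a one-line predicate. A sketch of the semantics, using a plain list of dicts in place of a DataFrame so it runs without a Spark session; the column names mirror the defaults discussed above but are assumptions:

```python
# Simulated output rows: exactly one of response / errorCol is non-empty,
# matching the behavior described in the comments above.
rows = [
    {"response": "a summary", "errorCol": ""},
    {"response": "", "errorCol": "Rate limited by the service"},
]


def successful(rows):
    """Keep only rows that produced a response.

    The equivalent Spark SQL would be roughly:
        SELECT * FROM results WHERE errorCol = ''
    """
    return [r for r in rows if r["errorCol"] == ""]
```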
input_col_values = [row.technology for row in transformed_df.collect()]
output_col_values = [row.copied_technology for row in transformed_df.collect()]
nit: this and other tests will run twice as fast if you collect once and then pull the info out of that collected DataFrame. collect() will re-trigger the whole computation unless you have the output cached.
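The suggestion is to call collect() a single time and derive both column lists from the cached rows. A minimal sketch, with a namedtuple standing in for pyspark.sql.Row so it runs without a Spark session (the pattern is identical on a real DataFrame):

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row; fields match the test's column names.
Row = namedtuple("Row", ["technology", "copied_technology"])


def split_columns(collected_rows):
    """Derive both column lists from one collected result."""
    input_col_values = [row.technology for row in collected_rows]
    output_col_values = [row.copied_technology for row in collected_rows]
    return input_col_values, output_col_values


# On a real DataFrame, instead of calling transformed_df.collect() twice:
# rows = transformed_df.collect()
# input_col_values, output_col_values = split_columns(rows)
```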
Modified this according to the suggestion.
# Construct the langchain transformer using the chain defined above, and
# test whether the generated column has the expected result.
dataframes_to_test = spark.createDataFrame(
    [(0, "people on disability don't deserve the money")]
We need to put a disclaimer in the comments that this is only for triggering an error and does not reflect our views.
added a disclaimer
/azp run |
Azure Pipelines successfully started running 1 pipeline(s). |
return dataset.withColumn(outCol, udfFunction(inCol))

return (
    dataset.withColumn("result", udfFunction(inCol))
Sorry, I just noticed this: name the column result_{uid} so that this doesn't break if someone already has a column called result.
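Spark ML stages carry a unique uid, so suffixing the scratch column with it avoids clobbering a user column literally named "result". A hedged sketch of that naming scheme (the helper and its guard are illustrative, not the exact PR code):

```python
def unique_result_col(uid, existing_columns):
    """Build a scratch column name like result_<uid>.

    uid is expected to be unique per transformer instance, so a clash with
    user columns is practically impossible; this sketch asserts it anyway.
    """
    candidate = "result_{}".format(uid)
    assert candidate not in existing_columns
    return candidate
```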
Related Issues/PRs
#xxx
What changes are proposed in this pull request?
This PR adds error handling for the OpenAI request, so that failure of a single row won't result in failure of the whole Spark job.
How is this patch tested?
Does this PR change any dependencies?
Does this PR add a new feature? If so, have you added samples on website?
- Add the sample to the website/docs/documentation folder.
- Make sure you choose the correct class (estimators/transformers) and namespace.
- Make sure the DocTable points to the correct API link.
- Run yarn run start to make sure the website renders correctly.
- Add <!--pytest-codeblocks:cont--> before each Python code block to enable auto-tests for Python samples.
- Make sure the WebsiteSamplesTests job passes in the pipeline.