Return better error messages and types for labeling errors #849

Merged: 3 commits from better_errors into main on Jun 12, 2024

Conversation

Abhinav-Naikawadi (Contributor)

Pull Request Summary

Description

Return better error messages and types for labeling errors by parsing LLM provider errors for appropriate error types when available. Still use default error types as fallback.

Type of change

  • New feature (change which adds functionality)

Tests

Tested locally

@@ -221,12 +226,27 @@ async def _alabel(self, prompts: List[str]) -> RefuelLLMResult:
            )
        except Exception as e:
            logger.exception(f"Unable to generate prediction: {e}")
            error_message = str(e)
Abhinav-Naikawadi (Contributor, Author) commented on this change:
Added new error parsing logic for OpenAI here. If we want to land these changes, can also add similar logic for other providers.
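For context, below is a minimal sketch of what such provider-error parsing could look like. The LabelingError constructor and the PARSING_ERROR / INVALID_LLM_RESPONSE_ERROR members mirror the diff in this PR, but the import path, the parse_openai_error helper, and the other ErrorType members are illustrative assumptions, not the PR's actual names.

    # Hypothetical helper: map an OpenAI provider exception to a more specific
    # LabelingError by inspecting the exception message, falling back to a
    # generic error type when nothing matches (as the PR description notes).
    from autolabel.schema import ErrorType, LabelingError  # assumed import path


    def parse_openai_error(e: Exception) -> LabelingError:
        message = str(e)
        lowered = message.lower()
        if "rate limit" in lowered:
            # RATE_LIMIT_ERROR is an assumed member name for illustration.
            return LabelingError(
                error_type=ErrorType.RATE_LIMIT_ERROR, error_message=message
            )
        if "api key" in lowered or "authentication" in lowered:
            # AUTHENTICATION_ERROR is likewise an assumed member name.
            return LabelingError(
                error_type=ErrorType.AUTHENTICATION_ERROR, error_message=message
            )
        if "maximum context length" in lowered:
            # CONTEXT_LENGTH_ERROR is likewise an assumed member name.
            return LabelingError(
                error_type=ErrorType.CONTEXT_LENGTH_ERROR, error_message=message
            )
        # Fallback: keep a default/general provider error type.
        return LabelingError(
            error_type=ErrorType.LLM_PROVIDER_ERROR, error_message=message
        )

The same string-matching approach could be duplicated per provider, which is why the comment above notes that similar logic would be needed for providers other than OpenAI.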

@nihit (Contributor) left a review:

lgtm otherwise

Comment on lines 230 to 244
        except json.JSONDecodeError as e:
            logger.error(
                f"Invalid JSON in LLM response: {response.text}, Error: {e}"
            )
            llm_label = self.NULL_LABEL
            error = LabelingError(
                error_type=ErrorType.PARSING_ERROR,
                error_message=f"Unable to parse invalid JSON in LLM response.",
            )
        except Exception as e:
            logger.error(f"Error parsing LLM response: {response.text}, Error: {e}")
            llm_label = self.NULL_LABEL
            error = LabelingError(
-               error_type=ErrorType.PARSING_ERROR, error_message=str(e)
+               error_type=ErrorType.INVALID_LLM_RESPONSE_ERROR,
+               error_message=str(e),
nihit (Contributor) commented on these lines:
let's simplify this - it's okay to keep the older logic:

  1. if there is any issue with JSON parsing, try to extract a valid JSON substring
  2. if that also fails, just return an empty response and log it as ErrorType.INVALID_LLM_RESPONSE_ERROR (see the sketch below)
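A minimal, self-contained sketch of the suggested fallback. The extract_valid_json_substring helper here is a simplified stand-in (the codebase may already provide an equivalent utility); in the labeler itself, the failure case would be recorded as a LabelingError with ErrorType.INVALID_LLM_RESPONSE_ERROR, as in the diff above.

    import json
    import logging
    import re
    from typing import Any, Optional

    logger = logging.getLogger(__name__)


    def extract_valid_json_substring(text: str) -> Optional[str]:
        # Grab the outermost {...} span; a simplified stand-in for a real extractor.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        return match.group(0) if match else None


    def parse_json_response(text: str) -> Optional[dict]:
        """Return parsed JSON, or None (an 'empty' response) if parsing fails."""
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            # 1. On a JSON parsing issue, try to extract a valid JSON substring.
            candidate = extract_valid_json_substring(text)
            if candidate is not None:
                try:
                    return json.loads(candidate)
                except json.JSONDecodeError:
                    pass
            # 2. Otherwise log it and return an empty response; the caller would
            #    record this as ErrorType.INVALID_LLM_RESPONSE_ERROR.
            logger.error(f"Invalid JSON in LLM response: {text}")
            return None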

@Abhinav-Naikawadi merged commit ef27371 into main on Jun 12, 2024 (2 checks passed)
@Abhinav-Naikawadi deleted the better_errors branch on June 12, 2024 at 22:02