
Slightly improve error handling for external providers #220

Merged
igiloh-pinecone merged 18 commits into pinecone-io:main from bugfix/openai_errors on Dec 13, 2023

Conversation

igiloh-pinecone (Collaborator)

Solution

This is a very initial and partial solution for #183.
We will need to improve error handling much further.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Infrastructure change (CI configs, etc)
  • Non-code change (docs, etc)
  • None of the above: (explain here)

Catch specific OpenAI errors and re-raise with a clear message
…pecific module

In addition, the `transformers` module is very insistent, so I added their dedicated mechanism for silencing warnings
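
(For reference, a minimal sketch of that mechanism, using the logging utilities `transformers` documents for this purpose:)

```python
import transformers

# Raise the library's verbosity threshold so only errors are emitted;
# this is transformers' dedicated mechanism for silencing its warnings.
transformers.logging.set_verbosity_error()
```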
Instead, individual RecordEncoders and LLMs would need to raise their own errors
Catch the error in the RecordEncoder base class, then format it differently for each inheritor (sketched below)
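
(A minimal sketch of that catch-in-base-class pattern; this is illustrative, not the PR's exact code: the class and method names approximate Canopy's, and the error messages are made up:)

```python
from abc import ABC
import openai


class RecordEncoder(ABC):
    def encode_documents(self, documents):
        try:
            return self._encode_documents_batch(documents)
        except Exception as e:
            # The base class catches the provider error; each inheritor
            # formats its own user-facing message via _format_error().
            raise RuntimeError(self._format_error(e)) from e

    def _encode_documents_batch(self, documents):
        raise NotImplementedError

    def _format_error(self, err):
        return f"Failed to encode documents. Underlying error: {err}"


class OpenAIRecordEncoder(RecordEncoder):
    def _format_error(self, err):
        if isinstance(err, openai.RateLimitError):
            return ("Your OpenAI account seems to have reached its rate limit. "
                    "Check your plan and billing details.")
        return super()._format_error(err)
```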
miararoy (Contributor) left a comment:

Add a system test, even a minimal one. Overall I think this is pretty thorough as-is, but I understand the notion of this can be extended a lot.

src/canopy/knowledge_base/knowledge_base.py (review thread resolved)
1. Needed to change after the code was changed.
2. Added a few more test cases.
3. Added more assertions on the naive create().
The CLI shouldn't mention `delete_index()`
Sending just "hello" without any limitations can use many tokens
acatav (Contributor) left a comment:

LGTM

```python
async def _aencode_documents_batch(self,
                                   documents: List[KBDocChunk]
                                   ) -> List[KBEncodedDocChunk]:
    raise NotImplementedError

async def _aencode_queries_batch(self, queries: List[Query]) -> List[KBQuery]:
    raise NotImplementedError

def _format_error(self, err):
    ...  # body elided in the review excerpt; each inheritor formats its own message
```
Contributor: Maybe we should reuse this code in the LLM too? I think it can improve the experience.

igiloh-pinecone (Collaborator, author): Yeah, that's the plan. We'll do it as part of the planned BaseLLM refactor anyway.

```python
try:
    encoder = OpenAIEncoder(model_name, **kwargs)
except OpenAIError as e:
    raise RuntimeError(
        # Message elided in the review excerpt; illustrative completion below.
        # The PR re-raises the OpenAI failure with a clearer explanation.
        f"Failed to create OpenAIEncoder: {e}"
    ) from e
```
Contributor: Probably for the future, but I guess it's reasonable to have our own auth and rate-limit errors. This issue might occur for many of the services we are going to use, even Pinecone itself.

igiloh-pinecone (Collaborator, author): Yeah, that's definitely the plan!
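
(A minimal sketch of what such library-level errors might look like; the class names are hypothetical, not from this PR:)

```python
class ProviderError(RuntimeError):
    """Hypothetical base class for failures of external providers."""


class ProviderAuthError(ProviderError):
    """Raised when a provider (OpenAI, Pinecone, etc.) rejects the credentials."""


class ProviderRateLimitError(ProviderError):
    """Raised when a provider reports that a rate limit was exceeded."""
```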

src/canopy/llm/openai.py (outdated, review thread resolved)
src/canopy_cli/cli.py (outdated, review thread resolved)
igiloh-pinecone added this pull request to the merge queue on Dec 13, 2023.
Merged via the queue into pinecone-io:main with commit bd1c039 on Dec 13, 2023; 10 checks passed.
igiloh-pinecone deleted the bugfix/openai_errors branch on December 13, 2023 at 15:35.