
Add check for model's max token length #25

@heyodai

Description


There's a Python library for counting tokens, so we can check the diff's token count before calling the API. Surfacing a warning would be better than the raw error below.

Example:

(base) odai:magic-commit/ (24-add-readmemd-description-on-pypi✗) $ magic-commit    [16:23:54]
error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, your messages resulted in 5925 tokens. Please reduce the length of the messages." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/bin/magic-commit", line 33, in <module>
    sys.exit(load_entry_point('magic-commit', 'console_scripts', 'magic-commit')())
  File "/Users/odai/magic-commit/magic_commit/__main__.py", line 54, in main
    results = run_magic_commit(directory=directory, api_key=key, model=model)
  File "/Users/odai/magic-commit/magic_commit/magic_commit.py", line 201, in run_magic_commit
    commit_message = generate_commit_message(diff, api_key, model)
  File "/Users/odai/magic-commit/magic_commit/magic_commit.py", line 150, in generate_commit_message
    response = openai.ChatCompletion.create(model=model, messages=messages)
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.9/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.9/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.9/site-packages/openai/api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5925 tokens. Please reduce the length of the messages.
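
A minimal sketch of the proposed check, assuming the token-counting library is OpenAI's tiktoken (the issue doesn't name it) and using a hypothetical `check_token_length` helper with an assumed per-model limit table; the 4097 figure comes from the error above:

```python
import warnings

# Assumed context limits; 4097 for gpt-3.5-turbo matches the error above.
# Verify against OpenAI's current model documentation before relying on these.
MAX_TOKENS = {"gpt-3.5-turbo": 4097, "gpt-4": 8192}


def count_tokens(text: str, model: str) -> int:
    """Count tokens with tiktoken if installed, else estimate (~4 chars/token)."""
    try:
        import tiktoken
        encoding = tiktoken.encoding_for_model(model)
        return len(encoding.encode(text))
    except ImportError:
        # Rough fallback so the check still works without tiktoken.
        return len(text) // 4


def check_token_length(diff: str, model: str = "gpt-3.5-turbo") -> bool:
    """Warn, rather than raise, when the diff likely exceeds the model's context."""
    limit = MAX_TOKENS.get(model)
    if limit is None:
        return True  # unknown model: skip the check
    tokens = count_tokens(diff, model)
    if tokens > limit:
        warnings.warn(
            f"Diff is ~{tokens} tokens but {model} accepts at most {limit}; "
            "the API call will likely fail. Consider staging fewer changes."
        )
        return False
    return True
```

`run_magic_commit` could call this before `generate_commit_message` and print the warning instead of letting `openai.error.InvalidRequestError` propagate as a traceback.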
