
TogetherAIException during evaluate-utility #24

@GQYZ

Description


Hi, I recently had to rewrite my defense to remove the instructions in the LLM filter that asked it to generate a response when the model output from the Python filter is empty, due to question #20.

After the rewrite, evaluate-utility with GPT-3.5 works fine: {"utility":0.649,"threshold":0.483,"passed":true,"additional_info":{"avg_share_of_failed_queries":0.0}}

However, I ran evaluate-utility with Llama twice: the first run had a 0.2 failure rate {"utility":0.459,"threshold":0.398,"passed":false,"additional_info":{"avg_share_of_failed_queries":0.2}}
and the second raised a TogetherAIException:
{"detail":["OpenAI API error: TogetherAIException - {\"model\": \"togethercomputer/llama-2-70b-chat\", \"error\": {\"error\": \"Input validation error: `inputs` tokens + `max_new_tokens` must be <= 4097. Given: 4377 `inputs` tokens and 300 `max_new_tokens`\", \"error_type\": \"validation\", \"result_type\": \"language-model-inference\", \"choices\": []}}. If you have a team budget,note that your team budget has NOT been consumed."]}

I have never seen this error prior to today. Is there something I should do to resolve this? The defense I have been encountering these issues with is 65972558b5ba321c2227a5bf.
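For reference, the validation error above encodes a hard constraint on the request: the number of input tokens plus `max_new_tokens` must not exceed the model's limit (4097 in the error message, with 4377 input tokens and 300 new tokens requested). A minimal sketch of that check, assuming the limit from the error message — the function names here are hypothetical, not part of the evaluation tooling:

```python
# Limit taken from the error message: `inputs` tokens + `max_new_tokens` <= 4097.
CONTEXT_LIMIT = 4097


def fits_context(input_tokens: int, max_new_tokens: int,
                 limit: int = CONTEXT_LIMIT) -> bool:
    """Return True if the request stays within the model's context window."""
    return input_tokens + max_new_tokens <= limit


def max_allowed_new_tokens(input_tokens: int,
                           limit: int = CONTEXT_LIMIT) -> int:
    """Largest max_new_tokens that would still pass validation (0 if none)."""
    return max(0, limit - input_tokens)
```

With the numbers from the error, `fits_context(4377, 300)` is False and `max_allowed_new_tokens(4377)` is 0, i.e. the prompt alone already overflows the window — so either the prompt (defense plus query) would need to be shortened, or the failure handled on the evaluation side.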
