Description
What happened?
[code]
task = "Who was the Miami Heat player with the highest points in the 2006-2007 season, and what was the percentage change in his total rebounds between the 2007-2008 and 2008-2009 seasons?"
# Use asyncio.run(...) if you are running this in a script.
await Console(team.run_stream(task=task))
[error]
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/openai/_base_client.py", line 1666, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/openai/_base_client.py", line 1634, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Your account crobsbebi7sb70r8ack0 request reached max request: 3, please try again after 1 seconds', 'type': 'rate_limit_reached_error'}}
---------- Summary ----------
Number of messages: 4
Finish reason: None
Total prompt tokens: 379
Total completion tokens: 114
Duration: 6.39 seconds
What did you expect to happen?
The task should complete without raising. Since the 429 response says to retry after 1 second, I expected the client to back off and retry the request instead of surfacing `openai.RateLimitError`.
How can we reproduce it (as minimally and precisely as possible)?
Run an AgentChat team with the Kimi free API, which (per the error message) allows at most 3 requests before rate limiting. A streaming multi-agent task issues model calls quickly enough to hit the 429 limit.
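As a caller-side workaround until the client's retry behavior handles this, the `run_stream` call can be wrapped in an exponential-backoff retry. This is a minimal sketch; `with_backoff` and its parameters are hypothetical helpers, not part of AutoGen or the OpenAI SDK:

```python
import asyncio
import random


async def with_backoff(fn, *, retries=5, base_delay=1.0,
                       is_rate_limit=lambda e: True):
    """Call the async `fn`, retrying with exponential backoff on rate limits.

    `is_rate_limit` decides which exceptions are retryable; by default every
    exception is retried up to `retries` attempts. These defaults are
    illustrative assumptions, not library behavior.
    """
    for attempt in range(retries):
        try:
            return await fn()
        except Exception as e:
            if not is_rate_limit(e) or attempt == retries - 1:
                raise
            # Sleep 1s, 2s, 4s, ... (plus small jitter) before retrying.
            await asyncio.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

In the repro above one would then write something like `await with_backoff(lambda: Console(team.run_stream(task=task)))`, with `is_rate_limit` checking for `openai.RateLimitError`. Note this restarts the whole run rather than the single failed request, so a retry policy inside the model client would still be preferable.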
AutoGen version
0.4.11
Which package was this bug in
AgentChat
Model used
kimi
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response