Hi,

Thanks for this useful framework!

There is an issue when running multi-agent commands that use the LMStudio API asynchronously. For instance, when running `PYTHONPATH=. python experiments/run_mmlu.py --num-truthful-agents=3 --mode=OptimizedSwarm`, the outputs from LMStudio appear to be assigned not to the question they were generated for but to random input questions.

I have not tested this with the OpenAI API, so I cannot say whether it is specifically an incompatibility with LMStudio.

My current workaround is to avoid the asynchronous implementation entirely, but that is frustratingly slow. Have you encountered this? Is there a way to keep using your asynchronous implementation without running into this issue?

Thanks so much!
Hi @mohannahoveyda! Thank you for your interest in GPTSwarm. Let me take a look at why the requests get mixed up. Regardless of the cause of this bug, though, your temporary workaround is fine to use: there is no benefit to running the requests in parallel if they all go to your local (presumably a Mac) GPU anyway, and a Mac's GPU is too slow to run the full-scale experiments that we did.
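If the mismatch is caused by LMStudio mishandling concurrent requests, one possible middle ground is to keep the asynchronous code path but cap concurrency at one request in flight. The sketch below is a generic, hypothetical illustration (it is not GPTSwarm code, and `query_model` is a stand-in for a real LMStudio/OpenAI chat-completion call): an `asyncio.Semaphore(1)` serializes the actual requests, while `asyncio.gather` guarantees that result order matches input order.

```python
import asyncio

async def query_model(question: str, sema: asyncio.Semaphore) -> str:
    # Placeholder for a real LMStudio/OpenAI chat-completion request.
    async with sema:            # at most one request in flight at a time
        await asyncio.sleep(0)  # stand-in for network I/O
        return f"answer to {question}"

async def run_batch(questions):
    # A semaphore of size 1 keeps the async interface but serializes the
    # underlying requests, so a server that confuses concurrent requests
    # only ever sees one at once.
    sema = asyncio.Semaphore(1)
    # asyncio.gather preserves argument order: results[i] corresponds
    # to questions[i] even when tasks complete out of order.
    return await asyncio.gather(*(query_model(q, sema) for q in questions))

results = asyncio.run(run_batch(["q1", "q2", "q3"]))
```

Raising the semaphore size later (e.g. to 4) restores parallelism once the server-side mix-up is fixed, without changing the calling code.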