[Requests] Docker and API Keys #264
We could technically export the code to be rerun elsewhere, but because of the interactive nature of Agents, every execution would have to happen in a sandbox. We are aware the current setup isn't perfect, and it is on the roadmap. Could you point to the specific resources in Llama-Index and Autogen that could serve as inspiration? We would be happy to explore them, and thanks for pointing these out!
In Autogen there is a flag param in the agent's code execution config that routes generated code into Docker: https://microsoft.github.io/autogen/blog/2024/01/23/Code-execution-in-docker/ I'm stuck on my phone and not getting the same source-code search hits I did before, so sorry for the vagueness.
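For concreteness, here is a minimal sketch of that flag based on the linked Autogen post; the agent name and `work_dir` value are arbitrary choices for illustration:

```python
# Minimal sketch of Autogen's Docker-backed code execution (per the
# linked blog post): generated code runs inside a container instead of
# directly on the host.
from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={
        "work_dir": "coding",  # host directory mounted into the container
        "use_docker": True,    # the flag that routes execution to Docker
    },
)
```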
This would be great to have.
OpenDevin has an action loop for coding that runs completely Dockerized. Might be worth borrowing some of the methods there! https://github.com/OpenDevin/OpenDevin
That looks pretty cool. Though would it be viable for a service?
There was some speculation that GPT Code Interpreter was implemented by spinning up hosted Docker sandboxes. You could do it with some kind of Kubernetes setup for bin-packing containers, plugged into a hosted dev environment offering like Replit, Gitpod, etc.
Could you provide internal methods that utilize Docker, like Autogen and other LLM tools that execute code do? Having to figure out how to wrap LaVague externally does not make me confident about securing code execution.
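For illustration, here is a minimal sketch of what such an internal method could look like, using the `docker` Python SDK; `run_in_sandbox` is a hypothetical helper, not an existing LaVague API:

```python
# Hypothetical sandboxed-execution helper (not part of LaVague today),
# sketched with the `docker` Python SDK (pip install docker).
import docker

def run_in_sandbox(code: str, image: str = "python:3.11-slim") -> str:
    """Run generated code in a throwaway container and return its output."""
    client = docker.from_env()
    output = client.containers.run(
        image,
        command=["python", "-c", code],
        remove=True,            # delete the container once it exits
        network_disabled=True,  # no network access from inside the sandbox
        mem_limit="256m",       # cap memory so runaway code is contained
    )
    return output.decode()
```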
API keys are currently provided only through environment variables, which is not ideal for parallel services where users bring their own keys. Please provide a way to pass the OpenAI API key as a parameter, like in Llama-Index itself.
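For reference, Llama-Index already accepts the key as a constructor parameter, roughly like this (import path per recent llama-index versions; the model name is just an example):

```python
# Llama-Index style: the key is passed per instance rather than read from
# the environment, so parallel users can each bring their own key.
from llama_index.llms.openai import OpenAI

user_supplied_key = "sk-..."  # e.g. taken from the request, not os.environ
llm = OpenAI(model="gpt-4o-mini", api_key=user_supplied_key)
```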