Reduced dependency size #654
I super agree with this. I imagine something like `pip install open-interpreter[core]`, which wouldn't even include the terminal interface. Would that work? Is openai a heavy dependency? I imagined it would still include that so you could just enter your API key and have it work as a Python generator, but we could do it without it and make you bring your own LLM.

On Oct 18, 2023, Alita Moore wrote:
Is your feature request related to a problem? Please describe.
I want to host this on a serverless function, but the dependencies are too large. The biggest offenders seem to be torch and semgrep. Are these actually required for this package to operate?
Describe the solution you'd like
A much smaller package size: something that focuses on brevity and isolates the core functionality, using a lightweight proxy for LLM requests to avoid depending on heavy libraries like openai.
Describe alternatives you've considered
Using serverless-python and unzipping, but this is extremely slow and has other problems, such as sqlite incompatibility with chroma.
Additional context
No response
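The `[core]` split suggested above could be expressed with setuptools-style extras. A minimal sketch, assuming illustrative package names and groupings (the project's actual dependency layout may differ):

```python
# Hypothetical split of dependencies into optional extras.
# All package names and groupings here are illustrative assumptions,
# not the project's real dependency tree.
install_requires = ["litellm"]  # lean core: talk to an LLM, nothing else

extras_require = {
    "terminal": ["rich"],    # interactive terminal interface
    "safety": ["semgrep"],   # static scanning of generated code
    "local": ["torch"],      # local model inference
}

# Convenience extra that pulls in everything.
extras_require["all"] = sorted(
    {dep for deps in extras_require.values() for dep in deps}
)
print(extras_require["all"])
```

Users would then run `pip install open-interpreter` for the lean core, and opt into heavy dependencies explicitly, e.g. `pip install "open-interpreter[local]"`.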
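On the "lightweight proxy for LLM requests" point in the issue above, a chat call needs nothing beyond the standard library. A minimal sketch, assuming an OpenAI-compatible endpoint (the URL, model name, and response shape follow the usual chat-completions conventions; none of this is project code):

```python
import json
import urllib.request

CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, model="gpt-3.5-turbo"):
    # Standard chat-completions request body.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, api_key, model="gpt-3.5-turbo", url=CHAT_URL):
    # POST the request with stdlib urllib; no openai SDK required.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping the `url` lets the same function talk to any OpenAI-compatible server, which is what "bring your own LLM" would amount to.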
Well, the dependency weight depends on what packages those packages install, and so on. I'm not sure exactly which packages depend on torch, but whichever ones do are what's causing problems. Making the science packages optional (pandas and numpy are installed) would also be interesting. Note that I think #655 is a better solution: perhaps this package should focus on handling the response from the LLM / integrating with multiple LLMs, and offload execution to another project that runs it securely.
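To answer "which packages depend on torch" for an installed environment, here is a quick sketch using only the standard library (`importlib.metadata`, Python 3.8+); the requirement-string parsing is deliberately rough:

```python
import re
from importlib import metadata

def req_name(requirement):
    # Crude parse: the distribution name is everything before the first
    # space, semicolon, bracket, parenthesis, or version operator,
    # e.g. "torch (>=1.0)" -> "torch", "numpy>=1.21" -> "numpy".
    return re.split(r"[\s;<>=!(\[]", requirement, maxsplit=1)[0]

def reverse_deps(target):
    # List installed distributions that declare `target` as a requirement.
    dependents = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            if req_name(req).lower() == target.lower():
                dependents.add(dist.metadata["Name"])
    return sorted(dependents)

print(reverse_deps("torch"))
```

Running this in the project's virtualenv would show exactly which direct dependents pull torch in, and therefore which extras group it belongs behind.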
See my reply in #655.
Closing this stale issue. Please create a new issue if the problem is not resolved or explained in the documentation. Thanks!