Using LiteLLM? #1
Comments
It's really good! It works flawlessly with my CUDA 12 RTX 3080 in WSL2! I use all the models that support cuBLAS and NVIDIA, and they are great. Thanks! If you want to do anything, go check out AGiXT! I think they could use LiteLLM to a great extent. As a matter of fact, this repo is only ever going to be an extension to the logic of that app!

I think that artificial generative intelligence (AGI, get it?) is already possible via the 'agent' concept popularized by BabyAGI. My repo is aimed at providing maximum computational throughput for consumers who want to use AGI but don't want to use credit cards and APIs, because it's really not a good deal as a consumer - 'the more you buy, the more you save', as Jensen from NVIDIA says. I think the fairly insurmountable costs of API usage in the for-profit space scare most people away from this 'dumb' artificial generative intelligence. Interestingly, human thought is actually made up of a significant amount of hallucination, too... That is to say: hallucination in large foundational models is not 'bad' once you consider 'agentic cognition' expressed in the time dimension - a hallucination is simply another hypothesis to test.

A couple of things I'm researching with my repos:
I think that 'modeling' synthetic/artificial 'cognition' after what we know about human 'cognition' will aid us in understanding each other - ourselves and our creations. For example, I think that "behavior" is an extremely rich area for AGI (generative) to self-analyze, just like it is for us. Another is the concept of subconscious and conscious thought, and even sleeping (dreaming) and cyclic behavior. Anyways, I hope you check out that repo, it's so, so cool! And nice work on LiteLLM, I'll definitely tell people about it.
ChatALL would be another project where you could really make an impact! It's a great app, but it could use LiteLLM very effectively, I think. It's actually one of my favorite ways to use LLMs (single query, multiple models). I'm just a 2nd-year, self-taught video game enjoyer, hehe. My repos are really only ever for education.
Whoa @derp-dev, you used LiteLLM locally? What for?
@krrishdholakia Just LocalAI (API) in a Docker container with local networking only, so far, so nothing production-ready or particularly interesting, haha. LocalAI (this one, there are two: https://github.com/go-skynet/LocalAI) seems to work well with LiteLLM. I haven't gotten the oobabooga text-generation-webui API to work too well (just in general, I haven't tried it with LiteLLM), but that would probably be my next experiment. I'll let you know if I hit any quandaries, thanks!

I hope to upload an 'everything' local Docker/Kubernetes API which queries all possible options that a consumer has access to. My scope is extremely limited, so thanks to your app I am now diving into the browserless and headless tech I'm going to need (inspiration being ChatALL) to wrap all of the free 'webapp' chat endpoints into a REST API (+ LiteLLM to handle the local model APIs and consumer APIs).
Looks like LocalAI itself is a replacement for OpenAI - how are you using LiteLLM with it? (A code snippet would be great.) It would be an awesome tutorial to put out.
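For reference, a minimal sketch of what wiring the two together could look like, assuming `litellm` is installed (`pip install litellm`) and a LocalAI container is serving its OpenAI-compatible API at `http://localhost:8080`. The model name `ggml-gpt4all-j` is just a placeholder for whatever model your LocalAI instance has loaded:

```python
# Sketch only: assumes a LocalAI container (e.g. its official docker image)
# is exposing an OpenAI-compatible API at http://localhost:8080, and that
# litellm is installed (`pip install litellm`).
LOCALAI_BASE = "http://localhost:8080"


def ask_localai(prompt: str, model: str = "ggml-gpt4all-j") -> str:
    """Send a single chat prompt to a local LocalAI server via LiteLLM."""
    # Imported lazily so the module loads even without litellm installed.
    from litellm import completion

    # The "openai/" prefix tells LiteLLM to use the OpenAI wire format,
    # and api_base redirects the request to the local server. LocalAI
    # ignores the API key, but the client expects one to be present.
    response = completion(
        model=f"openai/{model}",
        messages=[{"role": "user", "content": prompt}],
        api_base=LOCALAI_BASE,
        api_key="not-needed",
    )
    return response["choices"][0]["message"]["content"]
```

With the server running, `ask_localai("Say hello in one short sentence.")` should return the model's reply as a plain string.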
Hi @derp-dev,
Saw you had litellm in your requirements.txt - that's awesome!
I'm the maintainer of LiteLLM. How can I make it better for you?