
Add llama2-uncensored LLM #1096

Closed · wants to merge 8 commits

Conversation

@ErdemOzgen (Contributor) commented Dec 3, 2023

  • Added Ollama via a Docker container.
  • GPTVulnerabilityReportGenerator: retrieves an OpenAI API key and sets the model to 'gpt-3.5-turbo'. If the API key is not available, it initializes an Ollama instance for local use (see the sketch below).
  • GPTAttackSuggestionGenerator: applies the same fallback as GPTVulnerabilityReportGenerator; it retrieves an OpenAI API key and sets the model to 'gpt-3.5-turbo', or initializes an Ollama instance if the key is not available.
  • Updated the `make up` and `make down` targets in the Makefile.
  • After the Docker container is up, it runs a command to pull the llama2-uncensored LLM model and serve it.

This PR fixes #1035.
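
Below is a minimal sketch of the fallback logic described above, assuming the pre-1.0 `openai` Python client and Ollama's REST `/api/generate` endpoint. The `OLLAMA_URL` environment variable, the method name `get_report`, and the class internals are illustrative assumptions, not necessarily the exact identifiers in this PR:

```python
import os

import openai    # pre-1.0 openai client, matching the era of this PR
import requests

# Assumed env var; inside docker-compose the Ollama service is reachable
# at its service name on Ollama's default port 11434.
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://ollama:11434")


class GPTVulnerabilityReportGenerator:
    def __init__(self):
        api_key = os.environ.get("OPENAI_API_KEY")
        if api_key:
            # API key available: use the hosted gpt-3.5-turbo model.
            openai.api_key = api_key
            self.model = "gpt-3.5-turbo"
            self.use_ollama = False
        else:
            # No key: fall back to the local Ollama instance serving the
            # llama2-uncensored model pulled at container startup.
            self.model = "llama2-uncensored"
            self.use_ollama = True

    def get_report(self, prompt: str) -> str:
        if self.use_ollama:
            # Ollama's generate endpoint; stream=False returns one JSON object.
            resp = requests.post(
                f"{OLLAMA_URL}/api/generate",
                json={"model": self.model, "prompt": prompt, "stream": False},
            )
            return resp.json()["response"]
        completion = openai.ChatCompletion.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content
```

GPTAttackSuggestionGenerator would follow the same pattern with a different prompt.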

@ErdemOzgen ErdemOzgen changed the title Add llama2-uncensored LLM Add llama2-uncensored LLM fixes #1035 Dec 3, 2023
@AnonymousWP AnonymousWP changed the title Add llama2-uncensored LLM fixes #1035 Add llama2-uncensored LLM Dec 3, 2023
@yogeshojha (Owner)

Wow, excited to test this out!

@SubGlitch1 (Contributor)

Respectfully, it would be easier to integrate #1114 instead, as that would give users the option to host any model themselves or to connect to a server that does.

@ErdemOzgen (Contributor, Author)

@SubGlitch1, the proposed PR updates the URLs for the LLM model. It's worth noting that responses from ChatGPT and local LLMs may differ slightly, which could lead to parsing errors in the code. That said, modifying the model URL in this PR is feasible.

While people might host and serve LLMs on different machines, it was a deliberate design choice on my side to require local LLM configuration from sources like environment variables or a config file. These aren't included in the initial proof of concept, but we plan to integrate environment variables or a config file once the first PoC is successfully implemented (a sketch of what that could look like follows below).
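
For illustration, a sketch of what that environment-driven configuration could look like; the variable names are assumptions, not settings that exist in the current PoC:

```python
import os

# Hypothetical settings: choose the backend and endpoint from the environment,
# so hosting the LLM on a different machine only requires new values.
USE_OLLAMA = os.environ.get("USE_OLLAMA", "1") == "1"
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://ollama:11434")
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "llama2-uncensored")
```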

@SubGlitch1 (Contributor)

@ErdemOzgen

You are absolutely right that responses from custom APIs might differ. That is why I proposed adding support for text-generation-webui, since that project's API follows the same scheme as OpenAI's (see the sketch below). Either way, the options don't have to rule each other out; an integrated LLM is also useful. Good luck!
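
A sketch of that idea, again assuming the pre-1.0 `openai` client; the port and path are taken from text-generation-webui's OpenAI-extension defaults and should be treated as assumptions:

```python
import openai

openai.api_key = "sk-dummy"                   # a local server ignores the key
openai.api_base = "http://localhost:5000/v1"  # OpenAI-compatible endpoint (assumed default)

resp = openai.ChatCompletion.create(
    model="local-model",  # whatever model the server has loaded
    messages=[{"role": "user", "content": "Suggest attack paths for these findings."}],
)
print(resp.choices[0].message.content)
```

Because only the base URL changes, the existing response-parsing code stays identical for both backends.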

@psyray (Collaborator) commented Dec 8, 2023

I'm closing this one in favor of #1116, because that one is based on the master branch of @ErdemOzgen's fork.
#1035 (comment)

@psyray closed this Dec 8, 2023
Successfully merging this pull request may close these issues.

feat: GPT4All, Open-source large language models that run locally on your CPU and nearly any GPU