
💡 Support Local Embeddings #347

Closed · 1 task done
MarkSchmidty opened this issue Apr 6, 2023 · 4 comments

Labels: enhancement, local llm, Stale

Comments

@MarkSchmidty commented Apr 6, 2023

Duplicates

  • I have searched the existing issues

Summary 💡

Local Models (LLaMA & its finetunes) now work in a fork of Auto-GPT, including with Pinecone Embeddings. See #25 (comment)

Local models and embeddings offer better privacy and lower costs, and they enable new uses such as running Auto-GPT experiments on private or air-gapped networks. To get these benefits, we should add local (offline) embedding storage and recall to Auto-GPT.

Examples 🌈

A version of ooba's text-generation-webui, wawawario2/long_term_memory, has already done this using zarr and NumPy. See wawawario2/long_term_memory#how-it-works-behind-the-scenes


Although the Auto-GPT fork uses ooba's webui API for local models, the long_term_memory project is tightly coupled to ooba's UI, so it serves only as a reference. We would need to build a similar setup inside Auto-GPT itself.
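For illustration, here is a minimal sketch of what local embedding storage and recall could look like using zarr and NumPy, the same libraries long_term_memory builds on. Everything here (the LocalMemory class, the embedding width, the file path) is a hypothetical example, not code from Auto-GPT or long_term_memory:

```python
# Hypothetical sketch of local embedding storage/recall with zarr + NumPy.
import numpy as np
import zarr

EMBED_DIM = 1536  # assumed embedding width; depends on the local model used


class LocalMemory:
    def __init__(self, path="memory.zarr"):
        # Persistent on-disk array of embeddings, grown one row per memory.
        self.store = zarr.open(
            path, mode="a",
            shape=(0, EMBED_DIM), chunks=(1024, EMBED_DIM), dtype="f4",
        )
        self.texts = []  # a real implementation would persist texts on disk too

    def add(self, text, embedding):
        # Append one embedding row and remember the text it came from.
        vec = np.asarray(embedding, dtype="f4").reshape(1, EMBED_DIM)
        self.store.append(vec)
        self.texts.append(text)

    def recall(self, query_embedding, k=5):
        # Return the k stored texts most similar to the query embedding.
        if self.store.shape[0] == 0:
            return []
        mat = self.store[:]  # load stored vectors as a NumPy array
        q = np.asarray(query_embedding, dtype="f4")
        # Cosine similarity between the query and every stored embedding.
        sims = (mat @ q) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-8)
        top = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in top]
```

Because everything lives in a local zarr directory, this works fully offline and air-gapped; the trade-off is that brute-force cosine search scales linearly with the number of stored memories.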

Motivation 🔦

  1. Better Privacy: Local embeddings keep user data on their devices or networks, avoiding the risk of online data breaches.
  2. Lower Costs: Users save money on cloud storage, bandwidth, and processing power when they use local embeddings.
  3. Offline Use: Auto-GPT can work in offline environments or areas with limited internet, like secure facilities or remote research stations.
  4. Customization: Users can create and manage their own embeddings for better model performance and results.
  5. Faster Response: Local embeddings can speed up response times since data doesn't need to travel to remote servers.

By adding local embeddings storage and recall to Auto-GPT, users get more control, flexibility, and benefits like privacy, cost savings, and accessibility.

@MarkSchmidty (Author)

This feature is blocking Fully Air-Gapped Offline Auto-GPT

@MarkSchmidty changed the title from "Support Local Embeddings" to "💡Support Local Embeddings" on Apr 6, 2023
@slavakurilyak (Contributor)

Related issue #273

@9cento commented Apr 8, 2023

+1

@github-actions (bot)

This issue was closed automatically because it has been stale for 10 days with no activity.

@github-actions closed this as not planned on Sep 17, 2023