
Implementing Retrieval-Augmented Generation (RAG) using GPT-3.5-Turbo as the LLM and LangChain to simplify the implementation; the data is fed in as a plain Python list to make the example easier to understand.


RagImplementation

How to Run

  • Open the attached Google Colab file.
  • Set your OpenAI API key under the name OPENAI_API, your Pinecone API key under the name PINECONE_API_KEY, and your Pinecone environment under the name PINECONE_ENV.
  • Run the Colab file to see the RAG benefit: answers are grounded in the extra data provided in the list named texts.
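In the notebook, LangChain embeds each entry of the `texts` list, stores the vectors in a Pinecone index, and at query time retrieves the closest entries before asking GPT-3.5-Turbo. As a dependency-free illustration of that same retrieve-then-generate flow, here is a minimal sketch that runs without any API keys; the sample `texts` list and the word-overlap retriever below are stand-ins for the embeddings and Pinecone index used in the Colab:

```python
# Minimal RAG sketch: retrieve the most relevant entries from `texts`,
# then prepend them to the prompt sent to the LLM. In the actual Colab,
# retrieval uses OpenAI embeddings in a Pinecone index and generation
# uses GPT-3.5-Turbo via LangChain; here the retriever is a simple
# word-overlap score so the example is self-contained.

texts = [
    "Pinecone is a managed vector database used for similarity search.",
    "LangChain provides chains that combine retrievers with LLMs.",
    "GPT-3.5-Turbo is an OpenAI chat model.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by how many words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# The prompt below is what a RAG chain would send to the chat model.
prompt = build_prompt("What is Pinecone used for?", texts)
print(prompt)
```

In the notebook the same pattern is handled for you by a LangChain retrieval chain; this sketch only makes the two steps (retrieve, then augment the prompt) explicit.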
