Gemini (beta)

Gemini is a JavaScript project, named after its namesake LLM, that uses Google's Generative AI to generate content grounded in input context. It uses the @google/generative-ai package to interact with the Google Generative AI API. The inspiration for this project came from the dearth of online content on building a Gemini RAG pipeline in JavaScript. This project intends to fill that gap!

Recent Changes

Updated main.js

  • Introduced a new function generateContextEmbeddings(texts, fileName) that generates embeddings for a given array of texts using the embedding-001 model from genAI. The generated embeddings are stored in an array of objects, each containing the original text and its corresponding embedding (see the sketch after this list).
  • The parsePdf() function is now used to extract the texts that are to be embedded.
  • Two new variables, contextFile and storedFile, have been added: contextFile is the name of the context file to be embedded, and storedFile is the name of the stored context file to answer questions from.
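
A minimal sketch of how these pieces could fit together, assuming a parsePdf() helper built on pdf-parse and the embedContent method of the embedding-001 model (the exact implementation in main.js may differ):

```js
const fs = require("fs");
const pdf = require("pdf-parse");
const { GoogleGenerativeAI } = require("@google/generative-ai");
require("dotenv").config();

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// Split a PDF into text chunks (here: one chunk per blank-line-separated block).
async function parsePdf(fileName) {
  const data = await pdf(fs.readFileSync(fileName));
  return data.text.split(/\n\s*\n/).filter((t) => t.trim().length > 0);
}

// Embed each chunk with embedding-001 and save { text, embedding } pairs.
async function generateContextEmbeddings(texts, fileName) {
  const model = genAI.getGenerativeModel({ model: "embedding-001" });
  const embedded = [];
  for (const text of texts) {
    const result = await model.embedContent(text);
    embedded.push({ text, embedding: result.embedding.values });
  }
  fs.writeFileSync(`contexts/${fileName}.json`, JSON.stringify(embedded));
  return embedded;
}
```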

Contexts Folder Initialization

  • A new folder named contexts has been initialized. This folder stores the context files generated by the generateContextEmbeddings function; the stored embeddings are later loaded from it to provide context for a question.
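
Given the structure described above, a stored context file in contexts/ would look roughly like the following (the passage texts and embedding values are illustrative placeholders; real embedding vectors are much longer):

```json
[
  { "text": "First passage extracted from the PDF...", "embedding": [0.0123, -0.0456, 0.0789] },
  { "text": "Second passage extracted from the PDF...", "embedding": [0.0021, 0.0314, -0.0072] }
]
```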

Setup

  1. Clone the repository.
  2. Install the dependencies with npm install.
  3. Set up your Google Generative AI API key in a .env file in the root of the project:
GEMINI_API_KEY=your_api_key_here

Please replace your_api_key_here with your actual API key.

Usage

  • Place the PDF file you want to generate context embeddings from in the root directory of the project.
  • Set contextFile in main.js to the name of the context file to be embedded.
  • Set storedFile in main.js to the name of the stored context file to answer questions from.
  • Run the main script from the terminal with node main.js (see the example below).
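
For example, with a file named report.pdf placed in the project root (the file names below are hypothetical), the two variables in main.js might be set as follows before running node main.js:

```js
// In main.js — hypothetical file names; adjust to your own PDF and stored context file.
const contextFile = "report.pdf";   // PDF in the project root to embed
const storedFile = "report.json";   // stored context file in contexts/ to answer questions from
```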

Functionality

The main.js script performs the following steps:

  1. Configures the generation parameters.
  2. Uses the embedding-001 model to generate embeddings for a set of texts parsed from a PDF file.
  3. Stores the generated embeddings in the contexts folder.
  4. Loads the stored embeddings from the contexts folder and embeds the question.
  5. Uses the gemini-pro model to find the best passage that answers the question from the stored embeddings (see the sketch after this list).
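
A minimal sketch of steps 4 and 5, assuming cosine similarity computed with mathjs and a simple prompt to gemini-pro (the function and variable names here are illustrative, not necessarily those used in main.js):

```js
const fs = require("fs");
const { dot, norm } = require("mathjs");
const { GoogleGenerativeAI } = require("@google/generative-ai");
require("dotenv").config();

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const cosineSimilarity = (a, b) => dot(a, b) / (norm(a) * norm(b));

async function answerQuestion(storedFile, question) {
  // Load the stored { text, embedding } pairs and embed the question.
  const stored = JSON.parse(fs.readFileSync(`contexts/${storedFile}`, "utf8"));
  const embedder = genAI.getGenerativeModel({ model: "embedding-001" });
  const { embedding } = await embedder.embedContent(question);

  // Pick the passage whose embedding is closest to the question's.
  const best = stored.reduce((a, b) =>
    cosineSimilarity(a.embedding, embedding.values) >
    cosineSimilarity(b.embedding, embedding.values) ? a : b
  );

  // Ask gemini-pro to answer using only that passage as context.
  const model = genAI.getGenerativeModel({ model: "gemini-pro" });
  const result = await model.generateContent(
    `Answer the question using only this passage:\n${best.text}\n\nQuestion: ${question}`
  );
  return result.response.text();
}
```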

Dependencies

  • @google/generative-ai: To interact with the Google Generative AI API.
  • dotenv: To load environment variables from a .env file.
  • fs (Node.js built-in): To read and write files.
  • mathjs: To perform mathematical operations.
  • pdf-parse: To parse PDF files.

Contributing

Contributions are welcome. Please open an issue or submit a pull request.

License

This project is licensed under the ISC license.
