Jarvis

Using Facebook's Llama to build myself a versatile set of AI-powered tools.

This repo also serves as a testbed for various frameworks I want to experiment with.

The following features have been implemented:

  • Simple ChatGPT style chatbot

  • Question Answering from given data sources (Web links, PDFs, etc.)

  • Summarization of given text

  • gRPC server to allow calls from other web frameworks such as Axum, Chi, etc.

  • Code generation like GitHub Copilot (with IDE integration for Vim, VSCode, etc.)

The following features are planned:

  • A service that provides multi-user access (still under consideration)

Table of contents

  1. Jarvis
    1. Notes
    2. Stack
    3. Setup
      1. Nix Setup
    4. Usage
      1. Running as server
      2. Running as CLI
      3. Running the Web App
    5. Development
      1. Running Codegen

Notes

Notes.md contains my notes on the project.

Stack

Setup

Ensure you have the following installed:

Run the following commands:
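
Create a virtual environment if one does not exist yet (an assumed step, using the standard venv module; the directory name venv is inferred from the activation command below)

python -m venv venv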

Activate virtual environment

source venv/bin/activate

Note

Depending on your shell, use the corresponding activate script in the venv/bin directory (e.g., venv/bin/activate.fish for the fish shell).

Make the shell scripts executable

chmod +x model/run.sh

Nix Setup

On Nix systems, you can use the shell.nix file to set up the environment. This resolves certain issues with native dependencies and provides the necessary tooling.

nix-shell

Install dependencies

pip install -r requirements.txt

Usage

Activate virtual environment

source venv/bin/activate

Running as server

cd model
uvicorn server:app --reload

Visit http://localhost:8000/docs to view the API documentation.
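
As a quick sanity check, the running server can also be called from Python. This is only a minimal sketch: the endpoint path (/chat) and request body shown here are assumptions, not the actual routes defined in server.py, so consult http://localhost:8000/docs for the real schema.

import requests

# Hypothetical endpoint and payload -- adjust to match the routes listed at /docs.
response = requests.post(
    "http://localhost:8000/chat",
    json={"prompt": "Summarize what Jarvis can do."},
    timeout=300,  # local LLM inference can take a while
)
response.raise_for_status()
print(response.json())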

Running as CLI

cd cli
go run .

Warning

This is still a work in progress. The CLI is fully blocking and will not return until the process is complete. It is advised to wait for the result to appear before issuing another request, as too many concurrent calls can exhaust system resources and crash your machine. This is not an issue with the server.

Running the Web App

cd web
trunk serve

Visit http://localhost:8080 to view the web app.

Development

Generating code from .proto files

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. ./model.proto
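
Running this produces model_pb2.py and model_pb2_grpc.py next to model.proto. The following is a minimal sketch of consuming those generated stubs from Python; the service name, RPC method, request fields, and port are assumptions and should be replaced with whatever model.proto actually defines.

import grpc
import model_pb2
import model_pb2_grpc

# Hypothetical service and method names -- substitute the ones declared in model.proto.
with grpc.insecure_channel("localhost:50051") as channel:
    stub = model_pb2_grpc.ModelStub(channel)
    reply = stub.Generate(model_pb2.GenerateRequest(prompt="Hello"))
    print(reply)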

Running Codegen

Edit your Copilot extension's settings to include the following properties:

For Neovim, using copilot.lua:

require("copilot").setup {
  server_opts_overrides = {
    trace = "verbose",
    DebugOverrideProxyUrl = {
        advanced = "http://localhost:8000"
    },
    DebugTestOverrideProxyUrl = {
        advanced = "http://localhost:8000"
    },
    DebugOverrideEngine = {
        advanced = "codegen"
    }
  }
}

For VSCode, this goes into the settings.json file:

"github.copilot.advanced": {
    "debug.overrideEngine": "codegen",
    "debug.testOverrideProxyUrl": "http://localhost:8000",
    "debug.overrideProxyUrl": "http://localhost:8000",
}
Then start the local code-generation server:

cd model
uvicorn local-pilot-server:app --reload