Llama2 Code Assistant

This project lets an LLM generate code, execute it, receive feedback, debug, and answer questions based on the whole process. It is designed to be intuitive and versatile, capable of handling multiple languages and frameworks.
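At its core this is a generate-execute-feedback loop. A minimal sketch of that loop in Python, with a hypothetical generate() callable standing in for the Llama2 call (illustrative only, not the project's actual implementation):

import traceback

def repl_loop(generate, prompt, max_turns=3):
    """Ask the model for code, run it, and feed errors back until it works."""
    for _ in range(max_turns):
        code = generate(prompt)
        try:
            exec(code, {})  # execute the generated code block
            return code     # success: the code ran without raising
        except Exception:
            # Append the traceback so the model can debug its own output
            prompt += "\nThe code failed with:\n" + traceback.format_exc()
    return None

# Demo with a canned "model" that fixes its mistake on the second attempt
responses = iter(["print(undefined_var)", "print('hello')"])
repl_loop(lambda p: next(responses), "say hello")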

The purpose and direction of the project

🌟 Key Features

  • 🚀 Code Generation and Execution: Llama2 generates code, automatically identifies the code blocks in its output, and executes them.
  • 🧠 Persistent Python State: monitors and retains Python variables defined in previously executed code blocks, so later blocks can build on earlier results (see the sketch below).
  • 🌟 At the moment, my focus is on "Data development for GPT-4 code interpretation" and "Enhancing the model using this data". For more details, contact feliciien@gmail.com.
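The variable-retention behavior can be pictured as running every code block in a single shared namespace. A minimal sketch, assuming a plain exec-based executor (illustrative only, not the project's actual executor):

shared_ns = {}

def run_block(code):
    """Execute one generated code block in the shared namespace."""
    exec(code, shared_ns)

run_block("x = 41")
run_block("x += 1; print(x)")  # prints 42: x survived from the first block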

Examples


Llama2 in Action

(demo GIF)

In the GIF, Llama2 is seen in action: a user types the request Plot Nvidia 90 days chart. Llama2, a code interpreter fine-tuned on a curated dataset, queries Yahoo Finance, fetches Nvidia's stock prices for the past 90 days, and uses Matplotlib to generate a clear, detailed chart of Nvidia's performance over that period.
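The generated code might resemble the following sketch (assuming the yfinance and matplotlib packages; this is not the exact code from the GIF):

import matplotlib.pyplot as plt
import yfinance as yf

# Fetch roughly 90 days of daily NVDA prices from Yahoo Finance
data = yf.download("NVDA", period="90d")

# Plot the closing price over the period
plt.figure(figsize=(10, 5))
plt.plot(data.index, data["Close"])
plt.title("NVDA closing price, last 90 days")
plt.xlabel("Date")
plt.ylabel("Price (USD)")
plt.tight_layout()
plt.show()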

Installation

  1. Clone the repository:
git clone https://github.com/SeungyounShin/Llama2-Code-Interpreter.git
  2. Change into the cloned directory (note: the directory name has no .git suffix):
cd Llama2-Code-Interpreter
  3. Install the required dependencies:
pip install -r requirements.txt


Setup

Set the LLAMA_CI_PATH environment variable: the scripts require LLAMA_CI_PATH to point to the directory containing the project code. You can set it to the current directory like this:

export LLAMA_CI_PATH=$(pwd)

Note that this setting only lasts for the current shell session. To make it permanent, add the export line to your shell's startup file (such as .bashrc or .bash_profile).
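For example, assuming bash (replace the path with the location of your actual checkout):

echo 'export LLAMA_CI_PATH=/path/to/Llama2-Code-Interpreter' >> ~/.bashrc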

Run App

To start interacting with Llama2 via the Gradio UI:

python3 chatbot.py --model_path <your-model-path>

Replace <your-model-path> with the path to the model you want to use. (A chat-tuned model is recommended, e.g. meta-llama/Llama-2-13b-chat.)
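For example, using the recommended chat model:

python3 chatbot.py --model_path meta-llama/Llama-2-13b-chat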


License

Distributed under the MIT License. See LICENSE for more information.

Contact

feliciien@gmail.com

