Repository to finetune the LLaMa-2 7B (7 billion parameter) model to generate Python code from a natural-language description of the code.
Hardware requirements: 1x NVIDIA A100 40GB GPU and 32GB of RAM.
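As a rough sanity check on that requirement, the fp16 weights of a 7B-parameter model alone occupy about 13 GB, and full finetuning with Adam (fp32 master weights plus two optimizer states) needs several times more than a 40GB A100 provides, which is why parameter-efficient methods are common at this scale. This is a back-of-the-envelope sketch, not taken from the repository:

```python
# Back-of-the-envelope GPU memory estimate for a 7B-parameter model.
# Assumption (not stated in the repo): weights are stored in fp16.
PARAMS = 7_000_000_000
FP16_BYTES = 2  # bytes per parameter in half precision
FP32_BYTES = 4

weights_gb = PARAMS * FP16_BYTES / 1024**3
# Full Adam finetuning roughly adds fp32 master weights + 2 optimizer states.
full_finetune_gb = weights_gb + PARAMS * FP32_BYTES * 3 / 1024**3

print(f"fp16 weights: ~{weights_gb:.0f} GB")         # ~13 GB
print(f"full finetune: ~{full_finetune_gb:.0f} GB")  # ~91 GB, far over 40 GB
```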
To set up the project, follow the steps below:
- Clone the repository
- Install the requirements:
  ```shell
  pip install -r requirements.txt
  ```
- We use the Python code instruction dataset from here.
- Our base model is the chat version of LLaMa-2 7B by NousResearch.
- Run the following command to finetune the model:
  ```shell
  python finetune.py
  ```
- Run the following command to generate code from a natural-language description:
  ```shell
  python generate.py
  ```
- Some result snippets can be found in the `results` directory.
- Some training graphs can be found in the `training_graphs` directory.
- The report and slides for the project can be found in the repository.
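Since the base model is a LLaMa-2 chat model, prompts at finetuning and generation time typically follow LLaMa-2's `[INST]` chat template. A minimal sketch, where the helper name and system prompt are illustrative and not taken from this repository's code:

```python
def build_prompt(description: str) -> str:
    """Wrap a natural-language task description in the LLaMa-2 chat format.

    Hypothetical helper; the system prompt below is an example, not the
    one used by this repository.
    """
    system = "You are a helpful assistant that writes Python code."
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{description} [/INST]"
    )

print(build_prompt("Write a function that reverses a string."))
```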