# AI-Powered Python Code Generator

Welcome to the AI-Powered Python Code Generator repository! This project uses a Llama-3.1-8B model fine-tuned with Unsloth to generate efficient and accurate Python code for various tasks, such as machine learning algorithms. A highlight of this project is a clean implementation of the gradient descent algorithm for linear regression. 🚀

## 📖 Overview

This repository contains code to fine-tune a Llama-3.1-8B model with 4-bit quantization and LoRA adapters, trained on the Python Code Instructions 18k Alpaca dataset. The fine-tuned model can generate Python code for tasks like writing functions, implementing algorithms, and more. The project also includes memory-efficient training scripts and a sample output: a gradient descent function.

## 🎯 Features

- **Fine-Tuned Llama-3.1-8B**: Optimized with Unsloth for fast, memory-efficient training.
- **Gradient Descent Example**: A generated Python function for optimizing linear regression parameters.
- **Hugging Face Integration**: Model and tokenizer pushed to the Hugging Face Hub for easy access.
- **GPU Memory Tracking**: Monitors memory usage during training for efficiency.
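
For reference, here is a minimal sketch of how a 4-bit Llama-3.1-8B model with LoRA adapters can be loaded via Unsloth. The base-model id, sequence length, and LoRA hyperparameters below are illustrative assumptions and may differ from what `finetune_llama.py` actually uses:

```python
from unsloth import FastLanguageModel

# Load the base model with 4-bit quantization (model id is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
```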

## 🚀 Getting Started

### Prerequisites

- Python 3.8+
- CUDA-enabled GPU (for training/inference)
- Hugging Face account and token (for model access and pushing to the Hub)
- Required libraries: unsloth, torch, transformers, datasets, trl

Install the dependencies:

```bash
pip install unsloth torch transformers datasets trl
```

### Setup

**Clone the repository:**

```bash
git clone [your-repo-link]
cd [your-repo-name]
```

**Set environment variables:** create a .env file or export your Hugging Face token:

```bash
export HF_TOKEN="your_huggingface_token"
```
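
Inside the scripts, the token can then be read from the environment. A minimal sketch, assuming the variable name `HF_TOKEN` from the export above:

```python
import os

# Read the Hugging Face token set via .env or `export HF_TOKEN=...`.
hf_token = os.environ.get("HF_TOKEN")
if hf_token is None:
    raise RuntimeError("HF_TOKEN is not set; export it or add it to your .env file")
```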

**Run the fine-tuning script** to fine-tune the model:

```bash
python finetune_llama.py
```
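
For orientation, here is a minimal training sketch in the spirit of `finetune_llama.py`, continuing from the `model`, `tokenizer`, and `hf_token` defined in the sketches above. The dataset id, text column, and hyperparameters are assumptions and may differ from the actual script:

```python
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Dataset id is an assumption based on the dataset named in the overview.
dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

trainer = SFTTrainer(
    model=model,                  # LoRA-wrapped model from the loading sketch
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="prompt",  # assumed column holding the Alpaca-formatted text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()

# Track peak GPU memory, as mentioned under Features.
print(f"Peak reserved GPU memory: {torch.cuda.max_memory_reserved() / 1024**3:.2f} GB")

# Push the fine-tuned adapters and tokenizer to the Hugging Face Hub
# (the repo id below is a placeholder, not the project's actual repo).
model.push_to_hub("your-username/lora_model", token=hf_token)
tokenizer.push_to_hub("your-username/lora_model", token=hf_token)
```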

**Generate code** with the fine-tuned model, such as the gradient descent function:

```bash
python generate_code.py
```
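
A minimal generation sketch, assuming the fine-tuned checkpoint is the one listed under Resources and that prompts follow the Alpaca instruction format used by the training dataset:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model (repo id taken from the Resources section).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mervinpraison/Llama-3.1-8B-bnb-4bit-python",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

# Alpaca-style prompt; the exact template is an assumption.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that performs gradient descent "
    "for linear regression.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```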

## 📜 Example: Gradient Descent Function

Below is a sample Python function generated by the fine-tuned model for performing gradient descent in linear regression:

```python
import numpy as np


def gradient_descent(x, y, learning_rate=0.01, iterations=100):
    """
    Performs gradient descent to find optimal parameters for a linear regression model.

    Args:
        x: Input features (numpy array)
        y: Target values (numpy array)
        learning_rate: Step size for parameter updates (float)
        iterations: Number of iterations (int)

    Returns:
        m: Optimized slope
        b: Optimized intercept
    """
    m, b = 0.0, 0.0  # Initial slope and intercept
    n = len(x)       # Number of data points

    for _ in range(iterations):
        # Predicted values under the current parameters
        y_pred = m * x + b

        # Gradients of the mean squared error with respect to m and b
        dm = -(2 / n) * np.sum(x * (y - y_pred))
        db = -(2 / n) * np.sum(y - y_pred)

        # Update parameters in the direction of the negative gradient
        m -= learning_rate * dm
        b -= learning_rate * db

    return m, b
```
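
A quick usage example with synthetic data (the data and hyperparameters here are illustrative, not from the repository):

```python
import numpy as np

# Synthetic data following y = 3x + 2 with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.5, size=x.shape)

m, b = gradient_descent(x, y, learning_rate=0.01, iterations=2000)
print(f"slope ~ {m:.2f}, intercept ~ {b:.2f}")  # should approach 3 and 2
```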

Check out `gradient_descent.py` in the repository for the full code!

## 🛠️ Project Structure

- `finetune_llama.py`: Script for fine-tuning the Llama-3.1-8B model with Unsloth.
- `gradient_descent.py`: The generated gradient descent function.
- `outputs/`: Directory for training outputs.
- `lora_model/`: Saved LoRA adapters and tokenizer.

## 🤝 Contributing

Contributions are welcome! Feel free to open issues or submit pull requests for improvements, bug fixes, or new features.

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature`).
3. Commit your changes (`git commit -m "Add your feature"`).
4. Push to the branch (`git push origin feature/your-feature`).
5. Open a pull request.

## 📚 Resources

- Unsloth Documentation
- Hugging Face Transformers
- Python Code Instructions Dataset
- Model on Hugging Face: mervinpraison/Llama-3.1-8B-bnb-4bit-python

## 📧 Contact

Have questions? Reach out via GitHub Issues or connect with me on X/Twitter!

## 🌟 Acknowledgments

Thanks to:

- Unsloth for efficient fine-tuning tools.
- Hugging Face for hosting the model and dataset.
- The open-source community for inspiring this project!

Happy coding! 💻
