This project explores blog generation with Large Language Models (LLMs). It builds on the Llama 2 release, a family of pretrained and fine-tuned LLMs at several parameter scales (7B, 13B, 70B). These models improve significantly on Llama 1: they are trained on 40% more tokens, support a longer context length of 4k tokens, and use grouped-query attention for fast inference of the 70B model.
- Pretrained Models: The project uses the pretrained Llama 2 model with 7B parameters.
- Improved Performance: Llama 2 models benefit from more training data, a longer context length, and faster inference via grouped-query attention.
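The grouped-query attention mentioned above can be sketched in a few lines. This is an illustrative NumPy toy, not Llama 2's actual implementation: each group of query heads shares a single key/value head, which shrinks the KV cache (and its memory traffic) by the group factor during inference.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Toy grouped-query attention (illustrative only, not Llama 2's code).

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d).
    Each group of n_q_heads // n_kv_heads query heads shares one KV head,
    so the KV cache is n_q_heads / n_kv_heads times smaller.
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads
    # Broadcast each KV head across its group of query heads.
    k = np.repeat(k, group, axis=0)               # (n_q_heads, seq, d)
    v = np.repeat(v, group, axis=0)               # (n_q_heads, seq, d)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v                            # (n_q_heads, seq, d)
```

With `n_kv_heads == n_q_heads` this reduces to standard multi-head attention; with `n_kv_heads == 1` it becomes multi-query attention.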
If you like this, please leave a ⭐! Thank you!
- Python 3.x
- Dependencies listed in `requirements.txt`
- Clone the repository:

  ```shell
  git clone https://github.com/AshrithaB/Blog-Generation-using-LLMs.git
  cd Blog-Generation-using-LLMs
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Run the Streamlit app:

  ```shell
  streamlit run app.py
  ```

- Explore the pretrained models and use them for blog generation.
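Under the hood, apps like this typically assemble a structured instruction prompt from the user's inputs before passing it to the model. The helper below is a hypothetical sketch (the actual prompt in `app.py` may differ); the `[INST]`/`[/INST]` tags follow Llama 2's chat prompt format.

```python
def build_blog_prompt(topic: str, n_words: int, audience: str) -> str:
    """Assemble an instruction prompt for a Llama 2 chat model.

    Hypothetical helper for illustration -- the real app.py may build its
    prompt differently. The [INST] tags are Llama 2's chat-format markers.
    """
    return (
        f'[INST] Write a blog post about "{topic}" for {audience} '
        f"in roughly {n_words} words. [/INST]"
    )
```

The resulting string is what gets sent to the model, e.g. `build_blog_prompt("vector databases", 300, "data scientists")`.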
Contributions from the open-source community are welcome. If you'd like to contribute to this project, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them.
- Push your changes to your fork.
- Create a pull request, explaining the changes you've made.
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to the community for continuous support and contributions.
- Llama 2 release authors and contributors.