To install the required dependencies, run the following command:
```
!pip install -q transformers einops accelerate langchain bitsandbytes
```
This repository contains three versions of the Falcon code, each designed for a different use case. The versions use different libraries and models for text generation. Below is a brief description of each:
File: `falcon_local_withoutHuggingFacePipeline.py`
This version uses the `transformers` library with the `tiiuae/falcon-7b-instruct` model. It runs a local text-generation pipeline and produces an answer to the question asked.
File: `falcon_local_withHuggingFacePipeline.py`
The second version uses the `langchain` library with the `tiiuae/falcon-7b-instruct` model. It wraps a local text-generation pipeline in the `HuggingFacePipeline` class and generates a response to a given question.
File: `falcon_local_HuggingFaceInferenceAPI.py`
In this version, the code uses the `dotenv`, `langchain`, and `textwrap` libraries. Instead of running the model locally, it calls the `tiiuae/falcon-7b-instruct` model through the Hugging Face model hub. The code demonstrates a summarization chain for generating responses to questions.
Please refer to the individual code files for more details and instructions on running each version of the Falcon code.
This project is licensed under the Apache License 2.0.