Using a paraphrase (text2text generation) model from Hugging Face in a Python desktop app
This app uses the model tuner007/pegasus_paraphrase.
The model is approximately 2.3 GB in size.
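For reference, this is roughly how the model can be loaded through the transformers library (a minimal sketch; the device handling and variable names are assumptions, not necessarily how main.py does it):

```python
# Minimal loading sketch for tuner007/pegasus_paraphrase.
# Device selection and names are illustrative assumptions.
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL_NAME = "tuner007/pegasus_paraphrase"

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME).to(device)
```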
- PyQt5 >= 5.14
- huggingface_hub
- transformers
- git clone ~
- pip install -r requirements.txt
- python main.py
- see "preview" below
- beam refers to the beam search algorithm used in sequence-generation tasks such as machine translation or text generation. Beam search is a heuristic search that explores multiple candidate token sequences during generation and keeps only a fixed number of the most promising ones, called the "beam width". In layman's terms, it helps the model find the most suitable sequence of words.
- return sequences is the number of paraphrased sentences the model will generate and return.
- context is the input text you want to paraphrase. (How these three settings map onto the model is sketched just after this list.)
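A sketch of how these parameters might be passed to the model's generate() call, reusing tokenizer, model, and device from the loading sketch above (the function name and default values are illustrative, not the app's exact code):

```python
# Illustrative paraphrase helper; assumes tokenizer, model, and device
# from the loading sketch above are already defined.
def paraphrase(context: str, num_beams: int = 10, num_return_sequences: int = 5):
    # Note: num_return_sequences must not exceed num_beams.
    batch = tokenizer(
        [context],
        truncation=True,
        padding="longest",
        max_length=60,
        return_tensors="pt",
    ).to(device)
    outputs = model.generate(
        **batch,
        max_length=60,
        num_beams=num_beams,
        num_return_sequences=num_return_sequences,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Example: paraphrase("The quick brown fox jumps over the lazy dog.")
```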
After you click "Submit", the model is downloaded to your PC (on the first run), the app paraphrases your input with it, and the result is shown in the text browser below.
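For illustration, here is a simplified sketch of how that Submit flow could be wired up in PyQt5, calling the paraphrase() helper sketched above (the widget names, layout, and value ranges are assumptions, not the actual main.py):

```python
# Simplified PyQt5 sketch of the Submit flow; not the app's actual code.
import sys
from PyQt5.QtWidgets import (
    QApplication, QWidget, QVBoxLayout, QLabel, QTextEdit,
    QSpinBox, QPushButton, QTextBrowser,
)

class ParaphraseWindow(QWidget):
    def __init__(self):
        super().__init__()
        layout = QVBoxLayout(self)
        self.context_edit = QTextEdit()       # text to paraphrase
        self.beam_box = QSpinBox()            # beam width
        self.beam_box.setRange(1, 50)
        self.beam_box.setValue(10)
        self.seq_box = QSpinBox()             # number of return sequences
        self.seq_box.setRange(1, 10)
        self.seq_box.setValue(5)
        self.submit_btn = QPushButton("Submit")
        self.result_browser = QTextBrowser()  # shows the paraphrases
        for w in (QLabel("context"), self.context_edit,
                  QLabel("beam"), self.beam_box,
                  QLabel("return sequences"), self.seq_box,
                  self.submit_btn, self.result_browser):
            layout.addWidget(w)
        self.submit_btn.clicked.connect(self.on_submit)

    def on_submit(self):
        # The first call triggers the ~2.3 GB model download via transformers' cache.
        results = paraphrase(
            self.context_edit.toPlainText(),
            num_beams=self.beam_box.value(),
            num_return_sequences=self.seq_box.value(),
        )
        self.result_browser.setPlainText("\n".join(results))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = ParaphraseWindow()
    win.show()
    sys.exit(app.exec_())
```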