1. Quick start guide
pip install panml
# Import panml
from panml.models import ModelPack
# Import any other modules/packages as required
import numpy as np
import pandas as pd
...
Create a model pack to load a model from the HuggingFace Hub. See the model options in the library's supported models.
lm = ModelPack(model='google/flan-t5-base', source='huggingface')
To run model processing on a GPU:
lm = ModelPack(model='google/flan-t5-base', source='huggingface', model_args={'gpu': True})
Note: the model_args {key: value} input can additionally take any HuggingFace AutoModel argument that is compatible with the HuggingFace AutoModel... from_pretrained classmethod, where key is the parameter name and value is the parameter value to assign.
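As an illustrative sketch of that pass-through behaviour (low_cpu_mem_usage is a standard from_pretrained argument in HuggingFace Transformers, not part of panml itself):

```python
# Hypothetical model_args: 'gpu' is panml's own flag, while any other
# key/value pair is forwarded to the from_pretrained classmethod.
model_args = {'gpu': True, 'low_cpu_mem_usage': True}

# It would then be passed in as, e.g.:
# lm = ModelPack(model='google/flan-t5-base', source='huggingface', model_args=model_args)
```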
Generate output
output = lm.predict('What is the best way to live a healthy lifestyle?')
print(output['text'])
'Eat a balanced diet. '
Selecting other models from HuggingFace Hub:
# Examples
lm = ModelPack(model='gpt2', source='huggingface')
lm = ModelPack(model='gpt2-xl', source='huggingface')
lm = ModelPack(model='EleutherAI/gpt-j-6B', source='huggingface')
lm = ModelPack(model='StabilityAI/stablelm-tuned-alpha-7b', source='huggingface')
lm = ModelPack(model='tiiuae/falcon-7b-instruct', source='huggingface')
...
Create a model pack from an OpenAI model description and API key. See the model options in the library's supported models.
Note: the API key is associated with your OpenAI account, which is relatively simple to set up if you have not done so already. See the OpenAI documentation.
lm = ModelPack(model='text-davinci-002', source='openai', api_key=<your_openai_key>)
Generate output
output = lm.predict('What is the best way to live a healthy lifestyle?')
print(output['text'])
'The best way to live a healthy lifestyle is to eat healthy foods, get regular exercise,
and get enough sleep.'
df = pd.DataFrame({'input_prompts': [
'The goal of life is',
'The goal of work is',
'The goal of leisure is',
]})
df['output'] = lm.predict(df['input_prompts'])
print(df['output'].tolist())
[' to live a life of purpose, joy, and fulfillment. To find meaning and purpose in life, it is important to focus on what brings you joy and fulfillment, and to strive to make a positive impact on the world. It is also important to take care of yourself and your relationships, and to be mindful of the choices you make. ',
' The goal of this work is to develop a comprehensive understanding of a particular topic or issue, and to use that understanding to create solutions or strategies that can be implemented to address the issue. ',
' to provide an enjoyable and fulfilling experience that helps to reduce stress, improve physical and mental health, and promote social interaction. Leisure activities can include anything from physical activities such as sports and outdoor recreation, to creative activities such as art and music, to social activities such as attending events or visiting friends. ']
We can use various prompt engineering methods to further steer and control an LLM. Many of us are already familiar with ChatGPT, where we ask or interact with the LLM using text. Instead of doing this manually over a sequence of interactions, we can arrange useful prompts into a pre-defined sequential format and pass them in to refine or apply a level of control over the LLM's outputs.
See examples of Prompt Chain Engineering
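The chaining idea can be sketched in plain Python, independent of panml. The function and variable names below are hypothetical, purely for illustration; a stub model stands in for a real lm.predict call:

```python
# Minimal sketch of prompt chaining: each templated prompt receives the
# previous step's output, so later prompts refine earlier results.
def run_chain(model, templates, query):
    text = query
    for template in templates:
        text = model(template.format(text))
    return text

# Stub "model" that just upper-cases its prompt, standing in for an LLM call
echo_model = lambda prompt: prompt.upper()

steps = [
    'Summarise: {}',
    'List key points of: {}',
]
result = run_chain(echo_model, steps, 'healthy living')
```

With a real model, each template would instead wrap the previous LLM output in a new instruction.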
We can use techniques such as document retrieval augmentation to deploy question-and-answering workflows over documents. Typically, this requires reading in the documents, processing them into the required data format, and performing a similarity-based search across your document corpus. The similarity measure is calculated by comparing vector representations of the query and the document corpus.
See examples of Document Search and Retrieval
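The similarity comparison at the heart of retrieval can be sketched with cosine similarity over toy vectors (the embeddings below are made up for illustration; in practice they would come from an embedding model):

```python
import numpy as np

# Toy embeddings standing in for real document vectors
docs = {
    'doc_a': np.array([0.9, 0.1, 0.0]),
    'doc_b': np.array([0.1, 0.8, 0.1]),
}
query = np.array([1.0, 0.0, 0.0])  # toy query embedding

def cosine(a, b):
    # Cosine similarity: dot product normalised by vector magnitudes
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieve the document most similar to the query
best = max(docs, key=lambda name: cosine(query, docs[name]))
```

The retrieved document(s) would then be passed to the LLM as context alongside the original question.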
Many open-source foundation LLMs can be fine-tuned on custom datasets. This process typically involves preparing the training dataset, setting appropriate training hyperparameters, and then executing training across experimental runs to monitor progress.
See examples of Fine Tuning of LLM
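The dataset-preparation step can be sketched with pandas (already imported above). The prompt/response pairs below are placeholders, and holding out an evaluation split is one common way to monitor each training run:

```python
import pandas as pd

# Toy prompt/response pairs standing in for a real fine-tuning dataset
data = pd.DataFrame({
    'prompt': ['What is X?', 'Define Y.', 'Explain Z.', 'Describe W.'],
    'response': ['X is ...', 'Y means ...', 'Z works by ...', 'W refers to ...'],
})

# Hold out a fraction for evaluation so training progress can be monitored
eval_df = data.sample(frac=0.25, random_state=0)
train_df = data.drop(eval_df.index)
```

The train split would feed the fine-tuning run, while the evaluation split tracks loss or output quality between experiments.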