- Token: ~4k-token context window for in-context learning
- 1 token ≈ 0.5 Chinese characters (on average)
- 1 token ≈ 0.75 English words
- GPT-4: 8k / 32k context windows
- warning: once the token limit is exceeded, the earliest context is dropped
- https://platform.openai.com/tokenizer
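The rules of thumb above can be sketched as a rough token estimator (the 0.75 words-per-token ratio is the heuristic from these notes, not an exact tokenizer; use the tokenizer page or a real tokenizer library for precise counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~0.75 English words per token."""
    words = len(text.split())
    return round(words / 0.75)

# A 4k-context model leaves room for roughly 3,000 English words
# of prompt plus reply combined.
print(estimate_tokens("Say this is a test"))  # 5 words -> ~7 tokens
```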
- Model
- Natural language (NLP)
- GPT4
- GPT3.5
- Speech recognition
- Whisper
- Image generation
- DALL-E
- Text to numeric vectors
- Embeddings
- Code generation
- Codex
- Endpoints and supported models:

| Endpoint | Models |
| --- | --- |
| /v1/chat/completions | gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301 |
| /v1/completions | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada |
| /v1/edits | text-davinci-edit-001, code-davinci-edit-001 |
| /v1/audio/transcriptions | whisper-1 |
| /v1/audio/translations | whisper-1 |
| /v1/fine-tunes | davinci, curie, babbage, ada |
| /v1/embeddings | text-embedding-ada-002, text-search-ada-doc-001 |
| /v1/moderations | text-moderation-stable, text-moderation-latest |
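The table above can be kept in code as a simple lookup, e.g. to validate a model name before calling an endpoint (mapping transcribed from the table, abbreviated; model lists change over time):

```python
# Endpoint -> supported models, copied from the notes above (partial).
ENDPOINT_MODELS = {
    "/v1/chat/completions": ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo"],
    "/v1/completions": ["text-davinci-003", "text-davinci-002"],
    "/v1/embeddings": ["text-embedding-ada-002"],
    "/v1/audio/transcriptions": ["whisper-1"],
}

def supports(endpoint: str, model: str) -> bool:
    """Check whether an endpoint accepts the given model."""
    return model in ENDPOINT_MODELS.get(endpoint, [])

print(supports("/v1/chat/completions", "gpt-3.5-turbo"))  # True
```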
- API examples (insert your own API key into the code):
- Get your api key
- API usage
- API pricing
Completions:
{
"model": "text-davinci-003",
"prompt": "Say this is a test",
"max_tokens": 7, // cap on tokens in the returned completion
"temperature": 0, // higher -> more random output [range 0-2]
"stream": false // stream partial results via server-sent events
}
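The request body above can be sent with any HTTP client; a minimal stdlib sketch (assumes an `OPENAI_API_KEY` environment variable; the call itself needs network access and a valid key, so it is not run here):

```python
import json
import os
import urllib.request

# The body from the notes above, as a Python dict.
payload = {
    "model": "text-davinci-003",
    "prompt": "Say this is a test",
    "max_tokens": 7,    # cap on the returned completion
    "temperature": 0,   # 0 -> deterministic output
    "stream": False,
}

def complete(payload: dict) -> str:
    """POST to /v1/completions and return the first choice's text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```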
Chat:
{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Who won the NBA title in 2023?"
},
{
"role": "assistant",
"content": "The Denver Nuggets won the NBA title in 2023."
},
{
"role": "user",
"content": "Where was it played?"
}
], // prior turns in the conversation (the API is stateless, so history is re-sent each call)
"max_tokens": 7, // cap on tokens in the returned completion
"temperature": 0, // higher -> more random output [range 0-2]
"stream": false // stream partial results via server-sent events
}
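Because the chat endpoint is stateless, the client must re-send the whole `messages` list on every turn; a minimal sketch of maintaining that history (the helper names are my own):

```python
def make_chat(system_prompt: str) -> list:
    """Start a conversation with a system message."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(messages: list, role: str, content: str) -> list:
    """Append one turn; non-system roles are 'user' or 'assistant'."""
    assert role in ("user", "assistant")
    messages.append({"role": role, "content": content})
    return messages

chat = make_chat("You are a helpful assistant.")
add_turn(chat, "user", "Who won the NBA title in 2023?")
add_turn(chat, "assistant", "The Denver Nuggets won the NBA title in 2023.")
add_turn(chat, "user", "Where was it played?")  # resolvable only via history
print(len(chat))  # 4 messages go out with the next request
```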
- Prompts:
- RIO (Role + Input + Output)
- Role: Assistant / Teacher / Mentor
- Input
- Context
- Task
- Output
- Format
- Style
- Quantity
- Way of Thinking
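The RIO structure above can be turned into a small prompt template (the function and field names are my own labels for these notes, not a standard API):

```python
def rio_prompt(role: str, context: str, task: str, output_spec: str) -> str:
    """Compose a prompt from Role + Input (context, task) + Output spec."""
    return (
        f"You are a {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output: {output_spec}"
    )

prompt = rio_prompt(
    "mentor",
    "a beginner learning JavaScript",
    "explain Array.prototype.slice",
    "three bullet points with one code sample",
)
```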
- Common Prompts:
- Zero-shot:
- e.g., how to use javascript slice
- Few-shot:
- e.g., provide a few DSL examples (Storybook) as a pattern, then ask for a new webpage
- Fine-tune:
- e.g., continue training on 10-100GB of domain data to adjust the base model's weights
- Chain of thought:
- e.g., solve the problem, step by step
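Zero-shot, few-shot, and chain-of-thought differ only in what the prompt prepends or appends to the question; a sketch (the "step by step" suffix is one common phrasing, not the only one):

```python
def zero_shot(question: str) -> str:
    """No examples: just ask."""
    return question

def few_shot(examples: list, question: str) -> str:
    """Prefix worked Q/A pairs so the model imitates the pattern."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    """Nudge the model to reason step by step before answering."""
    return f"{question}\nLet's think step by step."
```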
- Cheatsheet: wrap the input paragraph in `***` delimiters so the instruction and the content stay separate
| blah blah blah: |
| *** |
| paragraph |
| *** |
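The delimiter pattern above can be generated mechanically (the function name and the sample instruction are illustrative, not from the notes):

```python
def delimited_prompt(instruction: str, paragraph: str) -> str:
    """Wrap the input in *** so the model can't confuse it with the instruction."""
    return f"{instruction}\n***\n{paragraph}\n***"

# Hypothetical usage with a placeholder paragraph:
print(delimited_prompt("Summarize the paragraph below:", "<paragraph>"))
```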