Update package settings in pyproject.toml and LICENSE #1

Merged (5 commits) on Jul 25, 2023
28 changes: 28 additions & 0 deletions .github/pull_request_template.md
@@ -0,0 +1,28 @@
# PR Description

Fixes # (issue number)

## Summary

Provide a concise and clear summary of the changes and their relation to the reported issue.

## Motivation and Context

Explain the reasoning behind the changes and provide relevant context for a better understanding of the modifications.

## Dependencies

Enumerate any new dependencies necessary for implementing these changes.

## Type of Change

Specify the appropriate type of change and remove irrelevant options.

- Bug fix
- New feature
- Breaking change
- Documentation update

## Testing Procedure

Please describe the testing process undertaken to validate these changes.
2 changes: 0 additions & 2 deletions LICENSE
@@ -1,5 +1,3 @@
Copyright (c) 2022-present, FriendliAI Inc. All rights reserved.

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
2 changes: 1 addition & 1 deletion docs/docs/cli/intro.mdx
@@ -2,7 +2,7 @@
sidebar_position: 1
---

# 🤗 Welcome to PeriFlow
# 🚀 Welcome to PeriFlow

PeriFlow is the fastest engine for serving generative AI models such as GPT-3.
With PeriFlow, a company can significantly reduce the cost and environmental impact of running its generative AI models.
151 changes: 151 additions & 0 deletions docs/docs/tutorials/intro.mdx
@@ -0,0 +1,151 @@
---
sidebar_position: 1
---

# How to Use Your Checkpoint?

## Introduction

PeriFlow allows you to easily upload your own AI model checkpoint from your local machine to the desired PeriFlow project.
This guide will walk you through the steps to upload the checkpoint using the PeriFlow Client.

## Prerequisites

Before you begin, make sure you have the PeriFlow Client installed.
You can install it with the following command, which also includes the optional machine learning (`mllib`) dependencies used for checkpoint conversion:

```sh
pip install periflow-client[mllib]
```

:::info
### Converting Hugging Face Checkpoint (Optional)

If you have a Hugging Face checkpoint and want to convert it to a PeriFlow-compatible format, you need to install the package with the following command:

```sh
pip install periflow-client[mllib]
```

If you don't need to convert the checkpoint, you can install the package without the `mllib` dependencies:

```sh
pip install periflow-client
```

Note that the PeriFlow Client package requires Python version >= 3.8.
:::
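
If your shell treats square brackets specially (zsh, for example), quote the package spec, e.g. `pip install "periflow-client[mllib]"`. As a quick sanity check after installing, you can print the CLI help text; `pf` is the console script the package registers, and the snippet below is a minimal sketch assuming the install completed cleanly:

```sh
# Confirm that the `pf` console script is on PATH and that the CLI starts.
pf --help
```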

## Step 1. Signing in with PeriFlow

To use the PeriFlow CLI and upload your AI model checkpoint, you need to sign in with your PeriFlow account. Follow these steps:

1. Open a terminal shell.
2. Enter the following command:

```sh
pf login
```

3. When prompted, enter your PeriFlow account credentials (ID and password). After a successful login, you will see output like the following:

```txt
Enter Username: <YOUR_ID>
Enter Password:


Login success!
Welcome back to...
_____ _ _____ _
| __ \___ _ __(_)| ___| | _____ __
| ___/ _ \ '__| || |__ | |/ _ \ \ /\ / /
| | | __/ | | || __|| | (_) | V V /
|_| \___|_| |_||_| |_|\___/ \_/\_/

```

## Step 2. Switching to Your Project

Now that you are logged in, you can switch to the PeriFlow project where you want to upload the checkpoint. Follow these steps:

1. Open a terminal shell (if not already open).
2. Use the following command to switch to your desired project, replacing `$YOUR_PROJECT_NAME` with the actual project name:

```sh
pf project switch $YOUR_PROJECT_NAME
```

:::info
If you are not sure about the available projects, you can list them using the following command:

```sh
pf project list
```
:::
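
For example, with a hypothetical project named `my-llm-project`, listing the available projects and switching to the one you want looks like this:

```sh
# See which projects you can access, then switch to the target project.
pf project list
pf project switch my-llm-project
```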

## Step 3. Converting Hugging Face Checkpoint (Optional)

If you want to serve your AI model with PeriFlow, you'll need to convert it to the PeriFlow-compatible format.
Here's how you can easily convert a Hugging Face model checkpoint to the PeriFlow format:

1. Open a terminal shell (if not already open).
2. Use the following command to convert the Hugging Face model checkpoint to the PeriFlow-compatible format. Replace `$MODEL_NAME_OR_PATH`, `$OUTPUT_DIR`, and `$DATA_TYPE` with the actual values you want.

```sh
pf checkpoint convert \
--model-name-or-path $MODEL_NAME_OR_PATH \
--output-dir $OUTPUT_DIR \
--data-type $DATA_TYPE
```

:::info
You have two options for `$MODEL_NAME_OR_PATH`:

- Enter the local checkpoint path.
- Use the Hugging Face model name (e.g., `gpt2`, `EleutherAI/gpt-j-6b`).

For the `$DATA_TYPE` parameter, you can choose from three options:

- `fp16`: 16-bit floating-point format.
- `fp32`: 32-bit floating-point format.
- `bf16`: bfloat16 format.
:::

3. After executing the above command, the following files will be created in `$OUTPUT_DIR`:
   - `model.h5`: The converted model checkpoint file in HDF5 format.
   - `tokenizer.json`: The tokenizer file used for tokenizing and detokenizing model inputs and outputs.
   - `attr.yaml`: The checkpoint attribute file containing the model and generation configurations.

:::caution
PeriFlow CLI will try to configure `tokenizer.json` and `attr.yaml` automatically, but the automatic configuration might fail in the following cases:

- If the model does not support the [**Fast tokenizer**](https://huggingface.co/learn/nlp-course/chapter6/3), which is the tokenizer format PeriFlow works with, `tokenizer.json` will not be created.
- If the model config or generation config published to Hugging Face does not contain the required information, the value will be left blank (marked with "FILL ME") in `attr.yaml`.
- If there is a conflict between the model config and the generation config published to Hugging Face, the value will be left blank (marked with "FILL ME") in `attr.yaml`.
:::

After completing this step, your Hugging Face model checkpoint will be converted to the PeriFlow-compatible format, and you'll be able to use it with PeriFlow seamlessly.
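
As a concrete illustration, the sketch below converts the public `EleutherAI/gpt-j-6b` checkpoint to 16-bit floating point; the output directory is just an example path:

```sh
# Convert a Hugging Face checkpoint to the PeriFlow-compatible format (fp16).
pf checkpoint convert \
  --model-name-or-path EleutherAI/gpt-j-6b \
  --output-dir ./gpt-j-6b-periflow \
  --data-type fp16
```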

## Step 4. Uploading Checkpoint

Once your checkpoint is prepared, you can upload it to PeriFlow using the following steps:

1. Open a terminal shell (if not already open).
2. Use the following command to upload the model checkpoint file (`model.h5`) and the tokenizer file (`tokenizer.json`):

```sh
pf checkpoint upload \
--name $CHECKPOINT_NAME \
--source-path $SOURCE_PATH \
--attr-file $ATTR_FILE_PATH
```

:::info
Replace the placeholders with the appropriate values:

- `$CHECKPOINT_NAME`: Enter the desired name for your checkpoint.
- `$SOURCE_PATH`: Provide the path to the directory containing `model.h5` and `tokenizer.json`.
- `$ATTR_FILE_PATH`: Specify the path to `attr.yaml`, which contains the model configurations and generation configurations.
:::

After following these steps, your model checkpoint will be successfully uploaded to your PeriFlow project, ready to be used for AI model serving.
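
Putting the steps together, a minimal end-to-end session might look like the sketch below; the project name, checkpoint name, and paths are illustrative placeholders:

```sh
# 1. Authenticate and select the target project.
pf login
pf project switch my-llm-project

# 2. Convert a Hugging Face checkpoint to the PeriFlow format.
pf checkpoint convert \
  --model-name-or-path EleutherAI/gpt-j-6b \
  --output-dir ./gpt-j-6b-periflow \
  --data-type fp16

# 3. Upload the converted checkpoint along with its attribute file.
pf checkpoint upload \
  --name gpt-j-6b-fp16 \
  --source-path ./gpt-j-6b-periflow \
  --attr-file ./gpt-j-6b-periflow/attr.yaml
```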
21 changes: 21 additions & 0 deletions docs/docusaurus.config.js
@@ -41,6 +41,9 @@ const SECTIONS = [
defineSection('sdk', {
label: latestVersions['sdk'],
}),
defineSection('tutorials', {
label: latestVersions['tutorials'],
}),
];

/** @type {import('@docusaurus/types').Config} */
@@ -116,11 +119,21 @@ const config = {
to: 'sdk/intro',
position: 'left',
},
{
label: 'Tutorials',
to: 'tutorials/intro',
position: 'left',
},
{
href: 'https://github.com/friendliai/periflow-client/',
label: 'GitHub',
position: 'right',
},
{
label: 'Quick Start',
href: 'https://periflow.ai/quick-start/',
position: 'left',
},
],
},
footer: {
@@ -137,6 +150,14 @@
label: 'Python SDK',
to: 'sdk/intro',
},
{
label: 'Tutorials',
to: 'tutorials/intro',
},
{
label: 'Quick Start',
href: 'https://periflow.ai/quick-start/'
}
],
},
{
2 changes: 1 addition & 1 deletion periflow/cli/main.py
@@ -27,7 +27,7 @@
from periflow.utils.version import get_installed_version

app = typer.Typer(
help="Welcome to PeriFlow 🤗",
help="Supercharge Generative AI Serving 🚀",
no_args_is_help=True,
context_settings={"help_option_names": ["-h", "--help"]},
add_completion=False,
14 changes: 11 additions & 3 deletions pyproject.toml
@@ -1,9 +1,17 @@
[tool.poetry]
name = "periflow-client"
version = "0.1.0"
description = "PeriFlow is a reliable, speedy, and efficient service for training and serving your own Generative AI models on any data of your choice."
version = "0.1.1"
description = "Client of PeriFlow, the fastest generative AI serving available."
license = "Apache-2.0"
authors = ["PeriFlow teams <eng@friendli.ai>"]
packages = [{include = "periflow"}]
packages = [
{ include = "periflow" },
]
readme = "README.md"
homepage = "https://docs.periflow.ai/"
repository = "https://github.com/friendliai/periflow-client"
documentation = "https://docs.periflow.ai/"
keywords = ["generative-ai", "serving", "llm"]

[tool.poetry.scripts]
pf = "periflow.cli:app"