Merge pull request #24 from microsoft/cassie-docs
update nav pages to include prompty spec, what is prompty and tutorials
Showing 9 changed files with 273 additions and 2 deletions.
```diff
@@ -1,4 +1,4 @@
 ---
 title: Contributing
-index: 1
+index: 4
 ---
```
@@ -0,0 +1,226 @@
---
title: Prompty File Spec
authors:
- sethjuarez
- wayliums
- cassiebreviu
date: 2024-06-10
tags:
- prompty-file-spec
- documentation
index: 3
---

The Prompty YAML file spec can be found [here](https://github.com/microsoft/prompty/blob/main/Prompty.yaml). Below you can find a brief description of each section and the attributes within it.
### Prompty description attributes:
```yaml
name:
  type: string
  description: Name of the Prompty
description:
  type: string
  description: Description of the Prompty
version:
  type: string
  description: Version of the Prompty
authors:
  type: array
  description: Authors of the Prompty
  items:
    type: string
tags:
  type: array
  description: Tags of the Prompty
  items:
    type: string
```
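For illustration, a hypothetical Prompty front matter using these description attributes might look like the following sketch (all names and values are invented for the example):

```yaml
name: customer-support
description: A prompt that answers customer support questions
version: 1.0.0
authors:
- jane-doe
tags:
- support
- example
```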
### Sample, inputs, outputs and template attributes:
```yaml
sample:
  oneOf:
    - type: object
      description: The sample to be used in the Prompty test execution
      additionalProperties: true
    - type: string
      description: The file to be loaded to be used in the Prompty test execution

inputs:
  type: object
  description: The inputs of the Prompty

outputs:
  type: object
  description: The outputs of the Prompty

template:
  type: string
  description: The template engine to be used can be specified here. This is optional.
  enum: [jinja2]
  default: jinja2
```
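In a concrete Prompty file, the sample, inputs, and outputs sections could then look like this sketch (field names such as `firstName` and `question` are invented for illustration):

```yaml
sample:
  firstName: Jane
  question: What are your opening hours?
inputs:
  firstName:
    type: string
  question:
    type: string
outputs:
  answer:
    type: string
template: jinja2
```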
### Model attributes

```yaml
model:
  type: string
  enum:
    - chat
    - completion
  description: The API to use for the Prompty -- this has implications on how the template is processed and how the model is called.
  default: chat

configuration:
  oneOf:
    - $ref: "#/definitions/azureOpenaiModel"
    - $ref: "#/definitions/openaiModel"
    - $ref: "#/definitions/maasModel"

parameters:
  $ref: "#/definitions/parameters"

response:
  type: string
  description: This determines whether the full (raw) response or just the first response in the choice array is returned.
  default: first
  enum:
    - first
    - full
```
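One way these attributes might be combined in a Prompty file is sketched below, assuming the configuration and response attributes sit alongside the model API selection as the schema lists them (the deployment and endpoint values are placeholders, not real resources):

```yaml
model: chat
configuration:
  type: azure_openai
  azure_deployment: my-gpt-deployment
  azure_endpoint: https://my-resource.openai.azure.com
  api_version: 2023-12-01-preview
response: first
```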
### Parameters for the model attribute:
```yaml
parameters:
  type: object
  description: Parameters to be sent to the model
  additionalProperties: true
  properties:
    response_format:
      type: object
      description: >
        An object specifying the format that the model must output. Compatible with
        `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`. Setting to
        `{ "type": "json_object" }` enables JSON mode, which guarantees the
        message the model generates is valid JSON.
    seed:
      type: integer
      description: >
        This feature is in Beta. If specified, our system will make a best effort to
        sample deterministically, such that repeated requests with the same `seed` and
        parameters should return the same result. Determinism is not guaranteed, and you
        should refer to the `system_fingerprint` response parameter to monitor changes
        in the backend.
    max_tokens:
      type: integer
      description: The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.
    temperature:
      type: number
      description: What sampling temperature to use. 0 means deterministic.
    tools_choice:
      oneOf:
        - type: string
        - type: object
      description: >
        Controls which (if any) function is called by the model. `none` means the model
        will not call a function and instead generates a message. `auto` means the model
        can pick between generating a message or calling a function. Specifying a
        particular function via
        `{"type": "function", "function": {"name": "my_function"}}` forces the model to
        call that function. `none` is the default when no functions are present.
        `auto` is the default if functions are present.
    tools:
      type: array
      items:
        type: object
    frequency_penalty:
      type: number
      description: What sampling frequency penalty to use. 0 means no penalty.
    presence_penalty:
      type: number
      description: What sampling presence penalty to use. 0 means no penalty.
    stop:
      type: array
      items:
        type: string
      description: >
        One or more sequences where the model should stop generating tokens. The model
        will stop generating tokens if it generates one of the sequences. If the model
        generates a sequence that is a prefix of one of the sequences, it will continue
        generating tokens.
    top_p:
      type: number
      description: >
        What nucleus sampling probability to use. 1 means no nucleus sampling. 0 means
        no tokens are generated.
```
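As a sketch, a parameters block in a Prompty file might set a few of these (the values are chosen arbitrarily for illustration):

```yaml
parameters:
  max_tokens: 800
  temperature: 0
  top_p: 1
  response_format:
    type: json_object
  stop:
  - "###"
```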
### Definitions of OpenAI models

```yaml
openaiModel:
  type: object
  description: Model used to generate text
  properties:
    type:
      type: string
      description: Type of the model
      const: openai
    name:
      type: string
      description: Name of the model
    organization:
      type: string
      description: Name of the organization
  additionalProperties: false
```
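A configuration matching the `openaiModel` definition could look like this hypothetical fragment (the model name and organization are placeholders):

```yaml
configuration:
  type: openai
  name: gpt-4
  organization: my-org
```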
### Definition of Azure OpenAI models

```yaml
azureOpenaiModel:
  type: object
  description: Model used to generate text
  properties:
    type:
      type: string
      description: Type of the model
      const: azure_openai
    api_version:
      type: string
      description: Version of the model
    azure_deployment:
      type: string
      description: Deployment of the model
    azure_endpoint:
      type: string
      description: Endpoint of the model
  additionalProperties: false
```
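Similarly, a hypothetical `azure_openai` configuration conforming to this definition might be (all values are placeholders for a real deployment):

```yaml
configuration:
  type: azure_openai
  api_version: 2023-12-01-preview
  azure_deployment: my-deployment
  azure_endpoint: https://my-resource.openai.azure.com
```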
### Definition of MaaS models

```yaml
maasModel:
  type: object
  description: Model used to generate text
  properties:
    type:
      type: string
      description: Type of the model
      const: azure_serverless
    azure_endpoint:
      type: string
      description: Endpoint of the model
  additionalProperties: false
```
@@ -0,0 +1,14 @@
---
title: Tutorials
authors:
- sethjuarez
- wayliums
- cassiebreviu
date: 2024-06-10
tags:
- tutorials
- documentation
index: 2
---

TODO
@@ -0,0 +1,25 @@
---
title: What is Prompty?
authors:
- sethjuarez
- wayliums
- cassiebreviu
date: 2024-06-10
tags:
- what-is-prompty
- documentation
index: 0
---

Prompty is a file format that has a supporting toolchain with VS Code and runtimes to simplify and accelerate your LLM application development.

## The Prompty File Format
Prompty is a language agnostic prompt asset for creating prompts and engineering the responses. Learn more about the format [here](../prompty-file-spec/page.mdx).

## The Prompty VS Code Extension
Run Prompty files directly in VS Code. Download the [VS Code extension here](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty).

## The Prompty Runtimes
To execute a Prompty file asset in code, you can use one of the supporting runtimes such as [Prompt flow (python)](https://microsoft.github.io/promptflow/), [LangChain (python)](https://python.langchain.com/v0.1/docs/get_started/introduction), or [Semantic Kernel (csharp)](https://learn.microsoft.com/semantic-kernel/). You can right-click on any Prompty file in VS Code and create the basic code implementation for each runtime.