Add initial code examples #3

Merged (3 commits) on Jul 20, 2024
12 changes: 11 additions & 1 deletion README.md
@@ -1 +1,11 @@
# codespaces-models
# GitHub Models

Welcome to your shiny new Codespace for interacting with GitHub Models! We've got everything fired up and running for you to explore.

You've got a blank canvas to work on from a git perspective as well. There's a single initial commit with what you're seeing right now - where you go from here is up to you!

Everything you do here is contained within this one codespace. There is no repository on GitHub yet. If and when you're ready, you can click "Publish Branch" and we'll create your repository and push up your project. If you were just exploring and have no further need for this code, you can simply delete your codespace and it's gone forever.

## Getting Started

There are a few basic examples that are ready for you to run. You can find them in the [samples directory](/samples/README.md). Each sample is a standalone script that demonstrates how to interact with GitHub Models.
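
To give a quick sense of what a sample does, here is a minimal sketch of a chat completion call in Python. It mirrors samples/python/basic.py from this repository; the endpoint, model name, and the GITHUB_TOKEN environment variable (available in this Codespace) are all taken from those samples.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Authenticate against the GitHub Models endpoint with the Codespace-provided token.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

# Ask gpt-4o a single question and print the first reply.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?"),
    ],
    model="gpt-4o",
)
print(response.choices[0].message.content)
```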
29 changes: 29 additions & 0 deletions samples/README.md
@@ -0,0 +1,29 @@
# GitHub Models Samples

This folder contains samples for interacting with GitHub Models. Each subfolder contains examples for a specific language: JavaScript, Python, and cURL.

## Running a sample

You can run any sample directly from your terminal. For example, to run a JavaScript sample:

```bash
# samples/js/multi_turn.js

$ node samples/js/multi_turn.js
```

To run a Python sample:

```bash
# samples/python/multi_turn.py

$ python samples/python/multi_turn.py
```

To run a cURL sample:

```bash
# samples/curl/multi_turn.sh

$ ./samples/curl/multi_turn.sh
```
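
All of the samples authenticate with a GitHub token read from the GITHUB_TOKEN environment variable. Inside this Codespace the variable should already be set; if you run the samples elsewhere, export a token with access to GitHub Models before running them.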
16 changes: 16 additions & 0 deletions samples/curl/basic.sh
@@ -0,0 +1,16 @@
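#!/bin/sh
# Basic chat completion: send a single user question to gpt-4o, authenticating with $GITHUB_TOKEN.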
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GITHUB_TOKEN" \
-d '{
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "What is the capital of France?"
}
],
"model": "gpt-4o"
}'
24 changes: 24 additions & 0 deletions samples/curl/multi_turn.sh
@@ -0,0 +1,24 @@
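#!/bin/sh
# Multi-turn chat: the previous assistant reply is sent back as part of the message history.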
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GITHUB_TOKEN" \
-d '{
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "What is the capital of France?"
},
{
"role": "assistant",
"content": "The capital of France is Paris."
},
{
"role": "user",
"content": "What about Spain?"
}
],
"model": "gpt-4o"
}'
17 changes: 17 additions & 0 deletions samples/curl/streaming.sh
@@ -0,0 +1,17 @@
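#!/bin/sh
# Streaming chat completion: with "stream": true the response arrives as server-sent events.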
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GITHUB_TOKEN" \
-d '{
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Give me 5 good reasons why I should exercise every day."
}
],
"stream": true,
"model": "gpt-4o"
}'
30 changes: 30 additions & 0 deletions samples/js/basic.js
@@ -0,0 +1,30 @@
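// Basic chat completion: send a single user question to gpt-4o and print the reply.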
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
const modelName = "gpt-4o";

export async function main() {

const client = new ModelClient(endpoint, new AzureKeyCredential(token));

const response = await client.path("/chat/completions").post({
body: {
messages: [
{ role:"system", content: "You are a helpful assistant." },
{ role:"user", content: "What is the capital of France?" }
],
model: modelName
}
});

if (response.status !== "200") {
throw response.body.error;
}
console.log(response.body.choices[0].message.content);
}

main().catch((err) => {
console.error("The sample encountered an error:", err);
});
35 changes: 35 additions & 0 deletions samples/js/multi_turn.js
@@ -0,0 +1,35 @@
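// Multi-turn chat: the previous assistant reply is included in the message history so the follow-up question has context.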
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
const modelName = "gpt-4o";

export async function main() {

const client = new ModelClient(endpoint, new AzureKeyCredential(token));

const response = await client.path("/chat/completions").post({
body: {
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is the capital of France?" },
{ role: "assistant", content: "The capital of France is Paris." },
{ role: "user", content: "What about Spain?" },
],
model: modelName,
}
});

if (response.status !== "200") {
throw response.body.error;
}

for (const choice of response.body.choices) {
console.log(choice.message.content);
}
}

main().catch((err) => {
console.error("The sample encountered an error:", err);
});
47 changes: 47 additions & 0 deletions samples/js/streaming.js
@@ -0,0 +1,47 @@
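// Streaming chat: request a streamed completion and print content deltas as the server-sent events arrive.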
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";
import { createSseStream } from "@azure/core-sse";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
const modelName = "gpt-4o";

export async function main() {

const client = new ModelClient(endpoint, new AzureKeyCredential(token));

const response = await client.path("/chat/completions").post({
body: {
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Give me 5 good reasons why I should exercise every day." },
],
model: modelName,
stream: true
}
}).asNodeStream();

const stream = response.body;
if (!stream) {
throw new Error("The response stream is undefined");
}

if (response.status !== "200") {
throw new Error(`Failed to get chat completions: ${response.body.error}`);
}

const sseStream = createSseStream(stream);

for await (const event of sseStream) {
if (event.data === "[DONE]") {
return;
}
for (const choice of (JSON.parse(event.data)).choices) {
process.stdout.write(choice.delta?.content ?? ``);
}
}
}

main().catch((err) => {
console.error("The sample encountered an error:", err);
});
23 changes: 23 additions & 0 deletions samples/python/basic.py
@@ -0,0 +1,23 @@
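# Basic chat completion: send a single user question to gpt-4o and print the reply.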
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"
token = os.environ["GITHUB_TOKEN"]

client = ChatCompletionsClient(
endpoint=endpoint,
credential=AzureKeyCredential(token),
)

response = client.complete(
messages=[
SystemMessage(content="You are a helpful assistant."),
UserMessage(content="What is the capital of France?"),
],
model=model_name,
)

print(response.choices[0].message.content)
24 changes: 24 additions & 0 deletions samples/python/multi_turn.py
@@ -0,0 +1,24 @@
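# Multi-turn chat: the previous assistant reply is included in the message history so the follow-up question has context.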
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = ChatCompletionsClient(
endpoint=endpoint,
credential=AzureKeyCredential(token),
)

messages = [
SystemMessage(content="You are a helpful assistant."),
UserMessage(content="What is the capital of France?"),
AssistantMessage(content="The capital of France is Paris."),
UserMessage(content="What about Spain?"),
]

response = client.complete(messages=messages, model=model_name)

print(response.choices[0].message.content)
28 changes: 28 additions & 0 deletions samples/python/streaming.py
@@ -0,0 +1,28 @@
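# Streaming chat: iterate over the streamed response and print content deltas as they arrive.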
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = ChatCompletionsClient(
endpoint=endpoint,
credential=AzureKeyCredential(token),
)

response = client.complete(
stream=True,
messages=[
SystemMessage(content="You are a helpful assistant."),
UserMessage(content="Give me 5 good reasons why I should exercise every day."),
],
model=model_name,
)

for update in response:
if update.choices:
print(update.choices[0].delta.content or "", end="")

client.close()