add _record function for LangtailPrompts and implicit completion flow #18

Closed · wants to merge 9 commits
1 change: 1 addition & 0 deletions .github/workflows/main.yml
@@ -50,6 +50,7 @@ jobs:
- name: run tests
env:
LANGTAIL_API_KEY: ${{secrets.LANGTAIL_API_KEY}}
OPENAI_API_KEY: ${{secrets.OPENAI_API_KEY}}
run: pnpm test

publish:
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,10 @@
# Changelog

## [Unreleased]

- change return type of `build` to `Promise<IPromptObject>`
- add `lt.completions.create` method to directly call openAI SDK with the output of `build` method

# 0.3.1

- fix next.js compatibility
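To make the two changelog entries above concrete, here is a minimal before/after sketch pieced together from the README changes in this PR; it assumes `lt` is a `LangtailPrompts` instance, `openai` an OpenAI SDK client, and `playgroundState` the result of an earlier `lt.get(...)` call:

```ts
// before (0.3.x): build() returned the OpenAI body directly
// const openAiBody = lt.build(playgroundState, { variables: { topic: "iron man" } })
// const joke = await openai.chat.completions.create(openAiBody)

// after (this PR): build() returns a prompt object
const preparedPrompt = lt.build(playgroundState, {
  variables: { topic: "iron man" },
})

// option 1: keep calling OpenAI yourself with the raw body
const viaOpenAI = await openai.chat.completions.create(preparedPrompt.toOpenAI())

// option 2: proxyless call through LangtailPrompts, logged in Langtail
const viaLangtail = await lt.completions.create(preparedPrompt)
```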
29 changes: 25 additions & 4 deletions README.md
@@ -162,18 +162,19 @@ const playgroundState = await lt.get({
renders your template and builds the final OpenAI-compatible payload:

```ts
const openAiBody = lt.build(playgroundState, {
const preparedPrompt = lt.build(playgroundState, {
stream: true,
variables: {
topic: "iron man",
},
})
```

openAiBody now contains this object:
preparedPrompt now contains this object:

```js
{
"stream": true,
"frequency_penalty": 0,
"max_tokens": 800,
"messages": [
@@ -194,9 +195,29 @@ Notice that your langtail template was replaced with a variable passed in. You c
```ts
import OpenAI from "openai"

const openai = new OpenAI()
const openAI = new OpenAI()

const joke = await openai.chat.completions.create(openAiBody)
const jokeCompletion = await openAI.chat.completions.create(
preparedPrompt.toOpenAI(),
)
```

This way you are still using Langtail prompts without exposing potentially sensitive data in your variables.
If you want to go proxyless while keeping your logs visible in Langtail, pass the `preparedPrompt` to `lt.completions.create` like this:

### Proxyless with Langtail logs

```ts
const lt = new LangtailPrompts({
apiKey: "<LANGTAIL_API_KEY>",
openAIKey: "<OPENAI_API>", // required for completions.create to work; you can also set it via the OPENAI_API_KEY env variable
})

const preparedPrompt = lt.build(playgroundState, {
variables: {
topic: "iron man",
},
})

const jokeCompletion = await lt.completions.create(preparedPrompt) // returns an OpenAI ChatCompletion and the request is visible in Langtail logs. Variables pass
```
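Streaming is not covered by the README snippet above, but `build` accepts `stream: true` and the `ChatCompletionsCreateParams` type added in this PR includes the streaming shape. A hedged sketch of what that might look like, assuming `lt.completions.create` mirrors the OpenAI SDK and resolves to an async-iterable stream of chunks when the prompt was built with `stream: true`:

```ts
const streamingPrompt = lt.build(playgroundState, {
  stream: true,
  variables: {
    topic: "iron man",
  },
})

// assumption: with stream: true the call yields chunks like the OpenAI SDK does
const stream = await lt.completions.create(streamingPrompt)

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "")
}
```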
3 changes: 3 additions & 0 deletions cspell.json
@@ -5,7 +5,10 @@
"dictionaries": [],
"words": [
"Aragorn",
"datetime",
"Langtail",
"logprobs",
"proxyless",
"undici"
],
"ignoreWords": [],
14 changes: 10 additions & 4 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "langtail",
"version": "0.3.1",
"version": "0.4.0-beta-0",
"description": "",
"main": "./dist/LangtailNode.js",
"packageManager": "pnpm@8.15.6",
@@ -22,6 +22,7 @@
"natural language processing",
"gpt-3",
"gpt-4",
"openrouter",
"anthropic"
],
"authors": [
@@ -54,17 +55,21 @@
"require": "./dist/template.js",
"import": "./dist/template.mjs",
"types": "./dist/template.d.ts"
},
"./dist/dataSchema": {
"require": "./dist/dataSchema.js",
"import": "./dist/dataSchema.mjs",
"types": "./dist/dataSchema.d.ts"
}
},
"files": [
"dist"
],
"dependencies": {
"@asteasolutions/zod-to-openapi": "^7.0.0",
"@langtail/handlebars-evalless": "^0.1.1",
"date-fns": "^3.6.0",
"handlebars": "^4.7.8",
"openai": "^4.43.0",
"openai": "^4.46.1",
"query-string": "^9.0.0",
"zod": "^3.23.8"
},
@@ -79,7 +84,8 @@
"entryPoints": [
"src/LangtailNode.ts",
"src/template.ts",
"src/getOpenAIBody.ts"
"src/getOpenAIBody.ts",
"src/dataSchema.ts"
]
}
}
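The new `./dist/dataSchema` entry point in `exports` makes the zod schemas available to package consumers. A sketch of validating streamed chunks with it, assuming the published package name `langtail` (as in `package.json`) and that `ChatCompletionChunkSchema` is exported from that entry point, as the specs in this PR import it; `stream` is the async iterable from the streaming sketch above:

```ts
import { ChatCompletionChunkSchema } from "langtail/dist/dataSchema"

// validate every streamed chunk against the published zod schema;
// parse() throws if a chunk does not match the expected shape
for await (const chunk of stream) {
  ChatCompletionChunkSchema.parse(chunk)
}
```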
27 changes: 5 additions & 22 deletions pnpm-lock.yaml


6 changes: 3 additions & 3 deletions src/LangtailNode.spec.ts
@@ -2,7 +2,7 @@ import { LangtailNode, baseURL } from "./LangtailNode"
import "dotenv/config"
import { describe, expect, it } from "vitest"
import nock from "nock"
import { openAIStreamingResponseSchema } from "./dataSchema"
import { ChatCompletionChunkSchema } from "./dataSchema"

const lt = new LangtailNode()

@@ -24,13 +24,13 @@ describe("LangtailNode", () => {
for await (const part of proxyCompletion) {
partCount++

openAIStreamingResponseSchema.parse(part)
ChatCompletionChunkSchema.parse(part)
}

expect(partCount > 1).toBe(true)
})

it("should not record", async (t) => {
it("should not record this completion in the logs", async (t) => {
nock(baseURL) // nock works by intercepting requests at the network level; if OpenAI switches to undici we will need to intercept differently
.post("/chat/completions")
.reply(200, function (uri, req) {
4 changes: 4 additions & 0 deletions src/LangtailNode.ts
@@ -22,6 +22,10 @@ export interface ILangtailExtraProps extends OpenAiBodyType {
metadata?: Record<string, any>
}

export type ChatCompletionsCreateParams =
| (ChatCompletionCreateParamsStreaming & ILangtailExtraProps)
| (ChatCompletionCreateParamsNonStreaming & ILangtailExtraProps)

export class LangtailNode {
prompts: LangtailPrompts
chat: {
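The `ChatCompletionsCreateParams` union added above keeps a single `create` signature for both streaming and non-streaming calls. A minimal illustration of narrowing that union on the `stream` flag; this is not the actual `LangtailNode` implementation, which is outside this hunk:

```ts
import type { ChatCompletionsCreateParams } from "./LangtailNode"

// illustration only: TypeScript narrows the union on the `stream` discriminant
function describeCall(params: ChatCompletionsCreateParams): string {
  if (params.stream === true) {
    // narrowed to ChatCompletionCreateParamsStreaming & ILangtailExtraProps
    return `streaming call to ${params.model}`
  }
  // narrowed to ChatCompletionCreateParamsNonStreaming & ILangtailExtraProps
  return `non-streaming call to ${params.model}`
}
```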
64 changes: 50 additions & 14 deletions src/LangtailPrompts.spec.ts
@@ -2,7 +2,7 @@ import "dotenv/config"
import { describe, expect, it } from "vitest"

import { LangtailPrompts } from "./LangtailPrompts"
import { openAIStreamingResponseSchema } from "./dataSchema"
import { ChatCompletionChunkSchema } from "./dataSchema"

const lt = new LangtailPrompts({
apiKey: process.env.LANGTAIL_API_KEY!,
@@ -17,12 +17,23 @@ describe(
it("should return the correct path for project prompt", () => {
const path = lt._createPromptPath({
prompt: "prompt",
environment: "staging",
environment: "preview",
version: "6vy19bmp",
})

expect(path).toBe(
"https://api.langtail.com/project-prompt/prompt/staging?v=6vy19bmp",
"https://api.langtail.com/project-prompt/prompt/preview?v=6vy19bmp",
)
})

it("staging with no version parameter", () => {
const path = lt._createPromptPath({
prompt: "prompt",
environment: "staging",
})

expect(path).toBe(
"https://api.langtail.com/project-prompt/prompt/staging",
)
})

@@ -35,23 +46,23 @@

const path = ltProject._createPromptPath({
prompt: "prompt",
environment: "staging",
environment: "preview",
version: "6vy19bmp",
})

expect(path).toBe(
"https://api.langtail.com/some-workspace/ci-tests-project/prompt/staging?v=6vy19bmp",
"https://api.langtail.com/some-workspace/ci-tests-project/prompt/preview?v=6vy19bmp",
)

const pathForPromptConfig = ltProject._createPromptPath({
prompt: "prompt",
environment: "staging",
environment: "preview",
version: "6vy19bmp",
configGet: true,
})

expect(pathForPromptConfig).toBe(
"https://api.langtail.com/some-workspace/ci-tests-project/prompt/staging?open-ai-completion-config-payload=true&v=6vy19bmp",
"https://api.langtail.com/some-workspace/ci-tests-project/prompt/preview?open-ai-completion-config-payload=true&v=6vy19bmp",
)
})
})
@@ -98,11 +109,12 @@
},
stream: true,
})

let partCount = 0
for await (const part of proxyCompletion) {
partCount++

openAIStreamingResponseSchema.parse(part)
ChatCompletionChunkSchema.parse(part)
}

expect(partCount > 1).toBe(true)
@@ -153,17 +165,17 @@

describe("build", () => {
const ltLocal = new LangtailPrompts({
// baseURL: "https://api-staging.langtail.com",
// baseURL: "https://api-staging.langtail.com", // uncomment this line to test against the staging prompt API
apiKey: process.env.LANGTAIL_API_KEY!,
})

it("should return the openAI body user can use with openai client", async () => {
const playgroundState = await ltLocal.get({
const prompt = await ltLocal.get({
prompt: "optional-var-test",
environment: "preview",
version: "c8hrwdiz",
})
expect(playgroundState).toMatchInlineSnapshot(`
expect(prompt).toMatchInlineSnapshot(`
{
"chatInput": {
"optionalExtra": "",
@@ -195,15 +207,14 @@
}
`)

const openAiBody = ltLocal.build(playgroundState, {
const promptObj = ltLocal.build(prompt, {
stream: true,
variables: {
optionalExtra: "This is an optional extra",
},

})

expect(openAiBody).toMatchInlineSnapshot(`
expect(promptObj.toOpenAI()).toMatchInlineSnapshot(`
{
"frequency_penalty": 0,
"max_tokens": 800,
@@ -224,6 +235,31 @@
`)
})
})

describe("completion proxyless use case", () => {
it("should return completion for ai clock prompt", async () => {
const ltLocal = new LangtailPrompts({
baseURL: "https://api-staging.langtail.com",
apiKey:
"lt-eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJpc3MiOiJsYW5ndGFpbC1hcGkiLCJzdWIiOiJjbG11cTVndW8wMDA0bDkwOHZvbjFvMjhmIiwianRpIjoiY2x1MThrczg0MDAwMTl1Y2JsOGFueHl5ZCIsInJhdGVMaW1pdCI6bnVsbCwiaWF0IjoxNzExMDI1Nzg5fQ.pXT-4CsIenb1VchGaSMxfn7ZBeQHdASWGSs-r7Ryk9uVrfgk7ju5bFDRHWY9N6ua42SrwTx75m5u6Un4wxONUQ",
})

const promptPlaygroundState = await ltLocal.get({
prompt: "ai-clock",
environment: "preview",
version: "yjxvsqwx",
})
const preparedPrompt = ltLocal.build(promptPlaygroundState, {
variables: {
time: "13:11",
},
})

const completion = await ltLocal.completions.create(preparedPrompt)

expect(completion.choices[0].message.content.length > 10).toBeTruthy()
})
})
},
{ timeout: 20000 },
)