Add Anthropic support (#165)
See the docs changes for more details.
NickHeiner committed Jun 29, 2023
1 parent 89c87a8 commit 92b6e0f
Showing 13 changed files with 624 additions and 88 deletions.
11 changes: 11 additions & 0 deletions packages/ai-jsx/package.json
@@ -180,6 +180,16 @@
"default": "./dist/cjs/lib/openai.cjs"
}
},
"./lib/anthropic": {
"import": {
"types": "./dist/esm/lib/anthropic.d.ts",
"default": "./dist/esm/lib/anthropic.js"
},
"require": {
"types": "./dist/cjs/lib/anthropic.d.cts",
"default": "./dist/cjs/lib/anthropic.cjs"
}
},
"./react": {
"import": {
"types": "./dist/esm/react/index.d.ts",
@@ -291,6 +301,7 @@
"react": "^16.8.0 || ^17.0.0 || ^18.0.0"
},
"dependencies": {
"@anthropic-ai/sdk": "^0.5.0",
"@nick.heiner/openai-edge": "1.0.1-7",
"@nick.heiner/wandb-fork": "^0.5.2-5",
"axios": "^1.4.0",
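These export maps expose the new module under both module systems. A minimal sketch of importing it (assuming a project that depends on `ai-jsx`):

```tsx
// ESM consumers resolve to ./dist/esm/lib/anthropic.js;
// require('ai-jsx/lib/anthropic') resolves to the CJS build instead.
import { Anthropic, AnthropicChatModel } from 'ai-jsx/lib/anthropic';
```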
89 changes: 88 additions & 1 deletion packages/ai-jsx/readme.md
@@ -1 +1,88 @@
-# See [the root-level README](../../readme.md).
# AI.JSX — The AI Application Framework for JavaScript

[![Docs Site](https://img.shields.io/badge/Docs%20Site-docs.ai--jsx.com-orange)](https://docs.ai-jsx.com)
[![Discord Follow](https://dcbadge.vercel.app/api/server/MsKAeKF8kU?style=flat)](https://discord.gg/MsKAeKF8kU)
[![Twitter Follow](https://img.shields.io/twitter/follow/fixieai?style=social)](https://twitter.com/fixieai)

[<img src="../docs/static/img/loom.png" alt="AI.JSX intro video" width="50%">](https://www.loom.com/share/c13b0c73f8d34f9f962048b39a4794f6?sid=a693f220-ccb1-4e70-913a-c2eda86dd0ce)

AI.JSX is a framework for building AI applications using JavaScript and [JSX](https://react.dev/learn/writing-markup-with-jsx). While AI.JSX [is not React](https://docs.ai-jsx.com/is-it-react), it's designed to look and feel very similar while also integrating seamlessly with React-based projects. With AI.JSX, you use JSX not only to describe what your UI should look like, but also to describe how **Large Language Models**, such as ChatGPT, should integrate into the rest of your application. The end result is a powerful combination where _intelligence_ can be deeply embedded into the application stack.

AI.JSX is designed to give you two important capabilities out of the box:

1. An intuitive mechanism for orchestrating multiple LLM calls expressed as modular, reusable components.
1. The ability to seamlessly interweave UI components with your AI components. This means you can rely on the LLM to construct your UI dynamically from a set of provided React components.

AI.JSX can be used to create standalone LLM applications that can be deployed anywhere Node.js is supported, or it can be used as part of a larger React application. For an example of how to integrate AI.JSX into a React project, see the [NextJS demo package](/packages/nextjs-demo/) or [follow the tutorial](https://docs.ai-jsx.com/tutorial/part5). For an overview of all deployment architectures, see the [architecture overview](https://docs.ai-jsx.com/guides/architecture).

For more details on how AI.JSX works with React in general, see our [AI+UI guide](https://docs.ai-jsx.com/guides/ai-ui).

## Quickstart

1. Follow the [Getting Started Guide](https://docs.ai-jsx.com/getting-started)
1. Run through the [tutorial](https://docs.ai-jsx.com/category/tutorial)
1. Clone our [Hello World template](https://github.com/fixie-ai/ai-jsx-template) to start hacking
1. Check out the different examples in the [examples package](https://github.com/fixie-ai/ai-jsx/tree/main/packages/examples)
1. If you're new to AI, read the [Guide for AI Newcomers](https://docs.ai-jsx.com/guides/brand-new)

## Examples

You can play with live demos on our [live demo app](https://ai-jsx-nextjs-demo.vercel.app/) ([source](../nextjs-demo/)).

Here is a simple example using AI.JSX to generate an AI response to a prompt:

```tsx
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

const app = (
  <ChatCompletion>
    <UserMessage>Write a Shakespearean sonnet about AI models.</UserMessage>
  </ChatCompletion>
);
const renderContext = AI.createRenderContext();
const response = await renderContext.render(app);
console.log(response);
```

Here's a more complex example that uses the built-in `<Inline>` component to progressively generate multiple fields in a JSON object:

```tsx
import { Node } from 'ai-jsx';
import { Completion } from 'ai-jsx/core/completion';
import { Inline } from 'ai-jsx/core/inline';

function CharacterGenerator() {
  const inlineCompletion = (prompt: Node) => (
    <Completion stop={['"']} temperature={1.0}>
      {prompt}
    </Completion>
  );

  return (
    <Inline>
      Generate a character profile for a fantasy role-playing game in JSON format.{'\n'}
      {'{'}
      {'\n '}"name": "{inlineCompletion}",
      {'\n '}"class": "{inlineCompletion}",
      {'\n '}"race": "{inlineCompletion}",
      {'\n '}"alignment": "{inlineCompletion}",
      {'\n '}"weapons": "{inlineCompletion}",
      {'\n '}"spells": "{inlineCompletion}",
      {'\n}'}
    </Inline>
  );
}
```
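Each `inlineCompletion` call receives everything rendered before it inside the `<Inline>` block as its prompt, so later fields are generated with the earlier fields already filled in.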

For a full set of examples, see [the examples package](https://github.com/fixie-ai/ai-jsx/tree/main/packages/examples).

## Features

- Prompt engineering through modular, reusable components
- The ability to easily switch between model providers and LLM configurations (e.g., temperature); see the sketch below
- Built-in support for Tools (ReAct pattern), Document Question & Answering, Chain of Thought, and more
- The ability to directly interweave LLM calls with standard UI components, including letting the LLM render the UI from a set of provided components
- Built-in streaming support
- First-class support for NextJS and Create React App (more coming soon)
- Full support for LangChainJS
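
For example, switching the sonnet example above from the default OpenAI model to Anthropic is just a matter of wrapping the tree in a provider component. A minimal sketch (the model name and temperature are illustrative):

```tsx
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import { Anthropic } from 'ai-jsx/lib/anthropic';

const app = (
  // Everything under <Anthropic> uses Claude instead of the default model.
  <Anthropic chatModel="claude-instant-1" temperature={0.5}>
    <ChatCompletion>
      <UserMessage>Write a Shakespearean sonnet about AI models.</UserMessage>
    </ChatCompletion>
  </Anthropic>
);
console.log(await AI.createRenderContext().render(app));
```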

## Contributing

We welcome contributions! See [Contributing](../docs/docs/contributing/index.md) for how to get started.
13 changes: 12 additions & 1 deletion packages/ai-jsx/src/core/completion.tsx
@@ -9,6 +9,7 @@ import { Node, Component, RenderContext } from '../index.js';
import { AIJSXError, ErrorCode } from '../core/errors.js';
import { OpenAIChatModel, OpenAICompletionModel } from '../lib/openai.js';
import { getEnvVar } from '../lib/util.js';
import { AnthropicChatModel } from '../lib/anthropic.js';

/**
 * Represents properties passed to a given Large Language Model.
@@ -90,12 +91,22 @@ function AutomaticChatModel({ children, ...props }: ModelPropsWithChildren) {
      </OpenAIChatModel>
    );
  }

  if (getEnvVar('ANTHROPIC_API_KEY', false)) {
    return (
      <AnthropicChatModel model="claude-instant-1" {...props}>
        {children}
      </AnthropicChatModel>
    );
  }

  throw new AIJSXError(
    `No chat model was specified. To fix this, do one of the following:
1. Set the OPENAI_API_KEY or REACT_APP_OPENAI_API_KEY environment variable.
2. Set the OPENAI_API_BASE or REACT_APP_OPENAI_API_BASE environment variable.
-3. use an explicit ChatProvider component.`,
+3. Set the ANTHROPIC_API_KEY or REACT_APP_ANTHROPIC_API_KEY environment variable.
+4. Use an explicit ChatProvider component.`,
    ErrorCode.MissingChatModel,
    'user'
  );
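With this fallback in place, a bare `<ChatCompletion>` resolves its model from the environment: OpenAI is checked first, then Anthropic. A sketch of what this enables (assuming only `ANTHROPIC_API_KEY` is set):

```tsx
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

// No provider component needed: AutomaticChatModel routes this to
// <AnthropicChatModel model="claude-instant-1"> because ANTHROPIC_API_KEY
// is set and no OpenAI configuration is present.
const haiku = await AI.createRenderContext().render(
  <ChatCompletion>
    <UserMessage>Write a haiku about rain.</UserMessage>
  </ChatCompletion>
);
console.log(haiku);
```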
4 changes: 4 additions & 0 deletions packages/ai-jsx/src/core/errors.ts
@@ -20,6 +20,10 @@ export enum ErrorCode {
  NestedAIUIStreamsAreNotSupported = 1015,
  UnknownUIComponentId = 1016,
  UnknownSerializedComponentType = 1017,
  AnthropicDoesNotSupportCompletionModels = 1018,
  AnthropicDoesNotSupportSystemMessage = 1019,
  AnthropicDoesNotSupportFunctions = 1020,
  AnthropicAPIError = 1021,

  ModelOutputDidNotMatchConstraint = 2000,

183 changes: 183 additions & 0 deletions packages/ai-jsx/src/lib/anthropic.tsx
@@ -0,0 +1,183 @@
import AnthropicSDK from '@anthropic-ai/sdk';
import { getEnvVar } from './util.js';
import * as AI from '../index.js';
import { Node } from '../index.js';
import { ChatOrCompletionModelOrBoth } from './model.js';
import {
  AssistantMessage,
  ChatProvider,
  FunctionCall,
  FunctionResponse,
  ModelProps,
  SystemMessage,
  UserMessage,
  ModelPropsWithChildren,
} from '../core/completion.js';
import { AIJSXError, ErrorCode } from '../core/errors.js';

export const anthropicClientContext = AI.createContext<AnthropicSDK>(
  new AnthropicSDK({
    apiKey: getEnvVar('ANTHROPIC_API_KEY', false),
  })
);

type ValidCompletionModel = never;
/**
 * The set of valid Claude models.
 *
 * @see https://docs.anthropic.com/claude/reference/complete_post.
*/
type ValidChatModel =
  | 'claude-1'
  | 'claude-1-100k'
  | 'claude-instant-1'
  | 'claude-instant-1-100k'
  | 'claude-1.3'
  | 'claude-1.3-100k'
  | 'claude-1.2'
  | 'claude-1.0'
  | 'claude-instant-1.1'
  | 'claude-instant-1.1-100k'
  | 'claude-instant-1.0';

type AnthropicModelChoices = ChatOrCompletionModelOrBoth<ValidChatModel, ValidCompletionModel>;

/**
 * If you use an Anthropic model without specifying the max tokens for the completion, this value will be used as the default.
*/
export const defaultMaxTokens = 1000;
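// Usage sketch (illustrative, not part of this commit): override the default
// per-request with the standard `maxTokens` model prop, which this file
// forwards to the API as `max_tokens_to_sample`:
//   <AnthropicChatModel model="claude-instant-1" maxTokens={4000}>...</AnthropicChatModel>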

/**
 * An AI.JSX component that invokes an Anthropic Large Language Model.
 * @param children The children to render.
 * @param chatModel The chat model to use.
 * @param completionModel The completion model to use.
 * @param client The Anthropic client.
*/
export function Anthropic({
  children,
  chatModel,
  completionModel,
  client,
  ...defaults
}: { children: Node; client?: AnthropicSDK } & AnthropicModelChoices & ModelProps) {
  let result = children;

  if (client) {
    result = <anthropicClientContext.Provider value={client}>{children}</anthropicClientContext.Provider>;
  }

  if (chatModel) {
    result = (
      <ChatProvider component={AnthropicChatModel} {...defaults} model={chatModel}>
        {result}
      </ChatProvider>
    );
  }

  // TS is correct that this should never happen, but we'll check for it anyway.
  // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
  if (completionModel) {
    throw new AIJSXError(
      'Completion models are not supported by Anthropic',
      ErrorCode.AnthropicDoesNotSupportCompletionModels,
      'user'
    );
  }

  return result;
}
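// Usage sketch (illustrative, not part of this commit): supply a custom SDK
// client; the component places it in anthropicClientContext for all
// descendant AnthropicChatModel elements. `myKey` is a placeholder.
//   const client = new AnthropicSDK({ apiKey: myKey });
//   <Anthropic chatModel="claude-1" client={client}>...</Anthropic>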

interface AnthropicChatModelProps extends ModelPropsWithChildren {
  model: ValidChatModel;
}
export async function* AnthropicChatModel(
  props: AnthropicChatModelProps,
  { render, getContext, logger }: AI.ComponentContext
): AI.RenderableStream {
  const messageElements = await render(props.children, {
    stop: (e) =>
      e.tag == SystemMessage ||
      e.tag == UserMessage ||
      e.tag == AssistantMessage ||
      e.tag == FunctionCall ||
      e.tag == FunctionResponse,
  });
  yield AI.AppendOnlyStream;
  const messages = await Promise.all(
    messageElements.filter(AI.isElement).map(async (message) => {
      switch (message.tag) {
        case UserMessage:
          return `${AnthropicSDK.HUMAN_PROMPT}: ${await render(message)}`;
        case AssistantMessage:
          return `${AnthropicSDK.AI_PROMPT}: ${await render(message)}`;
        case SystemMessage:
          throw new AIJSXError(
            'Anthropic models do not support SystemMessage. Change your user message to instruct the model what to do.',
            ErrorCode.AnthropicDoesNotSupportSystemMessage,
            'user'
          );
        case FunctionCall:
        case FunctionResponse:
          throw new AIJSXError(
            'Anthropic models do not support functions.',
            ErrorCode.AnthropicDoesNotSupportFunctions,
            'user'
          );
        default:
          throw new AIJSXError(
            `ChatCompletion's prompts must be UserMessage or AssistantMessage, but this child was ${message.tag.name}`,
            ErrorCode.ChatCompletionUnexpectedChild,
            'internal'
          );
      }
    })
  );

  if (!messages.length) {
    throw new AIJSXError(
      "ChatCompletion must have at least one child that's UserMessage or AssistantMessage, but no such children were found.",
      ErrorCode.ChatCompletionMissingChildren,
      'user'
    );
  }

  messages.push(AnthropicSDK.AI_PROMPT);

  const anthropic = getContext(anthropicClientContext);
  const anthropicCompletionRequest: AnthropicSDK.CompletionCreateParams = {
    prompt: messages.join('\n\n'),
    max_tokens_to_sample: props.maxTokens ?? defaultMaxTokens,
    temperature: props.temperature,
    model: props.model,
    stop_sequences: props.stop,
    stream: true,
  };

  logger.debug({ anthropicCompletionRequest }, 'Calling createCompletion');

  let response: Awaited<ReturnType<typeof anthropic.completions.create>>;
  try {
    response = await anthropic.completions.create(anthropicCompletionRequest);
  } catch (err) {
    if (err instanceof AnthropicSDK.APIError) {
      throw new AIJSXError(
        err.message,
        ErrorCode.AnthropicAPIError,
        'runtime',
        Object.fromEntries(Object.entries(err))
      );
    }
    throw err;
  }
  let resultSoFar = '';
  for await (const completion of response) {
    resultSoFar += completion.completion;
    logger.trace({ completion }, 'Got Anthropic stream event');
    yield completion.completion;
  }

  logger.debug({ completion: resultSoFar }, 'Anthropic completion finished');

  return AI.AppendOnlyStream;
}
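Because the component yields an AppendOnlyStream and then each completion chunk, partial output flows through AI.JSX's normal streaming render path. A sketch of consuming it (assuming the render result's standard async-iterable frames):

```tsx
import * as AI from 'ai-jsx';
import { UserMessage } from 'ai-jsx/core/completion';
import { AnthropicChatModel } from 'ai-jsx/lib/anthropic';

const rendering = AI.createRenderContext().render(
  <AnthropicChatModel model="claude-instant-1">
    <UserMessage>Count to five.</UserMessage>
  </AnthropicChatModel>
);
// Each frame is a progressively longer prefix of the final completion text.
for await (const frame of rendering) {
  console.log(frame);
}
```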
8 changes: 8 additions & 0 deletions packages/ai-jsx/src/lib/model.tsx
@@ -0,0 +1,8 @@
/**
 * Helper for model components. This type is used to create prop types that must include at least a chatModel or a completionModel.
 *
 * @hidden
*/
export type ChatOrCompletionModelOrBoth<ValidChatModel extends string, ValidCompletionModel extends string> =
  | { chatModel: ValidChatModel; completionModel?: ValidCompletionModel }
  | { chatModel?: ValidChatModel; completionModel: ValidCompletionModel };
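A quick sketch of what this union enforces at the type level (the model names below are placeholders):

```ts
type Choices = ChatOrCompletionModelOrBoth<'chat-model', 'completion-model'>;

const chatOnly: Choices = { chatModel: 'chat-model' }; // OK
const completionOnly: Choices = { completionModel: 'completion-model' }; // OK
const both: Choices = { chatModel: 'chat-model', completionModel: 'completion-model' }; // OK
// @ts-expect-error: at least one of chatModel/completionModel is required
const neither: Choices = {};
```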
17 changes: 7 additions & 10 deletions packages/ai-jsx/src/lib/openai.tsx
@@ -35,6 +35,7 @@ import { Merge } from 'type-fest';
import { Logger } from '../core/log.js';
import { HttpError, AIJSXError, ErrorCode } from '../core/errors.js';
import { getEnvVar } from './util.js';
import { ChatOrCompletionModelOrBoth } from './model.js';

// https://platform.openai.com/docs/models/model-endpoint-compatibility
type ValidCompletionModel =
@@ -46,14 +47,12 @@ type ValidCompletionModel =

type ValidChatModel = 'gpt-4' | 'gpt-4-0314' | 'gpt-4-32k' | 'gpt-4-32k-0314' | 'gpt-3.5-turbo' | 'gpt-3.5-turbo-0301';

-type ChatOrCompletionModelOrBoth =
-  | { chatModel: ValidChatModel; completionModel?: ValidCompletionModel }
-  | { chatModel?: ValidChatModel; completionModel: ValidCompletionModel };
+type OpenAIModelChoices = ChatOrCompletionModelOrBoth<ValidChatModel, ValidCompletionModel>;

const decoder = new TextDecoder();

-function createOpenAIClient() {
-  return new OpenAIApi(
+export const openAiClientContext = AI.createContext<OpenAIApi>(
+  new OpenAIApi(
    new Configuration({
      apiKey: getEnvVar('OPENAI_API_KEY', false),
    }),
@@ -63,10 +62,8 @@ function createOpenAIClient() {
    getEnvVar('OPENAI_API_BASE', false) || undefined,
    // TODO: Figure out a better way to work around NextJS fetch blocking streaming
    (globalThis as any)._nextOriginalFetch ?? globalThis.fetch
-  );
-}
-
-export const openAiClientContext = AI.createContext<OpenAIApi>(createOpenAIClient());
+  )
+);

/**
 * An AI.JSX component that invokes an OpenAI Large Language Model.
@@ -81,7 +78,7 @@ export function OpenAI({
  completionModel,
  client,
  ...defaults
-}: { children: Node; client?: OpenAIApi } & ChatOrCompletionModelOrBoth & ModelProps) {
+}: { children: Node; client?: OpenAIApi } & OpenAIModelChoices & ModelProps) {
  let result = children;

  if (client) {

3 comments on commit 92b6e0f

@vercel vercel bot commented on 92b6e0f Jun 29, 2023

Successfully deployed to the following URLs:

ai-jsx-docs – ./packages/docs

ai-jsx-docs-git-main-fixie-ai.vercel.app
docs.ai-jsx.com
ai-jsx-docs-fixie-ai.vercel.app
ai-jsx-docs.vercel.app

@vercel vercel bot commented on 92b6e0f Jun 29, 2023

Successfully deployed to the following URLs:

ai-jsx-tutorial-nextjs – ./packages/tutorial-nextjs

ai-jsx-tutorial-nextjs.vercel.app
ai-jsx-tutorial-nextjs-fixie-ai.vercel.app
ai-jsx-tutorial-nextjs-git-main-fixie-ai.vercel.app

@vercel vercel bot commented on 92b6e0f Jun 29, 2023

Successfully deployed to the following URLs:

ai-jsx-nextjs-demo – ./packages/nextjs-demo

ai-jsx-nextjs-demo-git-main-fixie-ai.vercel.app
ai-jsx-nextjs-demo.vercel.app
ai-jsx-nextjs-demo-fixie-ai.vercel.app
