feat(api): reference claude-2 in examples (#50)
Co-authored-by: Stainless Bot <dev@stainlessapi.com>
RobertCraigie and stainless-bot committed Jul 11, 2023
1 parent 8dfff26 commit 7c53ded
Showing 2 changed files with 17 additions and 95 deletions.
106 changes: 14 additions & 92 deletions src/resources/completions.ts
@@ -66,51 +66,11 @@ export namespace CompletionCreateParams {
* The model that will complete your prompt.
*
* As we improve Claude, we develop new versions of it that you can query. This
* controls which version of Claude answers your request. Right now we are offering
* two model families: Claude and Claude Instant.
*
* Specifying any of the following models will automatically switch you to the
* newest compatible models as they are released:
*
* - `"claude-1"`: Our largest model, ideal for a wide range of more complex tasks.
* - `"claude-1-100k"`: An enhanced version of `claude-1` with a 100,000 token
* (roughly 75,000 word) context window. Ideal for summarizing, analyzing, and
* querying long documents and conversations for nuanced understanding of complex
* topics and relationships across very long spans of text.
* - `"claude-instant-1"`: A smaller model with far lower latency, sampling at
* roughly 40 words/sec! Its output quality is somewhat lower than the latest
* `claude-1` model, particularly for complex tasks. However, it is much less
* expensive and blazing fast. We believe that this model provides more than
* adequate performance on a range of tasks including text classification,
* summarization, and lightweight chat applications, as well as search result
* summarization.
* - `"claude-instant-1-100k"`: An enhanced version of `claude-instant-1` with a
* 100,000 token context window that retains its performance. Well-suited for
* high throughput use cases needing both speed and additional context, allowing
* deeper understanding from extended conversations and documents.
*
* You can also select specific sub-versions of the above models:
*
* - `"claude-1.3"`: Compared to `claude-1.2`, it's more robust against red-team
* inputs, better at precise instruction-following, better at code, and better
* at non-English dialogue and writing.
* - `"claude-1.3-100k"`: An enhanced version of `claude-1.3` with a 100,000 token
* (roughly 75,000 word) context window.
* - `"claude-1.2"`: An improved version of `claude-1`. It is slightly improved at
* general helpfulness, instruction following, coding, and other tasks. It is
* also considerably better with non-English languages. This model also has the
* ability to role play (in harmless ways) more consistently, and it defaults to
* writing somewhat longer and more thorough responses.
* - `"claude-1.0"`: An earlier version of `claude-1`.
* - `"claude-instant-1.1"`: Our latest version of `claude-instant-1`. It is better
* than `claude-instant-1.0` at a wide variety of tasks including writing,
* coding, and instruction following. It performs better on academic benchmarks,
* including math, reading comprehension, and coding tests. It is also more
* robust against red-teaming inputs.
* - `"claude-instant-1.1-100k"`: An enhanced version of `claude-instant-1.1` with
* a 100,000 token context window that retains its lightning fast 40 word/sec
* performance.
* - `"claude-instant-1.0"`: An earlier version of `claude-instant-1`.
* parameter controls which version of Claude answers your request. Right now we
* are offering two model families: Claude, and Claude Instant. You can use them by
* setting `model` to `"claude-2"` or `"claude-instant-1"`, respectively. See
* [models](https://docs.anthropic.com/claude/reference/selecting-a-model) for
* additional details.
*/
model: string;

@@ -124,7 +84,8 @@ export namespace CompletionCreateParams {
* const prompt = `\n\nHuman: ${userQuestion}\n\nAssistant:`;
* ```
*
* See our [comments on prompts](https://console.anthropic.com/docs/prompt-design)
* See our
* [comments on prompts](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
* for more context.
*/
prompt: string;
@@ -208,51 +169,11 @@ export namespace CompletionCreateParams {
* The model that will complete your prompt.
*
* As we improve Claude, we develop new versions of it that you can query. This
* controls which version of Claude answers your request. Right now we are offering
* two model families: Claude and Claude Instant.
*
* Specifying any of the following models will automatically switch you to the
* newest compatible models as they are released:
*
* - `"claude-1"`: Our largest model, ideal for a wide range of more complex tasks.
* - `"claude-1-100k"`: An enhanced version of `claude-1` with a 100,000 token
* (roughly 75,000 word) context window. Ideal for summarizing, analyzing, and
* querying long documents and conversations for nuanced understanding of complex
* topics and relationships across very long spans of text.
* - `"claude-instant-1"`: A smaller model with far lower latency, sampling at
* roughly 40 words/sec! Its output quality is somewhat lower than the latest
* `claude-1` model, particularly for complex tasks. However, it is much less
* expensive and blazing fast. We believe that this model provides more than
* adequate performance on a range of tasks including text classification,
* summarization, and lightweight chat applications, as well as search result
* summarization.
* - `"claude-instant-1-100k"`: An enhanced version of `claude-instant-1` with a
* 100,000 token context window that retains its performance. Well-suited for
* high throughput use cases needing both speed and additional context, allowing
* deeper understanding from extended conversations and documents.
*
* You can also select specific sub-versions of the above models:
*
* - `"claude-1.3"`: Compared to `claude-1.2`, it's more robust against red-team
* inputs, better at precise instruction-following, better at code, and better
* at non-English dialogue and writing.
* - `"claude-1.3-100k"`: An enhanced version of `claude-1.3` with a 100,000 token
* (roughly 75,000 word) context window.
* - `"claude-1.2"`: An improved version of `claude-1`. It is slightly improved at
* general helpfulness, instruction following, coding, and other tasks. It is
* also considerably better with non-English languages. This model also has the
* ability to role play (in harmless ways) more consistently, and it defaults to
* writing somewhat longer and more thorough responses.
* - `"claude-1.0"`: An earlier version of `claude-1`.
* - `"claude-instant-1.1"`: Our latest version of `claude-instant-1`. It is better
* than `claude-instant-1.0` at a wide variety of tasks including writing,
* coding, and instruction following. It performs better on academic benchmarks,
* including math, reading comprehension, and coding tests. It is also more
* robust against red-teaming inputs.
* - `"claude-instant-1.1-100k"`: An enhanced version of `claude-instant-1.1` with
* a 100,000 token context window that retains its lightning fast 40 word/sec
* performance.
* - `"claude-instant-1.0"`: An earlier version of `claude-instant-1`.
* parameter controls which version of Claude answers your request. Right now we
* are offering two model families: Claude, and Claude Instant. You can use them by
* setting `model` to `"claude-2"` or `"claude-instant-1"`, respectively. See
* [models](https://docs.anthropic.com/claude/reference/selecting-a-model) for
* additional details.
*/
model: string;

@@ -266,7 +187,8 @@ export namespace CompletionCreateParams {
* const prompt = `\n\nHuman: ${userQuestion}\n\nAssistant:`;
* ```
*
* See our [comments on prompts](https://console.anthropic.com/docs/prompt-design)
* See our
* [comments on prompts](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
* for more context.
*/
prompt: string;
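The docstring in this file prescribes a specific prompt shape for the Completions API. As a quick sketch of that format (the `userQuestion` value here is a placeholder, not part of the SDK):

```typescript
// Sketch of the prompt format described in the docstring above.
// `userQuestion` is a placeholder value for illustration only.
const userQuestion: string = 'Why is the sky blue?';
const prompt: string = `\n\nHuman: ${userQuestion}\n\nAssistant:`;

// Shows the literal newlines and turn markers in the final string.
console.log(JSON.stringify(prompt));
```

Note that the prompt must open with a `\n\nHuman:` turn and end with `\n\nAssistant:` so the model knows to continue as the Assistant.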
6 changes: 3 additions & 3 deletions tests/api-resources/completions.test.ts
@@ -12,20 +12,20 @@ describe('resource completions', () => {
test('create: only required params', async () => {
const response = await anthropic.completions.create({
max_tokens_to_sample: 256,
model: 'claude-1',
model: 'claude-2',
prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
});
});

test('create: required and optional params', async () => {
const response = await anthropic.completions.create({
max_tokens_to_sample: 256,
model: 'claude-1',
model: 'claude-2',
prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
stop_sequences: ['string', 'string', 'string'],
stream: false,
temperature: 0.7,
temperature: 1,
top_k: 5,
top_p: 0.7,
});
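The updated tests above exercise `completions.create` with the new `claude-2` model name. A minimal sketch of the same request parameters, assuming `@anthropic-ai/sdk` is installed and `ANTHROPIC_API_KEY` is set in the environment (the network call is left commented out so the sketch runs standalone):

```typescript
// Request parameters mirroring the updated test file: claude-2 instead of claude-1.
const params = {
  model: 'claude-2',
  max_tokens_to_sample: 256,
  prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
};

// To actually send the request with the SDK client:
// import Anthropic from '@anthropic-ai/sdk';
// const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
// const completion = await anthropic.completions.create(params);
// console.log(completion.completion);
```

Because `"claude-2"` is a major-version alias, requests written this way pick up newer compatible 2.x snapshots as they are released, per the linked model-selection docs.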
