Docs: Flesh out AI docs and move to top level nav #7317

Merged · 19 commits · May 15, 2024
File renamed without changes
54 changes: 14 additions & 40 deletions docs/guides/ai/index.rst → docs/ai/index.rst
@@ -1,9 +1,17 @@
.. _ref_guide_ai:
.. _ref_ai_overview:

==
AI
==

.. toctree::
:hidden:
:maxdepth: 3

javascript
python
reference

:edb-alt-title: Using EdgeDB AI

EdgeDB AI allows you to ship AI-enabled apps with practically no effort. It
@@ -126,21 +134,8 @@ to start running queries!
(``text-embedding-3-small``) is an OpenAI model, so it will require an
OpenAI provider to be configured as described above.

You may use any of these pre-configured embedding generation models:

**OpenAI**

* ``text-embedding-3-small``
* ``text-embedding-3-large``
* ``text-embedding-ada-002``

`Learn more about the OpenAI embedding models <https://platform.openai.com/docs/guides/embeddings/embedding-models>`__

**Mistral**

* ``mistral-embed``

`Learn more about the Mistral embedding model <https://docs.mistral.ai/capabilities/embeddings/#mistral-embeddings-api>`__
You may use any of :ref:`our pre-configured embedding generation models
<ref_ai_reference_embedding_models>`.

You may want to include multiple properties in your AI index. Fortunately, you
can define an AI index on an expression:
@@ -171,6 +166,7 @@ Simple, but you'll still need to generate embeddings from your query or pass in
existing embeddings. If your ultimate goal is retrieval-augmented generation
(i.e., RAG), we've got you covered.

.. _ref_ai_overview_rag:

Use RAG via HTTP
----------------
@@ -203,30 +199,8 @@ add ``"stream": true`` to your request JSON.
(``gpt-4-turbo-preview``) is an OpenAI model, so it will require an OpenAI
provider to be configured as described above.

You may use any of these text generation models:

**OpenAI**

* ``gpt-3.5-turbo``
* ``gpt-4-turbo-preview``

`Learn more about the OpenAI text generation models <https://platform.openai.com/docs/guides/text-generation>`__

**Mistral**

* ``mistral-small-latest``
* ``mistral-medium-latest``
* ``mistral-large-latest``

`Learn more about the Mistral text generation models <https://docs.mistral.ai/getting-started/models/>`__

**Anthropic**

* ``claude-3-haiku-20240307``
* ``claude-3-sonnet-20240229``
* ``claude-3-opus-20240229``

`Learn more about the Anthropic text generation models <https://docs.anthropic.com/claude/docs/models-overview>`__
You may use any of our supported :ref:`text generation models
<ref_ai_reference_text_generation_models>`.
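Since the body of this section is collapsed in the diff, here is a minimal plain-data sketch of what a RAG-over-HTTP request body might look like. The field names and the idea of an ``ai/rag`` endpoint are assumptions based on the surrounding text (the query, a context expression, a model name, and the ``"stream": true`` flag mentioned above), not the verified wire format:

```typescript
// Hypothetical sketch of a RAG HTTP request body. Field names are
// assumptions; consult the EdgeDB AI HTTP reference for the real shape.
const body = {
  model: "gpt-4-turbo-preview", // a configured text generation model
  query: "What color is the sky on Mars?",
  context: { query: "Astronomy" }, // EdgeQL expression selecting context objects
  stream: true, // request a streamed response, per the note above
};

// The JSON payload a client would POST to the instance's RAG endpoint.
const json = JSON.stringify(body);
```

A client would send this JSON in a POST request to the instance's AI endpoint along with an authentication token.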


Use RAG via JavaScript
173 changes: 173 additions & 0 deletions docs/ai/javascript.rst
@@ -0,0 +1,173 @@
.. _ref_ai_javascript:

==========
JavaScript
==========

:edb-alt-title: EdgeDB AI's JavaScript package

``@edgedb/ai`` offers a convenient wrapper around ``ext::ai``. Install it with
npm or via your package manager of choice:

.. code-block:: bash

$ npm install @edgedb/ai # or
$ yarn add @edgedb/ai # or
$ pnpm add @edgedb/ai # or
$ bun add @edgedb/ai


Usage
=====

Start by importing ``createClient`` from ``edgedb`` and ``createAI`` from
``@edgedb/ai``:

.. code-block:: typescript

import { createClient } from "edgedb";
import { createAI } from "@edgedb/ai";

Create an EdgeDB client, then create an instance of the AI client by passing in
the EdgeDB client along with any options for the AI provider (like the text
generation model):

.. code-block:: typescript

const client = createClient();

const gpt4Ai = createAI(client, {
model: "gpt-4-turbo-preview",
});

You may use any of the supported :ref:`text generation models
<ref_ai_reference_text_generation_models>`. Add your query as context:

.. code-block:: typescript

const astronomyAi = gpt4Ai.withContext({
query: "Astronomy"
});

This "query" property doesn't have to be a proper query at all. It can be any
expression that produces a set of objects, like ``Astronomy`` in the example
above, which returns all objects of that type. If you want to narrow the field,
you can instead give it a query like ``select Astronomy
filter .topic = "Mars"``.
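As plain data, the two context shapes just described might look like this (the type name ``Astronomy`` is the hypothetical example from the text; the query strings are ordinary EdgeQL expressions, not library-specific syntax):

```typescript
// Broad context: a bare type name selects every object of that type.
const broadContext = { query: "Astronomy" };

// Narrowed context: a full EdgeQL query filtering to a subset of objects.
const narrowContext = { query: 'select Astronomy filter .topic = "Mars"' };
```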

The default text generation prompt will ask your selected provider to limit
its answer to information provided in the context and will pass the queried
objects' AI index as context along with that prompt.

Call your AI client's ``queryRag`` method, passing in a text query:

.. code-block:: typescript

console.log(
await astronomyAi.queryRag("What color is the sky on Mars?")
);

You can chain additional calls of ``withContext`` or ``withConfig`` to create
additional AI clients, identical except for the newly specified values.

.. code-block:: typescript

const fastAstronomyAi = astronomyAi.withConfig({
model: "gpt-3.5-turbo",
});
console.log(
await fastAstronomyAi.queryRag("What color is the sky on Mars?")
);

const fastChemistryAi = fastAstronomyAi.withContext({
query: "Chemistry"
});
console.log(
await fastChemistryAi.queryRag("What is the atomic number of gold?")
);


API Reference
=============

.. js:function:: createAI( \
client: Client, \
options: Partial<AIOptions> = {} \
): EdgeDBAI

Creates an instance of ``EdgeDBAI`` with the specified client and options.

:param client:
An EdgeDB client instance.

:param string options.model:
Required. Specifies the AI model to use. This could be a version of GPT
or any other model supported by EdgeDB AI.

:param options.prompt:
Optional. Defines the input prompt for the AI model. The prompt can be
a simple string, an ID referencing a stored prompt, or a custom prompt
structure that includes roles and content for more complex
interactions. The default is the built-in system prompt.
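As a plain-data sketch of the three prompt forms described above (the field names and the placeholder UUID are illustrative assumptions, not the package's verified typings; check the ``@edgedb/ai`` type definitions for the authoritative shape):

```typescript
// 1. A simple string prompt.
const simplePrompt = "Answer using only the provided context.";

// 2. A reference to a stored prompt by ID (hypothetical field name and UUID).
const promptById = { id: "00000000-0000-0000-0000-000000000000" };

// 3. A custom prompt structure with roles and content (assumed field names).
const customPrompt = {
  custom: [
    { role: "system", content: "Answer using only the provided context." },
  ],
};
```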


EdgeDBAI
--------

Instances of ``EdgeDBAI`` offer methods for client configuration and utilizing
RAG.

Public methods
^^^^^^^^^^^^^^

.. js:method:: withConfig(options: Partial<AIOptions>): EdgeDBAI

Returns a new ``EdgeDBAI`` instance with updated configuration options.

:param string options.model:
Required. Specifies the AI model to use. This could be a version of GPT
or any other model supported by EdgeDB AI.

:param options.prompt:
Optional. Defines the input prompt for the AI model. The prompt can be
a simple string, an ID referencing a stored prompt, or a custom prompt
structure that includes roles and content for more complex
interactions. The default is the built-in system prompt.

.. js:method:: withContext(context: Partial<QueryContext>): EdgeDBAI

Returns a new ``EdgeDBAI`` instance with an updated query context.

:param string context.query:
Required. Specifies an expression to determine the relevant objects and
index to serve as context for text generation. You may set this to any
expression that produces a set of objects, even if it is not a
standalone query.
:param string context.variables:
Optional. Variable settings required for the context query.
:param string context.globals:
    Optional. Global variable values required for the context query.
:param number context.max_object_count:
Optional. A maximum number of objects to return from the context query.

.. js:method:: async queryRag( \
message: string, \
context: QueryContext = this.context \
): Promise<string>

Sends a query with context to the configured AI model and returns the
response as a string.

:param string message:
Required. The message to be sent to the text generation provider's API.
:param string context.query:
Required. Specifies an expression to determine the relevant objects and
index to serve as context for text generation. You may set this to any
expression that produces a set of objects, even if it is not a
standalone query.
:param string context.variables:
Optional. Variable settings required for the context query.
:param string context.globals:
    Optional. Global variable values required for the context query.
:param number context.max_object_count:
Optional. A maximum number of objects to return from the context query.