Commit: more docs (#1425)
levkk committed Apr 26, 2024
1 parent 4a82a57 commit 2b83a11
Showing 24 changed files with 443 additions and 333 deletions.
Binary file modified pgml-cms/docs/.gitbook/assets/fdw_1.png
Binary file modified pgml-cms/docs/.gitbook/assets/logical_replication_1.png
1 change: 0 additions & 1 deletion pgml-cms/docs/SUMMARY.md
@@ -41,7 +41,6 @@
* [Zero-shot Classification](api/sql-extension/pgml.transform/zero-shot-classification.md)
* [pgml.tune()](api/sql-extension/pgml.tune.md)
* [Client SDK](api/client-sdk/README.md)
* [Overview](api/client-sdk/getting-started.md)
* [Collections](api/client-sdk/collections.md)
* [Pipelines](api/client-sdk/pipelines.md)
* [Vector Search](api/client-sdk/search.md)
49 changes: 34 additions & 15 deletions pgml-cms/docs/api/apis.md
@@ -1,28 +1,47 @@
---
description: Overview of the PostgresML SQL API and SDK.
---

# API overview

PostgresML is a PostgreSQL extension that adds SQL functions to the database where it's installed. The functions work with modern machine learning algorithms and the latest open source LLMs while maintaining a stable API signature. They can be used by any application that connects to the database.
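To make the function-call style concrete, here is a minimal Python sketch that composes a parameterized call to `pgml.embed()`. The helper name and the example model and text are assumptions for illustration only; any parameterized-query driver could execute the resulting statement against a database with the extension installed.

```python
# Sketch: composing a parameterized call to a PostgresML SQL function.
# The helper name and example model/text are illustrative; the statement
# follows the pgml.embed(model, text) call style.
def embed_statement(model: str, text: str) -> tuple:
    """Return a parameterized statement and its arguments for pgml.embed()."""
    sql = "SELECT pgml.embed(%s, %s) AS embedding"
    return sql, (model, text)

sql, params = embed_statement("intfloat/e5-small", "PostgresML is amazing!")
print(sql)     # SELECT pgml.embed(%s, %s) AS embedding
print(params)  # ('intfloat/e5-small', 'PostgresML is amazing!')
```

Passing the model and text as bound parameters, rather than interpolating them into the SQL string, keeps the statement safe for arbitrary application input.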

In addition to the SQL API, we build and maintain a client SDK for JavaScript, Python and Rust. The SDK uses the same extension functionality to implement common ML & AI use cases, like retrieval-augmented generation (RAG), chatbots, and semantic & hybrid search engines.

Using the SDK is optional, and you can implement the same functionality with standard SQL queries. If you feel more comfortable using a programming language, the SDK can help you get started quickly.

## [SQL extension](sql-extension/)

The PostgreSQL extension provides all of the ML & AI functionality, like training models and running inference, via SQL functions. The functions are designed for ML practitioners who want to use dozens of ML algorithms to train models and run real time inference on live application data. Additionally, the extension provides access to the latest Hugging Face transformers for a wide range of NLP tasks.

### Functions

The following functions are implemented and maintained by the PostgresML extension:

| Function name | Description |
|---------------|-------------|
| [pgml.embed()](sql-extension/pgml.embed) | Generate embeddings inside the database using open source embedding models from Hugging Face. |
| [pgml.transform()](sql-extension/pgml.transform/) | Download and run the latest Hugging Face transformer models, like Llama, Mixtral, and many more, to perform various NLP tasks like text generation, summarization, and sentiment analysis. |
| [pgml.train()](sql-extension/pgml.train/) | Train a machine learning model on data from a Postgres table or view. Supports XGBoost, LightGBM, CatBoost and all Scikit-learn algorithms. |
| [pgml.deploy()](sql-extension/pgml.deploy) | Deploy a version of a model created with pgml.train(). |
| [pgml.predict()](sql-extension/pgml.predict/) | Perform real time inference on live application data using a model trained with pgml.train(). |
| [pgml.tune()](sql-extension/pgml.tune) | Run LoRA fine-tuning on an open source model from Hugging Face using data from a Postgres table or view. |

Together with standard database functionality provided by PostgreSQL, these functions allow you to create and manage the entire life cycle of a machine learning application.

## [Client SDK](client-sdk/)

The client SDK implements best practices and common use cases using the PostgresML SQL functions and standard PostgreSQL features. The SDK core is written in Rust, which manages creating and running queries, connection pooling, and error handling.

For each additional language we support (currently JavaScript and Python), we create and publish language-native bindings. This architecture ensures that all the programming languages we support have identical APIs and similar performance when interacting with PostgresML.

### Use cases

The SDK currently implements the following use cases:

| Use case | Description |
|----------|---------|
| [Collections](client-sdk/collections) | Manage documents, embeddings, full text and vector search indexes, and more, using one simple interface. |
| [Pipelines](client-sdk/pipelines) | Easily build complex queries to interact with collections using a programmable interface. |
| [Vector search](client-sdk/search) | Implement semantic search using in-database generated embeddings and ANN vector indexes. |
| [Document search](client-sdk/document-search) | Implement hybrid full text search using in-database generated embeddings and PostgreSQL tsvector indexes. |
255 changes: 239 additions & 16 deletions pgml-cms/docs/api/client-sdk/README.md
@@ -1,24 +1,247 @@
---
description: PostgresML client SDK for JavaScript, Python and Rust implements common use cases and PostgresML connection management.
---

# Client SDK

The client SDK can be installed using standard package managers for JavaScript, Python, and Rust. Since the SDK is written in Rust, the JavaScript and Python packages come with no additional dependencies.


## Installation

Installing the SDK into your project is as simple as:

{% tabs %}
{% tab title="JavaScript " %}
```bash
npm i pgml
```
{% endtab %}

{% tab title="Python " %}
```bash
pip install pgml
```
{% endtab %}
{% endtabs %}

## Getting started

The SDK uses the database to perform most of its functionality. Before continuing, make sure you've created a [PostgresML database](https://postgresml.org/signup) and have the `DATABASE_URL` connection string handy.

### Connect to PostgresML

The SDK automatically manages connections to PostgresML. The connection string can be specified as an argument to the collection constructor, or as an environment variable.

If your app follows the twelve-factor convention, we recommend you configure the connection in the environment using the `PGML_DATABASE_URL` variable:

```bash
export PGML_DATABASE_URL=postgres://user:password@sql.cloud.postgresml.org:6432/pgml_database
```
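As a sketch of the twelve-factor pattern, the hypothetical helper below (not part of the SDK) resolves the connection string from the environment and fails loudly when it's missing:

```python
import os

# Sketch: resolve the connection string the way a twelve-factor app would.
# PGML_DATABASE_URL is the variable the SDK reads; the helper itself is
# illustrative and not part of the SDK.
def resolve_database_url(env=None):
    env = os.environ if env is None else env
    url = env.get("PGML_DATABASE_URL")
    if not url:
        raise RuntimeError("PGML_DATABASE_URL is not set")
    return url
```

Failing at startup, rather than on the first query, makes a missing or misconfigured connection string much easier to spot in deployment logs.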

### Create a collection

The SDK uses asynchronous code, so you need to run it inside an async runtime. Both Python and JavaScript support async functions natively.

{% tabs %}
{% tab title="JavaScript " %}
```javascript
const pgml = require("pgml");

const main = async () => {
  const collection = pgml.newCollection("sample_collection");
};
```
{% endtab %}

{% tab title="Python" %}
```python
from pgml import Collection, Pipeline
import asyncio

async def main():
    collection = Collection("sample_collection")
```
{% endtab %}
{% endtabs %}

The above example imports the `pgml` module and creates a collection object. By itself, the collection only tracks document contents and identifiers, but once we add a pipeline, we can instruct the SDK to perform additional tasks when documents are inserted and retrieved.


### Create a pipeline

Continuing the example, we will create a pipeline called `sample_pipeline`, which will use in-database embeddings generation to automatically chunk and embed documents:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
// Add this code to the end of the main function from the above example.
const pipeline = pgml.newPipeline("sample_pipeline", {
  text: {
    splitter: { model: "recursive_character" },
    semantic_search: {
      model: "intfloat/e5-small",
    },
  },
});

await collection.add_pipeline(pipeline);
```
{% endtab %}

{% tab title="Python" %}
```python
# Add this code to the end of the main function from the above example.
pipeline = Pipeline(
    "sample_pipeline",
    {
        "text": {
            "splitter": {"model": "recursive_character"},
            "semantic_search": {
                "model": "intfloat/e5-small",
            },
        },
    },
)

await collection.add_pipeline(pipeline)
```
{% endtab %}
{% endtabs %}

The pipeline configuration is a key/value object, where the key is the name of a column in a document, and the value is the action the SDK should perform on that column.

In this example, the documents contain a column called `text`. We instruct the SDK to chunk the contents of that column with the recursive character splitter, and to embed those chunks using the Hugging Face `intfloat/e5-small` embeddings model.
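To build intuition for the chunking step, here is a toy sketch of what a recursive character splitter does: try coarse separators first and fall back to finer ones, and finally a hard cut, whenever a piece is still longer than the chunk size. This illustrates the idea only and is not the SDK's actual splitter.

```python
# Toy recursive character splitter: split on the coarsest separator that
# appears, recursing into pieces that are still too long, and hard-cut
# text that contains no separators at all.
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ")):
    if len(text) <= chunk_size:
        return [text]
    for sep in separators:
        if sep in text:
            chunks = []
            for piece in text.split(sep):
                chunks.extend(recursive_split(piece, chunk_size, separators))
            return [c for c in chunks if c]
    # No separator left: fall back to fixed-size cuts.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

print(recursive_split("one two three four", 9))  # ['one', 'two', 'three', 'four']
```

Splitting at natural boundaries like paragraphs and words keeps each chunk semantically coherent, which tends to produce better embeddings than arbitrary fixed-size cuts.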

### Add documents

Once the pipeline is configured, we can start adding documents:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
// Add this code to the end of the main function from the above example.
const documents = [
  {
    id: "Document One",
    text: "document one contents...",
  },
  {
    id: "Document Two",
    text: "document two contents...",
  },
];

await collection.upsert_documents(documents);
```
{% endtab %}

{% tab title="Python" %}
```python
# Add this code to the end of the main function in the above example.
documents = [
    {
        "id": "Document One",
        "text": "document one contents...",
    },
    {
        "id": "Document Two",
        "text": "document two contents...",
    },
]

await collection.upsert_documents(documents)
```
{% endtab %}
{% endtabs %}

If the same document `id` is used, the SDK computes the difference between existing and new documents and only updates the chunks that have changed.
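Conceptually, that change detection resembles comparing content fingerprints. The sketch below is an in-memory illustration of the idea only; the SDK tracks changes inside the database, and the hashing scheme here is an assumption for the example.

```python
import hashlib

# Toy illustration: detect which incoming documents differ from what is
# already stored, by comparing content hashes keyed on document id.
def changed_ids(stored_hashes: dict, incoming: list) -> list:
    def digest(doc):
        return hashlib.sha256(doc["text"].encode()).hexdigest()
    return [doc["id"] for doc in incoming if stored_hashes.get(doc["id"]) != digest(doc)]

stored = {"Document One": hashlib.sha256(b"document one contents...").hexdigest()}
incoming = [
    {"id": "Document One", "text": "document one contents..."},  # unchanged
    {"id": "Document Two", "text": "document two contents..."},  # new
]
print(changed_ids(stored, incoming))  # ['Document Two']
```

Skipping unchanged documents matters because re-chunking and re-embedding are the expensive steps of an upsert.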

### Search documents

Now that the documents are stored, chunked and embedded, we can start searching the collection:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
// Add this code to the end of the main function in the above example.
const results = await collection.vector_search(
  {
    query: {
      fields: {
        text: {
          query: "Something about a document...",
        },
      },
    },
    limit: 2,
  },
  pipeline,
);

console.log(results);
```
{% endtab %}

{% tab title="Python" %}
```python
# Add this code to the end of the main function in the above example.
results = await collection.vector_search(
    {
        "query": {
            "fields": {
                "text": {
                    "query": "Something about a document...",
                },
            },
        },
        "limit": 2,
    },
    pipeline,
)

print(results)
```
{% endtab %}
{% endtabs %}

We are using built-in vector search, powered by embeddings and the PostgresML [pgml.embed()](../sql-extension/pgml.embed) function, which embeds the `query` argument, compares it to the embeddings stored in the database, and returns the top two results, ranked by cosine similarity.
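For intuition about the ranking, here is a toy Python sketch of cosine-similarity scoring over hand-made two-dimensional vectors. Real embeddings have hundreds of dimensions and are generated in the database; the vectors and chunk texts below are made up for the example.

```python
import math

# Toy ranking: score each stored chunk against the query vector by cosine
# similarity and return the top k, mimicking what vector search does with
# real embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, chunks, k=2):
    scored = [(cosine(query_vec, vec), chunk) for chunk, vec in chunks]
    return sorted(scored, reverse=True)[:k]

chunks = [
    ("document one contents...", [0.9, 0.1]),
    ("document two contents...", [0.8, 0.2]),
    ("unrelated text", [0.0, 1.0]),
]
print(top_k([1.0, 0.0], chunks))
```

In production the comparison runs inside the database over an ANN index, so only the query text crosses the wire, not the embeddings.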

### Run the example

Since the SDK uses async code, both JavaScript and Python need a little bit of boilerplate to run it correctly:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
main().then(() => {
  console.log("SDK example complete");
});
```
{% endtab %}

{% tab title="Python" %}
```python
if __name__ == "__main__":
    asyncio.run(main())
```
{% endtab %}
{% endtabs %}

Once you run the example, you should see something like this in the terminal:

```bash
[
  {
    "chunk": "document one contents...",
    "document": {"id": "Document One", "text": "document one contents..."},
    "score": 0.9034339189529419,
  },
  {
    "chunk": "document two contents...",
    "document": {"id": "Document Two", "text": "document two contents..."},
    "score": 0.8983734250068665,
  },
]
```

