2 changes: 1 addition & 1 deletion docs/docs/about/community.md
@@ -9,7 +9,7 @@ Welcome with a huge coconut hug 🥥⋆。˚🤗.

We are super excited for community contributions of all kinds - whether it's code improvements, documentation updates, issue reports, feature requests on [GitHub](https://github.com/cocoindex-io/cocoindex), and discussions in our [Discord](https://discord.com/invite/zpA9S2DR7s).

We would love to fostering an inclusive, welcoming, and supportive environment. Contributing to CocoIndex should feel collaborative, friendly and enjoyable for everyone. Together, we can build better AI applications through robust data infrastructure.
We would love to foster an inclusive, welcoming, and supportive environment. Contributing to CocoIndex should feel collaborative, friendly and enjoyable for everyone. Together, we can build better AI applications through robust data infrastructure.

:::tip Start hacking CocoIndex
Check out our [Contributing guide](./contributing) to get started!
4 changes: 2 additions & 2 deletions docs/docs/about/contributing.md
@@ -18,10 +18,10 @@ We tag issues with the ["good first issue"](https://github.com/cocoindex-io/coco
## How to Contribute
- If you decide to work on an issue, unless the PR can be sent immediately (e.g. just a few lines of code), we recommend you to leave a comment on the issue like **`I'm working on it`** or **`Can I work on this issue?`** to avoid duplicating work.
- For larger features, we recommend you to discuss with us first in our [Discord server](https://discord.com/invite/zpA9S2DR7s) to coordinate the design and work.
- Our [Discord server](https://discord.com/invite/zpA9S2DR7s) are constantly open. If you are unsure about anything, it is a good place to discuss! We'd love to collaborate and will always be friendly.
- Our [Discord server](https://discord.com/invite/zpA9S2DR7s) is constantly open. If you are unsure about anything, it is a good place to discuss! We'd love to collaborate and will always be friendly.

## Start hacking! Setting Up Development Environment
Following the steps below to get cocoindex build on latest codebase locally - if you are making changes to cocoindex funcionality and want to test it out.
Follow the steps below to get cocoindex built on the latest codebase locally - if you are making changes to cocoindex functionality and want to test it out.

- 🦀 [Install Rust](https://rust-lang.org/tools/install)

4 changes: 2 additions & 2 deletions docs/docs/core/flow_def.mdx
@@ -468,7 +468,7 @@ Then reference it when building a spec that takes an auth entry:
Note that CocoIndex backends use the key of an auth entry to identify the backend.

* Keep the key stable.
If the key doesn't change, it's considered to be the same backend (even if the underlying way to connect/authenticate change).
If the key doesn't change, it's considered to be the same backend (even if the underlying way to connect/authenticate changes).

* If a key is no longer referenced in any operation spec, keep it until the next flow setup / drop action,
so that cocoindex will be able to clean up the backends.
so that CocoIndex will be able to clean up the backends.
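
For context, a minimal sketch of how an auth entry with a stable key might be registered and referenced follows. The helper and spec names (`add_auth_entry`, `cocoindex.targets.Neo4jConnection`) are assumptions based on the surrounding docs and may differ from the actual API.

```python
import cocoindex

# Register the connection once under a stable key. The key - not its contents -
# identifies the backend, so keep it unchanged even if credentials rotate.
neo4j_conn = cocoindex.add_auth_entry(
    "Neo4jConnection",
    cocoindex.targets.Neo4jConnection(
        uri="bolt://localhost:7687",
        user="neo4j",
        password="cocoindex",
    ),
)

# Pass the returned reference into any spec that takes an auth entry,
# e.g. a Neo4j target spec's `connection` field.
```
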
8 changes: 4 additions & 4 deletions docs/docs/core/flow_methods.mdx
@@ -44,9 +44,9 @@ For a flow, its persistent backends need to be ready before it can run, including
The desired state of the backends for a flow is derived based on the flow definition itself.
CocoIndex supports two types of actions to manage the persistent backends automatically:

* *Setup* a flow, which will change the backends owned by the flow to a state to the desired state, e.g. create new tables for new flow, drop an existing table if the corresponding target is gone, add new column to a target table if a new field is collected, etc. It's no-op if the backend states are already in the desired state.
* *Setup* a flow, which will change the backends owned by the flow to the desired state, e.g. create new tables for a new flow, drop an existing table if the corresponding target is gone, add a new column to a target table if a new field is collected, etc. It's a no-op if the backend states are already in the desired state.

* *Drop* a flow, which will drop all backends owned by the flow. It's no-op if there's no existing backends owned by the flow (e.g. never setup or already dropped).
* *Drop* a flow, which will drop all backends owned by the flow. It's a no-op if there are no existing backends owned by the flow (e.g. never set up or already dropped).
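
As a rough illustration, both actions can also be triggered programmatically. This is only a sketch: it assumes the flow object exposes `setup()` and `drop()` methods, which may not match the actual API surface described in the sections below.

```python
import cocoindex

@cocoindex.flow_def(name="DemoFlow")
def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    ...  # sources, transforms, and collectors would be defined here

# setup() reconciles the backends with the flow definition; it is a no-op
# when everything is already in the desired state.
demo_flow.setup()

# drop() removes all backends owned by the flow; it is a no-op if none exist.
# demo_flow.drop()
```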

### CLI

@@ -138,7 +138,7 @@ This is to achieve best efficiency.

The `cocoindex update` subcommand creates/updates data in the target.

Once it's done, the target data is fresh up to the moment when the function is called.
Once it's done, the target data is fresh up to the moment when the command is called.

```sh
cocoindex update main.py
@@ -203,7 +203,7 @@ To perform live update, run the `cocoindex update` subcommand with `-L` option:
cocoindex update main.py -L
```

If there's at least one data source with change capture mechanism enabled, it will keep running until the aborted (e.g. by `Ctrl-C`).
If there's at least one data source with change capture mechanism enabled, it will keep running until aborted (e.g. by `Ctrl-C`).
Otherwise, it falls back to the same behavior as one time update, and will finish after a one-time update is done.

With a `--setup` option, it will also setup the flow first if needed.
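
For reference, the same live-update behavior can be driven from Python. This is a hedged sketch: it assumes a `FlowLiveUpdater` helper with a `wait()` method, which may differ from the actual library API.

```python
import cocoindex

@cocoindex.flow_def(name="DemoFlow")
def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    ...  # sources with change capture enabled would be added here

# The updater keeps running while any source has change capture enabled,
# mirroring `cocoindex update main.py -L`; otherwise it finishes after one pass.
with cocoindex.FlowLiveUpdater(demo_flow) as updater:
    updater.wait()  # blocks until aborted (e.g. Ctrl-C) or the one-time pass completes
```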
12 changes: 6 additions & 6 deletions docs/docs/getting_started/quickstart.md
@@ -7,10 +7,10 @@ import ReactPlayer from 'react-player'

# Build your first CocoIndex project

This guide will help you get up and running with CocoIndex in just a few minutes, that does:
This guide will help you get up and running with CocoIndex in just a few minutes. We'll build a project that does:
* Read files from a directory
* Perform basic chunking and embedding
* loads the data into a vector store (PG Vector)
* Load the data into a vector store (PG Vector)

<ReactPlayer controls url='https://www.youtube.com/watch?v=gv5R8nOXsWU' />

@@ -107,11 +107,11 @@ Notes:
3. A *data source* extracts data from an external source.
In this example, the `LocalFile` data source imports local files as a KTable (table with a key field, see [KTable](../core/data_types#ktable) for details), each row has `"filename"` and `"content"` fields.

4. After defining the KTable, we extended a new field `"chunks"` to each row by *transforming* the `"content"` field using `SplitRecursively`. The output of the `SplitRecursively` is also a KTable representing each chunk of the document, with `"location"` and `"text"` fields.
4. After defining the KTable, we extend a new field `"chunks"` to each row by *transforming* the `"content"` field using `SplitRecursively`. The output of the `SplitRecursively` is also a KTable representing each chunk of the document, with `"location"` and `"text"` fields.

5. After defining the KTable, we extended a new field `"embedding"` to each row by *transforming* the `"text"` field using `SentenceTransformerEmbed`.
5. After defining the KTable, we extend a new field `"embedding"` to each row by *transforming* the `"text"` field using `SentenceTransformerEmbed`.

6. In CocoIndex, a *collector* collects multiple entries of data together. In this example, the `doc_embeddings` collector collects data from all `chunk`s across all `doc`s, and using the collected data to build a vector index `"doc_embeddings"`, using `Postgres`.
6. In CocoIndex, a *collector* collects multiple entries of data together. In this example, the `doc_embeddings` collector collects data from all `chunk`s across all `doc`s, and uses the collected data to build a vector index `"doc_embeddings"`, using `Postgres`.
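
The full flow definition these notes describe is collapsed in this diff view. A condensed sketch consistent with notes 3-6 might look like the following; the parameter values (directory path, model name, chunk sizes) are illustrative, and the exact namespaces (`cocoindex.sources`, `cocoindex.functions`, `cocoindex.targets`) may differ between versions.

```python
import cocoindex

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # 3. Import local files as a KTable with "filename" and "content" fields.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    doc_embeddings = data_scope.add_collector()

    with data_scope["documents"].row() as doc:
        # 4. Extend a "chunks" field by splitting the content into a chunk KTable.
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=2000, chunk_overlap=500)

        with doc["chunks"].row() as chunk:
            # 5. Extend an "embedding" field on each chunk.
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))

            # 6. Collect one entry per chunk across all documents.
            doc_embeddings.collect(
                filename=doc["filename"],
                location=chunk["location"],
                text=chunk["text"],
                embedding=chunk["embedding"])

    # Export the collected rows as a vector index in Postgres (pgvector).
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.targets.Postgres(),
        primary_key_fields=["filename", "location"],
        vector_indexes=[cocoindex.VectorIndexDef(
            field_name="embedding",
            metric=cocoindex.VectorSimilarityMetric.COSINE_SIMILARITY)])
```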

## Step 3: Run the indexing pipeline and queries

@@ -271,7 +271,7 @@ Now we can run the same Python file, which will run the new added main logic:
python quickstart.py
```

It will ask you to enter a query and it will return the top 10 results.
It will ask you to enter a query and it will return the top 5 results.

## Next Steps

13 changes: 6 additions & 7 deletions docs/docs/ops/sources.md
@@ -111,10 +111,9 @@ This is how to setup:

* In the [Amazon S3 Console](https://s3.console.aws.amazon.com/s3/home), open your S3 bucket. Under *Properties* tab, click *Create event notification*.
* Fill in an arbitrary event name, e.g. `S3ChangeNotifications`.
* If you want your AmazonS3 data source expose a subset of files sharing a prefix, set the same prefix here. Otherwise, leave it empty.
* If you want your AmazonS3 data source to expose a subset of files sharing a prefix, set the same prefix here. Otherwise, leave it empty.
* Select the following event types: *All object create events*, *All object removal events*.
* Select *SQS queue* as the destination, and specify the SQS queue you created above.
and enable *Change Event Notifications* for your bucket, and specify the SQS queue as the destination.

AWS's [Guide of Configuring a Bucket for Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html#step1-create-sqs-queue-for-notification) provides more details.

@@ -141,7 +140,7 @@ The spec takes the following fields:
:::info

We will delete messages from the queue after they're processed.
If there're unrelated messages in the queue (e.g. test messages that SQS will send automatically on queue creation, messages for a different bucket, for non-included files, etc.), we will delete the message upon receiving it, to avoid keeping receiving irrelevant messages again and again after they're redelivered.
If there are unrelated messages in the queue (e.g. test messages that SQS will send automatically on queue creation, messages for a different bucket, for non-included files, etc.), we will delete the message upon receiving it, to avoid repeatedly receiving irrelevant messages after they're redelivered.

:::
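
Putting the pieces together, a minimal sketch of an `AmazonS3` source wired to the SQS queue might look like this. The field names (`bucket_name`, `prefix`, `included_patterns`, `sqs_queue_url`) are assumptions based on the spec description above, and the bucket and queue values are placeholders.

```python
import cocoindex

@cocoindex.flow_def(name="S3Docs")
def s3_docs_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.AmazonS3(
            bucket_name="my-docs-bucket",   # placeholder bucket
            prefix="docs/",                 # match the prefix used in the event notification
            included_patterns=["*.md"],
            # Placeholder queue URL; point it at the SQS queue configured above.
            sqs_queue_url="https://sqs.us-east-1.amazonaws.com/123456789012/S3ChangeNotifications",
        ))
```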

@@ -253,12 +252,12 @@ The spec takes the following fields:
it's typically cheaper than a full refresh by setting the [refresh interval](../core/flow_def#refresh-interval) especially when the folder contains a large number of files.
So you can usually set it with a smaller value compared to the `refresh_interval`.

On the other hand, this only detects changes for files still exists.
If the file is deleted (or the current account no longer has access to), this change will not be detected by this change stream.
On the other hand, this only detects changes for files that still exist.
If the file is deleted (or the current account no longer has access to it), this change will not be detected by this change stream.

So when a `GoogleDrive` source enabled `recent_changes_poll_interval`, it's still recommended to set a `refresh_interval`, with a larger value.
So when a `GoogleDrive` source has `recent_changes_poll_interval` enabled, it's still recommended to set a `refresh_interval`, with a larger value.
So that most changes can be covered by polling recent changes (with low latency, like 10 seconds), and remaining changes (files no longer exist or accessible) will still be covered (with a higher latency, like 5 minutes, and should be larger if you have a huge number of files like 1M).
In reality, configure them based on your requirement: how freshness do you need to target index to be?
In reality, configure them based on your requirement: how fresh do you need the target index to be?

:::
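
To make the recommendation concrete, a sketch combining both knobs might look like this. The spec field names (`service_account_credential_path`, `root_folder_ids`, `recent_changes_poll_interval`) and the `refresh_interval` argument are assumptions based on the descriptions above; the path and folder ID are placeholders.

```python
import datetime
import cocoindex

@cocoindex.flow_def(name="GDriveDocs")
def gdrive_docs_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.GoogleDrive(
            service_account_credential_path="/path/to/service_account.json",  # placeholder path
            root_folder_ids=["FOLDER_ID"],                                     # placeholder folder ID
            # Low-latency polling for changes to files that still exist and are accessible.
            recent_changes_poll_interval=datetime.timedelta(seconds=10),
        ),
        # Higher-latency full refresh to catch deletions and lost access.
        refresh_interval=datetime.timedelta(minutes=5),
    )
```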

2 changes: 1 addition & 1 deletion docs/docs/ops/targets.md
@@ -413,7 +413,7 @@ If you don't have a Neo4j database, you can start a Neo4j database using our doc
docker compose -f <(curl -L https://raw.githubusercontent.com/cocoindex-io/cocoindex/refs/heads/main/dev/neo4j.yaml) up -d
```

If will bring up a Neo4j instance, which can be accessed by username `neo4j` and password `cocoindex`.
This will bring up a Neo4j instance, which can be accessed by username `neo4j` and password `cocoindex`.
You can access the Neo4j browser at [http://localhost:7474](http://localhost:7474).

:::warning