12 changes: 6 additions & 6 deletions docs/20-dev-env/1-dev-env-setup.mdx
@@ -14,7 +14,7 @@ import Screenshot from "@site/src/components/Screenshot";

<Screenshot url="https://play.instruqt.com" src="img/screenshots/20-dev-env/1-dev-env-setup/instruqt/1-resume-sandbox.png" alt="Resume sandbox" />

In the Explorer menu, navigate to `genai-devday-notebooks` > `notebooks` > `ai-agents-lab.ipynb`. This is the Jupyter Notebook you will be using throughout this lab.
In the Explorer menu, navigate to `genai-devday-notebooks` > `labs` > `ai-agents-lab.ipynb`. This is the Jupyter Notebook you will be using throughout this lab.

<Screenshot url="https://play.instruqt.com" src="img/screenshots/20-dev-env/1-dev-env-setup/instruqt/2-nav-notebook.png" alt="Navigate to the notebook" />

@@ -29,7 +29,7 @@ import Screenshot from "@site/src/components/Screenshot";

<Screenshot url="https://github.com/codespaces" src="img/screenshots/20-dev-env/1-dev-env-setup/codespaces/1-resume-codespace.png" alt="Resume codespace" />

Give the codespace a few seconds to restart. When files appear in the Explorer tab, click on the file named `ai-agents-lab.ipynb` under `notebooks`. This is the Jupyter Notebook you will be using throughout this lab.
Give the codespace a few seconds to restart. When files appear in the Explorer tab, click on the file named `ai-agents-lab.ipynb` under `labs`. This is the Jupyter Notebook you will be using throughout this lab.

<Screenshot url="https://github.com/codespaces" src="img/screenshots/20-dev-env/1-dev-env-setup/codespaces/2-nav-notebook.png" alt="Navigate to the notebook" />
</TabItem>
@@ -89,7 +89,7 @@ You will also see the default databases in the cluster appear under **Connection

You will be filling in code in a Jupyter Notebook during this lab, so let's get set up with that next!

Within the sandbox, click on the files icon in the left navigation bar of the IDE. In the Explorer menu, navigate to `genai-devday-notebooks` > `notebooks` > `ai-agents-lab.ipynb` to open the Jupyter Notebook for this lab.
Within the sandbox, click on the files icon in the left navigation bar of the IDE. In the Explorer menu, navigate to `genai-devday-notebooks` > `labs` > `ai-agents-lab.ipynb` to open the Jupyter Notebook for this lab.

<Screenshot url="https://play.instruqt.com" src="img/screenshots/20-dev-env/1-dev-env-setup/instruqt/2-nav-notebook.png" alt="Navigate to the notebook" />

@@ -143,7 +143,7 @@ You will also see the default databases in the cluster appear under **Connection

You will be filling in code in a Jupyter Notebook during this lab, so let's get set up with that next!

Within the codespace, click on the files icon in the left navigation bar of the IDE. In the Explorer menu, under `notebooks`, click on the file named `ai-agents-lab.ipynb` to open the Jupyter Notebook for this lab.
Within the codespace, click on the files icon in the left navigation bar of the IDE. In the Explorer menu, under `labs`, click on the file named `ai-agents-lab.ipynb` to open the Jupyter Notebook for this lab.

<Screenshot url="https://github.com/codespaces" src="img/screenshots/20-dev-env/1-dev-env-setup/codespaces/2-nav-notebook.png" alt="Navigate to the notebook" />

@@ -161,10 +161,10 @@ To run the lab locally, follow the steps below:
git clone https://github.com/mongodb-developer/genai-devday-notebooks.git
```

* `cd` into the `notebooks` directory of the cloned repository:
* `cd` into the `labs` directory of the cloned repository:

```
cd genai-devday-notebooks/notebooks
cd genai-devday-notebooks/labs
```

* Create and activate a Python virtual environment:
44 changes: 31 additions & 13 deletions docs/20-dev-env/2-setup-pre-reqs.mdx
@@ -1,26 +1,44 @@
# 👐 Setup prerequisites

Select the LLM provider recommended by your instructor, and run the cells under the **Step 1: Setup prerequisites** section in the notebook.
Set the passkey and LLM provider given to you by your instructor, and run the cells under the **Step 1: Setup prerequisites** section in the notebook.

:::info
### Expired passkey or no passkey

Additional steps **if you are running the lab locally**:
Passkeys are provided to you at MongoDB Developer Days so that you can easily obtain API keys for the LLM and embedding APIs used in the workshop. These passkeys are valid for 3 days after the workshop.

* Spin up a MongoDB Atlas cluster and obtain its connection string:
Once the passkey expires, or if you weren't at a MongoDB Developer Day recently, you will need to obtain the following API keys for the workshop:

* Register for a [free MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register) if you don't already have one
* [Create a new database cluster](https://www.mongodb.com/docs/guides/atlas/cluster)
* [Obtain the connection string](https://www.mongodb.com/docs/guides/atlas/connection-string) for your database cluster

* Set the `MONGODB_URI` variable to the connection string for your cluster as follows:
**Voyage AI**
* Follow the steps here to [obtain a Voyage AI API key](https://docs.voyageai.com/docs/api-key-and-installation#authentication-with-api-keys).
* Set the `VOYAGE_API_KEY` environment variable in the notebook as follows:

```python
MONGODB_URI = "<your_connection_string>"
os.environ["VOYAGE_API_KEY"] = "your-voyageai-api-key"
```
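
If you want to confirm the key works before moving on, a minimal sketch using the `voyageai` Python client might look like the following. The sample query text is just an example; `contextualized_embed`, the `voyage-context-3` model, and the `results[0].embeddings[0]` access pattern are the same ones referenced later in this lab.

```python
import os
import voyageai

os.environ["VOYAGE_API_KEY"] = "your-voyageai-api-key"

# The client reads VOYAGE_API_KEY from the environment
vo = voyageai.Client()

# Embed a sample query with a contextualized embedding model
result = vo.contextualized_embed(
    inputs=[["What is an aggregation pipeline?"]],
    model="voyage-context-3",
    input_type="query",
)
print(len(result.results[0].embeddings[0]))  # embedding dimensionality
```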

* Manually set the value of the `SERVERLESS_URL` variable as follows:
**Gemini**
* Set the `LLM_PROVIDER` variable in the notebook to `"google"`.

* Obtain a [Gemini API key](https://aistudio.google.com/app/apikey).

* Set the `GOOGLE_API_KEY` environment variable in the notebook as follows:

```python
SERVERLESS_URL = "https://vtqjvgchmwcjwsrela2oyhlegu0hwqnw.lambda-url.us-west-2.on.aws/"
os.environ["GOOGLE_API_KEY"] = "your-google-api-key"
```
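
To quickly verify the Gemini key outside the lab's helper, a minimal sketch using LangChain's Google integration could look like this. It assumes the `langchain-google-genai` package is installed, and the model name is illustrative only.

```python
import os
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = "your-google-api-key"

# The integration reads GOOGLE_API_KEY from the environment
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model name is an example
print(llm.invoke("Reply with the word OK.").content)
```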
:::

### If you are running the lab locally

If you aren't using Instruqt or GitHub Codespaces to run the lab and are instead running it locally, you will need to complete the following additional steps:

* Spin up a free MongoDB Atlas cluster and obtain its connection string:

* Register for a [free MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register) if you don't already have one
* [Create a new database cluster](https://www.mongodb.com/docs/guides/atlas/cluster)
* [Obtain the connection string](https://www.mongodb.com/docs/guides/atlas/connection-string) for your database cluster

* Set the `MONGODB_URI` variable in the notebook as follows:

```python
MONGODB_URI = "your_connection_string"
```
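
Before running the rest of the notebook, you can sanity-check the connection string with a quick `pymongo` ping. This is a minimal sketch, not part of the lab's code, and it assumes `pymongo` is installed.

```python
from pymongo import MongoClient

MONGODB_URI = "your_connection_string"  # same value you set above

client = MongoClient(MONGODB_URI)
client.admin.command("ping")  # raises an exception if the cluster is unreachable
print("Successfully connected to MongoDB Atlas")
```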
13 changes: 0 additions & 13 deletions docs/40-agent-tools/1-lecture-notes.mdx
@@ -1,18 +1,5 @@
# 📘 Lecture notes

## About the data

In this lab, we are using a serverless function to import the data required by the agent's tools, into MongoDB. If you want to do this on your own, these datasets are available on Hugging Face:

* [mongodb-docs](https://huggingface.co/datasets/MongoDB/mongodb-docs): Markdown versions of a small subset of MongoDB's technical documentation. This dataset is imported into a collection called `full_docs`.

* [mongodb-docs-embedded](https://huggingface.co/datasets/MongoDB/mongodb-docs-embedded): Chunked and embedded versions of the articles in the `mongodb-docs` dataset. This dataset is imported into a collection called `chunked_docs`.

To learn more about chunking and embedding, here are some resources from our Developer Center:

* [How to Choose the Right Chunking Strategy for Your LLM Application](https://www.mongodb.com/developer/products/atlas/choosing-chunking-strategy-rag/?utm_campaign=devrel&utm_medium=ai-agents-devday-workshop&utm_term=apoorva.joshi)
* [How to Choose the Best Embedding Model for Your LLM Application](https://www.mongodb.com/developer/products/atlas/choose-embedding-model-rag/?utm_campaign=devrel&utm_medium=ai-agents-devday-workshop&utm_term=apoorva.joshi)

## Tool calling

Tool calling, also referred to as function calling, allows an LLM to use external tools such as APIs, databases, and specialized machine learning models.
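
As a concrete illustration (not the lab's exact code), a tool in a framework like LangChain can be an ordinary Python function that the LLM is allowed to call:

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

# A chat model bound to this tool can decide when to call it, e.g.:
# llm_with_tools = llm.bind_tools([multiply])
```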
17 changes: 1 addition & 16 deletions docs/40-agent-tools/2-create-vector-search-index.mdx
@@ -2,19 +2,4 @@

To retrieve documents using vector search, you must configure a vector search index on the collection you want to perform vector search against.

Fill in any `<CODE_BLOCK_N>` placeholders and run the cells under the **Step 3: Create a vector search index** section in the notebook to create a vector search index.

The answers for code blocks in this section are as follows:

**CODE_BLOCK_1**

<details>
<summary>Answer</summary>
<div>

```python
create_index(vs_collection, VS_INDEX_NAME, model)
```

</div>
</details>
Run the cells under the **Step 3: Create a vector search index** section in the notebook to create a vector search index.
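
For reference, creating a vector search index programmatically with `pymongo` looks roughly like the sketch below. The connection string, database and collection names, field name, dimension count, and index name are placeholders and may not match the lab's helper.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("your_connection_string")      # placeholder URI
collection = client["your_db"]["chunked_docs"]      # placeholder namespace

# Illustrative definition -- path, numDimensions, and similarity must match
# the embeddings stored in your collection
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",      # field containing the embeddings
                "numDimensions": 1024,    # depends on the embedding model used
                "similarity": "cosine",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)
```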
15 changes: 13 additions & 2 deletions docs/40-agent-tools/3-create-agent-tools.mdx
@@ -14,13 +14,24 @@ The answers for code blocks in this section are as follows:

## Vector search tool

**CODE_BLOCK_1**

<details>
<summary>Answer</summary>
<div>
```python
vo.contextualized_embed(inputs=[[query]], model="voyage-context-3", input_type="query")
```
</div>
</details>

**CODE_BLOCK_2**

<details>
<summary>Answer</summary>
<div>
```python
embedding_model.encode(text)
embds_obj.results[0].embeddings[0]
```
</div>
</details>
@@ -31,7 +42,7 @@
<summary>Answer</summary>
<div>
```python
get_embedding(user_query)
get_embeddings(user_query)
```
</div>
</details>
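
Taken together, the pieces above can be assembled into a vector search tool along the following lines. This is a rough sketch: the field names, projection, and candidate counts are illustrative, and `get_embeddings`, `vs_collection`, and `VS_INDEX_NAME` stand in for the notebook's own definitions.

```python
def vector_search(user_query: str) -> list:
    """Retrieve documents semantically similar to the user query."""
    query_embedding = get_embeddings(user_query)  # notebook helper shown above
    pipeline = [
        {
            "$vectorSearch": {
                "index": VS_INDEX_NAME,          # the index created in Step 3
                "path": "embedding",             # illustrative field name
                "queryVector": query_embedding,
                "numCandidates": 150,
                "limit": 5,
            }
        },
        {
            "$project": {
                "_id": 0,
                "body": 1,                       # illustrative content field
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]
    return list(vs_collection.aggregate(pipeline))
```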
4 changes: 1 addition & 3 deletions docs/50-create-agent/1-lecture-notes.mdx
@@ -17,9 +17,7 @@ To learn more about these concepts, refer to the [LangGraph docs](https://langch

## Using different LLM providers with LangChain

LangChain supports different LLM providers for you to build AI applications with. Unless you are using open-source models, you typically need to obtain API keys to use the chat completion APIs offered by different LLM providers.

For this lab, we have created a serverless function that creates LLM objects for Amazon, Google and Microsoft models that you can use with LangChain and LangGraph without having to obtain API keys. However, if you would like to do this on your own, here are some resources:
LangChain supports a variety of LLM providers that you can use to build AI applications. For this lab, we have created a utility function that generates LangChain LLM objects based on your chosen provider (Amazon, Google, or Microsoft). However, if you would like to do this on your own, here are some resources:

* [Using Amazon Bedrock LLMs with LangChain](https://python.langchain.com/docs/integrations/llms/bedrock/)
