src/oss/javascript/integrations/chat/ibm.mdx (24 additions, 0 deletions)
@@ -150,6 +150,30 @@ Note:
- You must provide `spaceId`, `projectId`, or `idOrName` (deployment ID) unless you use the lightweight engine, which works without specifying any of them (refer to the [watsonx.ai docs](https://www.ibm.com/docs/en/cloud-paks/cp-data/5.0.x?topic=install-choosing-installation-mode)).
- Depending on the region of your provisioned service instance, use the correct `serviceUrl`.

### Using Model Gateway

```typescript
import { ChatWatsonx } from "@langchain/community/chat_models/ibm";

const props = {
  maxTokens: 200,
  temperature: 0.5,
};

const instance = new ChatWatsonx({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  // Alias of the model registered with Model Gateway
  model: "<ALIAS_MODEL_ID>",
  // Route requests through Model Gateway instead of a specific watsonx.ai deployment
  modelGateway: true,
  ...props,
});
```

To use Model Gateway with LangChain, you must first create a provider and add a model via the `@ibm-cloud/watsonx-ai` SDK or the `watsonx.ai` API. Follow this documentation:
- [API](https://cloud.ibm.com/apidocs/watsonx-ai#create-watsonxai-provider)
- [SDK](https://ibm.github.io/watsonx-ai-node-sdk/modules/1_7_x.gateway.html)
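
Once the provider and alias model are registered, the gateway-backed instance behaves like any other LangChain chat model. A minimal sketch of invoking it (the prompt is illustrative):

```typescript
import { HumanMessage } from "@langchain/core/messages";

// `ChatWatsonx` implements the standard chat model interface, so
// `invoke` accepts a list of messages and resolves to an AIMessage.
const response = await instance.invoke([
  new HumanMessage("What color sunglasses would Superman wear?"),
]);
console.log(response.content);
```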

## Invocation

src/oss/javascript/integrations/llms/ibm.mdx (26 additions, 0 deletions)
@@ -149,6 +149,32 @@ Note:
- Depending on the region of your provisioned service instance, use the correct `serviceUrl`.
- You need to specify the model you want to use for inference via `model_id`.

### Using Model Gateway

```typescript
import { WatsonxLLM } from "@langchain/community/llms/ibm";

const props = {
  decoding_method: "sample",
  maxNewTokens: 100,
  minNewTokens: 1,
  temperature: 0.5,
  topK: 50,
  topP: 1,
};

const instance = new WatsonxLLM({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  // ID or gateway alias of the model to call
  model: "<MODEL_ID>",
  // Route requests through Model Gateway instead of a specific watsonx.ai deployment
  modelGateway: true,
  ...props,
});
```

To use Model Gateway with LangChain, you must first create a provider and add a model via the `@ibm-cloud/watsonx-ai` SDK or the `watsonx.ai` API. Follow this documentation:
- [API](https://cloud.ibm.com/apidocs/watsonx-ai#create-watsonxai-provider)
- [SDK](https://ibm.github.io/watsonx-ai-node-sdk/modules/1_7_x.gateway.html)
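
Once registered, the gateway-backed instance behaves like any other LangChain LLM. A minimal sketch of invoking it (the prompt is illustrative):

```typescript
// `WatsonxLLM` implements the standard LLM interface, so `invoke`
// takes a plain string prompt and resolves to the generated text.
const completion = await instance.invoke("Print 'Hello, World!' in Python.");
console.log(completion);
```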

## Invocation and generation

src/oss/javascript/integrations/text_embedding/ibm.mdx (12 additions, 0 deletions)
@@ -138,6 +138,18 @@ Note:
- You must provide `spaceId` or `projectId` in order to proceed.
- Depending on the region of your provisioned service instance, use the correct `serviceUrl`.

### Using Model Gateway

```typescript
import { WatsonxEmbeddings } from "@langchain/community/embeddings/ibm";

const instance = new WatsonxEmbeddings({
  version: "YYYY-MM-DD",
  serviceUrl: process.env.API_URL,
  // Alias of the embedding model registered with Model Gateway
  model: "<ALIAS_MODEL_ID>",
  // Route requests through Model Gateway instead of a specific watsonx.ai deployment
  modelGateway: true,
});
```
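
As in the chat and LLM integrations, the provider and model must first be registered with Model Gateway via the `@ibm-cloud/watsonx-ai` SDK or the `watsonx.ai` API. After that, the instance exposes the standard embeddings interface. A minimal sketch (inputs are illustrative):

```typescript
// Embed a single query string; resolves to a vector of numbers.
const queryVector = await instance.embedQuery("What is LangChain?");
console.log(queryVector.length); // embedding dimensionality

// Embed several documents at once; resolves to one vector per input.
const docVectors = await instance.embedDocuments([
  "watsonx.ai hosts IBM and third-party foundation models.",
  "Model Gateway routes requests to registered providers.",
]);
console.log(docVectors.length); // 2
```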

## Indexing and Retrieval

Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials under the [**Learn** tab](/oss/learn/).
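
For illustration, a hedged sketch of that round trip using the in-memory vector store, assuming `instance` is the `WatsonxEmbeddings` object created above (the sample text is a placeholder):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Index a few texts with the watsonx embeddings, then retrieve
// the closest match for a natural-language query.
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain is a framework for building LLM applications."],
  [{ source: "docs" }],
  instance
);

const results = await vectorStore.similaritySearch("What is LangChain?", 1);
console.log(results[0].pageContent);
```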