
Commit

vrushankportkey committed May 22, 2024
2 parents 22831a5 + 09bdd76 commit a0fee28
Showing 21 changed files with 2,040 additions and 685 deletions.
3 changes: 2 additions & 1 deletion .github/README.md
@@ -9,7 +9,7 @@
</p>
</div>

<i> Resources focus on <b>how to use those features and why they matter</b> rather than what they are and how they work. Are you interested in the latter? Visit <a href="https://portkey.ai/docs">portkey documentation</a>.</i>
Resources focus on <b>how to use those features and why they matter</b> rather than what they are and how they work. Are you interested in the latter? Visit <a href="https://portkey.ai/docs">portkey documentation</a>.

---

@@ -29,6 +29,7 @@
- [Anyscale and Portkey Integration Guide](./integrations/anyscale-portkey.md)
- [Mistral and Portkey Integration Guide](./integrations/mistral-portkey.md)
- [Use prompts from LangChainHub, Requests through Portkey](./integrations/how-to-use-prompts-from-langchain-hub-and-requests-through-portkey.md)
- [Vercel AI and Portkey Integration Guide](./integrations/vercel-ai-sdk-and-portkey-integration-guide.md)

#### Want to explore more?

15 changes: 8 additions & 7 deletions Cookbooks/101_portkey_gateway_configs.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion Cookbooks/automatically-retry-requests-to-llms.md
@@ -2,7 +2,7 @@

A sudden timeout or error could harm the user experience and hurt your service's reputation if your application relies on an LLM for a critical feature. To prevent this, it's crucial to have a reliable retry mechanism in place. This will ensure that users are not left frustrated and can depend on your service.

Retrying Requests to Large Language Models (LLMs) can significantly increase your Gen AI app' reliability.
Retrying Requests to Large Language Models (LLMs) can significantly increase your Gen AI app's reliability.

It can help you handle cases such as:

26 changes: 25 additions & 1 deletion Cookbooks/how-to-setup-fallback-from-openai-to-azure-openai.md
@@ -45,7 +45,7 @@ Next, we will create a configs object that influences the behavior of the requests
}
```

This configuration instructs Portkey to use a \_fallback \_strategy with the requests. The \_targets \_array lists the virtual keys of LLMs in the order Portkey should fallback to an alternative.
This configuration instructs Portkey to use a _fallback_ strategy with the requests. The _targets_ array lists the virtual keys of LLMs in the order Portkey should fallback to an alternative.

Most users find it much cleaner to define configs in the Portkey UI and reference the config ID in code. [Try it out](https://portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/configs#creating-configs).
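Referencing a saved config is then a one-line change at the call site: the config travels with the request, not with your business logic. A minimal sketch of the idea — the header names and the config ID below are illustrative assumptions, not verified values, so check them against the Portkey docs:

```python
# Hypothetical config ID copied from the Portkey UI (placeholder value).
CONFIG_ID = "pc-fallback-example"

def portkey_headers(api_key: str, config_id: str) -> dict:
    """Build the headers that route an OpenAI-style request through the
    Portkey gateway and attach a saved config by its ID.
    Header names here are assumptions for illustration."""
    return {
        "x-portkey-api-key": api_key,
        "x-portkey-config": config_id,
        "Content-Type": "application/json",
    }

headers = portkey_headers("YOUR_PORTKEY_API_KEY", CONFIG_ID)
```

Because the fallback behavior lives in the referenced config, swapping strategies later means editing the config in the UI, not redeploying code.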

@@ -74,6 +74,30 @@ Always reference the credentials from the environment variables to prevent exposing them.

> The Azure OpenAI virtual key only needs to be set up once, and it will then be accessible through Portkey in all subsequent API calls.
<details>

<summary>Fallback Configs without virtual keys</summary>

```json
{
"strategy": {
"mode": "fallback"
},
"targets": [
{
"provider": "openai",
"api_key": "sk-xxxxxxxxpRT4xxxx5"
},
{
"provider": "azure-openai",
"api_key": "*******"
}
]
}
```

</details>

## 3. Make a request

All requests will first hit OpenAI, since Portkey proxies them to the target(s) we already specified, in order. Notice that these changes demand no modifications to the business logic implementation. Smooth!
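To make this concrete, the fallback config shown earlier can be built inline and serialized once, while the request payload stays a plain OpenAI-style chat completion. This is a sketch under assumptions: the virtual-key values and the idea of passing the serialized config as a gateway header are placeholders, not verified Portkey API details.

```python
import json

# Fallback config mirroring the JSON above: try OpenAI first, then Azure OpenAI.
# The virtual_key values are placeholders, not real keys.
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},
        {"virtual_key": "azure-openai-virtual-key"},
    ],
}

# Serialize once; the config can then ride along with every request
# (e.g. as a gateway header), leaving the payload untouched.
config_header = json.dumps(config)

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
}
```

The point of the cookbook holds in code form too: reliability concerns (fallback order, targets) live entirely in `config`, while `payload` is exactly what you would send to OpenAI directly.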
