
Integration: Bytez Chat Model Provider #1175


Open · wants to merge 16 commits into base: main

Conversation

inf3rnus

Description

Hi all! 👋

We're Bytez, the largest model provider on the internet! We may also be one of the cheapest, if not the cheapest.

We'd love to integrate with Portkey. Please see the changed files and let me know if anything needs to change.

I'd like to point out that we do a check against our API to see if a model is a "chat" model. The result is stored in a simple cache that is just a plain object. If that's going to be a problem due to having an unbounded ceiling in terms of memory utilization, please let me know and I will convert it to an LRU with 100 entries.
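For reference, the bounded cache mentioned above could be sketched roughly like this (a hypothetical class, not the code in this PR), exploiting the fact that a `Map` preserves insertion order:

```typescript
// Minimal bounded LRU cache sketch (illustrative; not the PR's code).
// Re-inserting a key moves it to the "most recently used" end of the Map.
class LRUCache<K, V> {
  private map = new Map<K, V>();

  constructor(private maxEntries: number = 100) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Refresh recency by re-inserting the key.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Evict the least recently used entry (first key in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}

// Example: caching "is this model a chat model?" lookups.
const isChatModelCache = new LRUCache<string, boolean>(100);
isChatModelCache.set('some-org/some-model', true);
```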

Our API's input signature is a bit more bespoke than other model providers'. Please let me know if the custom requestHandler I have is sufficient, or if there's an easier way to do what I've done.

Bonus feedback: Y'all need an integration guide! It would be immensely useful 😄

Motivation

We'd love to be integrated into Portkey!

Type of Change

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Documentation update
  • [ ] Refactoring (no functional changes)

How Has This Been Tested?

  • [ ] Unit Tests
  • [ ] Integration Tests
  • [x] Manual Testing

Screenshots (if applicable)

Checklist

  • [x] My code follows the style guidelines of this project
  • [x] I have performed a self-review of my own code
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] New and existing unit tests pass locally with my changes

Related Issues


matter-code-review bot commented Jun 25, 2025

Labels: Code Quality, new feature, bug fix, performance, reliability, architecture, error-handling, documentation

Description

🔄 What Changed

This Pull Request introduces significant enhancements across the Portkey AI Gateway, focusing on new model provider integrations, improved robustness, and expanded feature support. Key changes include:

  • New Provider Integrations: Added support for Featherless AI and Krutrim chat models, and updated the package.json version to 1.10.1.
  • Enhanced Plugin System: Introduced a new requiredMetadataKeys plugin for beforeRequestHook to enforce metadata presence. Improved the webhook plugin's error handling to include response body and handle TimeoutError. Refactored Portkey guardrail plugins (gibberish, language, moderateContent, pii) to use getCurrentContentPart for more robust content extraction and to propagate detailed LogObjects for better observability.
  • Robust Request Routing: Implemented advanced circuit breaker logic within tryTargetsRecursively to filter healthy targets and record failures/successes, significantly improving reliability. Corrected currentJsonPath indexing for accurate target tracking.
  • Improved Streaming Handling: Added try...catch...finally blocks to stream reading loops in streamHandler.ts to ensure proper resource cleanup and error logging during streaming operations.
  • Provider-Specific Enhancements:
    • Anthropic: Enhanced tool content handling and introduced transformFinishReason for consistent stop reasons.
    • Azure AI Inference: Expanded API support to include batch operations (getBatchOutput), file uploads, and various new endpoints, with updated header and endpoint generation logic.
    • Bedrock: Refactored embedding configurations to support multi-modal inputs (text, image) for Cohere and Titan models. Improved tool handling and detailed usage reporting (including cache tokens) in chat completions. Implemented transformFinishReason for Bedrock stop reasons.
    • Cohere: Updated embedding configuration for multi-modal inputs.
    • Google Vertex AI: Added support for global region and improved stream state usage tracking.
    • OpenAI Base/Groq: Added web_search_options and service_tier parameters respectively.
  • Type Safety & Observability: Introduced new types for LogObject, FINISH_REASON, and PROVIDER_FINISH_REASON, along with a finishReasonMap for standardized output.
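The healthy-target filtering and failure/success recording described above can be pictured with a sketch like this. All names here (`isCircuitOpen`, `FAILURE_THRESHOLD`, the key format) are illustrative assumptions; the real logic lives inside `tryTargetsRecursively` in `handlerUtils.ts`:

```typescript
// Illustrative circuit-breaker sketch (assumed shape, not the PR's code).
interface Target {
  originalIndex: number;
  provider: string;
}

interface BreakerState {
  failures: number;
  openedAt?: number; // ms timestamp when the circuit opened
}

const FAILURE_THRESHOLD = 5; // assumed; real threshold comes from cb_config
const COOLDOWN_MS = 60_000;

const breakerStates = new Map<string, BreakerState>();

function isCircuitOpen(key: string, now: number): boolean {
  const state = breakerStates.get(key);
  if (!state || state.openedAt === undefined) return false;
  // After the cooldown expires, allow traffic again (half-open behavior).
  return now - state.openedAt < COOLDOWN_MS;
}

function recordFailure(key: string, now: number): void {
  const state = breakerStates.get(key) ?? { failures: 0 };
  state.failures += 1;
  if (state.failures >= FAILURE_THRESHOLD) state.openedAt = now;
  breakerStates.set(key, state);
}

function recordSuccess(key: string): void {
  breakerStates.delete(key); // reset state on success
}

// Route only through targets whose circuit is not open.
function filterHealthyTargets(targets: Target[], now: number): Target[] {
  return targets.filter(
    (t) => !isCircuitOpen(`${t.provider}:${t.originalIndex}`, now)
  );
}
```

Keying the state by `provider:originalIndex` is why the corrected `originalIndex` tracking matters: the breaker must follow a target even after unhealthy siblings are filtered out of the array.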

🔍 Impact of the Change

These changes significantly expand the gateway's capabilities by integrating new AI models and enhancing existing provider interactions. The improvements in error handling, circuit breaking, and logging contribute to a more reliable, observable, and maintainable system. The refined plugin architecture offers greater flexibility and control over request/response flows, while multi-modal embedding support broadens the range of applications.

📁 Total Files Changed

  • cookbook/integrations/Sutra_with_Portkey.ipynb: Added a new Jupyter notebook demonstrating Sutra-v2 integration with Portkey's AI Gateway, including basic usage, multilingual capabilities, creative writing, retries, and caching.
  • package.json: Updated the project version from 1.9.19 to 1.10.1.
  • plugins/default/requiredMetadataKeys.ts: New plugin added to enforce the presence of specific metadata keys in beforeRequestHook.
  • plugins/default/webhook.ts: Modified webhook plugin to export TimeoutError and include response body/headers in error data for non-OK responses.
  • plugins/index.ts: Registered the new default requiredMetadataKeys plugin.
  • plugins/portkey/gibberish.ts: Updated to use getCurrentContentPart for text extraction and added a check for empty content.
  • plugins/portkey/globals.ts: Introduced LogObject interfaces and refactored fetchPortkey to return detailed LogObjects for both success and error cases, enhancing logging and tracing.
  • plugins/portkey/language.ts: Updated to use getCurrentContentPart for text extraction and added a check for empty content.
  • plugins/portkey/moderateContent.ts: Updated to use getCurrentContentPart for text extraction and added a check for empty content.
  • plugins/portkey/pii.ts: Modified detectPII to return LogObject and propagated this log through the plugin handler response.
  • plugins/types.ts: Added metadata to PluginContext and fail_on_error to GuardrailCheckResult.
  • plugins/utils.ts: Exported TimeoutError and added headers to ErrorResponse for more comprehensive error details.
  • src/globals.ts: Added FEATHERLESS_AI and KRUTRIM to the list of valid providers.
  • src/handlers/handlerUtils.ts: Enhanced tryTargetsRecursively to incorporate circuit breaker logic, filter healthy targets, and correctly handle originalIndex for target paths. Added cb_config to inherited configuration.
  • src/handlers/streamHandler.ts: Wrapped stream reading loops in try...catch...finally to ensure writer.close() is called, preventing resource leaks on errors.
  • src/middlewares/hooks/index.ts: Adjusted hook verdict logic to respect the fail_on_error flag from plugin results.
  • src/middlewares/hooks/types.ts: Added fail_on_error property to GuardrailCheckResult.
  • src/providers/anthropic/chatComplete.ts: Improved AnthropicToolResultContentItem and tool transformation, added cache_control support, and integrated transformFinishReason for stop reasons.
  • src/providers/anthropic/types.ts: Defined ANTHROPIC_STOP_REASON enum.
  • src/providers/azure-ai-inference/api.ts: Expanded getBaseURL, headers, and getEndpoint to support various Azure AI Inference endpoints including batch and file operations, and multipart/form-data content type.
  • src/providers/azure-ai-inference/getBatchOutput.ts: New file implementing logic to retrieve batch output by fetching batch details and then the associated file content.
  • src/providers/azure-ai-inference/index.ts: Integrated new API endpoints, request handlers, and response transforms for Azure AI Inference.
  • src/providers/azure-ai-inference/utils.ts: New file providing response transformation utilities for Azure AI Inference endpoints.
  • src/providers/azure-openai/chatComplete.ts: Added web_search_options parameter.
  • src/providers/bedrock/chatComplete.ts: Enhanced tool handling, integrated BEDROCK_STOP_REASON and transformFinishReason, and improved usage reporting to include cache tokens.
  • src/providers/bedrock/embed.ts: Significantly refactored Bedrock Cohere and Titan embedding configurations to support multi-modal inputs (text, image) and various embedding parameters.
  • src/providers/bedrock/types.ts: Defined BEDROCK_STOP_REASON enum.
  • src/providers/bedrock/utils.ts: Enhanced transformAnthropicAdditionalModelRequestFields for generic tools, improved getInferenceProfile for assumed roles, and added getBedrockErrorChunk for streaming errors.
  • src/providers/cohere/embed.ts: Refactored Cohere embedding configuration to support multi-modal inputs.
  • src/providers/featherless-ai/api.ts: New file defining API configuration for Featherless AI.
  • src/providers/featherless-ai/index.ts: New file integrating Featherless AI as a chat completion provider.
  • src/providers/google-vertex-ai/api.ts: Added support for global region in Google Vertex AI base URL.
  • src/providers/google-vertex-ai/chatComplete.ts: Improved stream state usage tracking for prompt and total tokens.
  • src/providers/groq/index.ts: Added service_tier parameter to Groq chat completion configuration.
  • src/providers/index.ts: Registered the new featherless-ai and krutrim providers.
  • src/providers/jina/embed.ts: Added dimensions parameter to Jina embedding configuration.
  • src/providers/krutrim/api.ts: New file defining API configuration for Krutrim.
  • src/providers/krutrim/chatComplete.ts: New file defining response transformation for Krutrim chat completions, including custom error handling.
  • src/providers/krutrim/index.ts: New file integrating Krutrim as a chat completion provider.
  • src/providers/open-ai-base/createModelResponse.ts: Added modalities and parallel_tool_calls parameters.
  • src/providers/open-ai-base/index.ts: Added web_search_options to base chat completion parameters.
  • src/providers/openai/chatComplete.ts: Added web_search_options parameter.
  • src/providers/types.ts: Expanded CResponse with prompt_tokens_details, completion_tokens_details, and introduced FINISH_REASON and PROVIDER_FINISH_REASON types.
  • src/providers/utils.ts: New transformFinishReason utility function to standardize provider stop reasons to OpenAI's FINISH_REASON.
  • src/providers/utils/finishReasonMap.ts: New file containing a mapping of Anthropic and Bedrock stop reasons to OpenAI FINISH_REASON.
  • src/public/index.html: Minor formatting and URL updates in example code snippets.
  • src/types/requestBody.ts: Added originalIndex to Targets and simplified the Tool interface to support generic tool types.
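The transformFinishReason utility and finishReasonMap listed above can be pictured with a sketch like this. The enum members mirror OpenAI's documented finish reasons, and the sample map entries use Anthropic's documented stop reasons; the exact contents of the PR's map are an assumption:

```typescript
// Sketch of provider-to-OpenAI finish_reason standardization
// (assumed shape; the real code is in src/providers/utils.ts and
// src/providers/utils/finishReasonMap.ts).
enum FINISH_REASON {
  stop = 'stop',
  length = 'length',
  tool_calls = 'tool_calls',
  content_filter = 'content_filter',
}

// Example entries; the real map covers Anthropic and Bedrock stop reasons.
const finishReasonMap = new Map<string, FINISH_REASON>([
  ['end_turn', FINISH_REASON.stop], // Anthropic
  ['stop_sequence', FINISH_REASON.stop], // Anthropic
  ['max_tokens', FINISH_REASON.length], // Anthropic / Bedrock
  ['tool_use', FINISH_REASON.tool_calls], // Anthropic
]);

function transformFinishReason(providerReason?: string): FINISH_REASON {
  // Fall back to 'stop' for missing or unmapped provider reasons.
  if (!providerReason) return FINISH_REASON.stop;
  return finishReasonMap.get(providerReason) ?? FINISH_REASON.stop;
}
```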

🧪 Tests Added

Given the nature and scope of changes, the following tests are likely to have been added or updated:

  • Unit Tests: For new utility functions like transformFinishReason and getBedrockErrorChunk. For the logic within requiredMetadataKeys plugin, ensuring correct verdict based on operator and metadata keys. For the enhanced error handling in webhook.ts and fetchPortkey in globals.ts.
  • Integration Tests: For each new provider (Featherless AI, Krutrim) to verify successful chat completions. For Azure AI Inference's new endpoints (batch output, file uploads, audio APIs) to ensure correct request transformation and response handling. For Bedrock and Anthropic's enhanced tool support and finish_reason mapping. For the circuit breaker logic in handlerUtils.ts to test fallback and retry scenarios.
  • Manual Testing: The cookbook/integrations/Sutra_with_Portkey.ipynb notebook serves as a manual integration test and example for Sutra-v2, demonstrating basic chat, multilingual, creative writing, retries, and caching through Portkey. Similar manual tests would have been performed for other new integrations and features.

🔒 Security Vulnerabilities

No direct security vulnerabilities were detected in the provided patch. The changes, particularly the improved error logging (LogObject in globals.ts) and the robust circuit breaker implementation (handlerUtils.ts), contribute to better system resilience and observability, which indirectly aids in identifying and mitigating potential security issues more effectively.

Motivation

This PR aims to expand the Portkey AI Gateway's compatibility with a wider range of LLM providers, enhance its core routing and plugin functionalities for improved reliability and observability, and provide more flexible and standardized interactions with various AI models.

Type of Change

  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Refactoring (no functional changes)

How Has This Been Tested?

  • Unit Tests
  • Integration Tests
  • Manual Testing

Screenshots (if applicable)

N/A

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Related Issues

N/A

Tip

Quality Recommendations

  1. The cookbook/integrations/Sutra_with_Portkey.ipynb notebook contains a typo where the API key URLs for Sutra and Portkey are swapped. This should be corrected to ensure users get the correct keys from the right sources.

  2. The PR description mentions a simple object cache for Bytez model types. If this cache is not bounded (e.g., an LRU cache), it could lead to unbounded memory growth in a long-running gateway service. It is highly recommended to implement a bounded cache with a reasonable size limit to prevent potential memory exhaustion.

  3. While transformFinishReason is a valuable addition for standardizing output, ensure that all existing and future providers with custom stop reasons are consistently mapped to the FINISH_REASON enum for full OpenAI compliance and consistent behavior across the gateway.

Sequence Diagram

sequenceDiagram
    participant U as User
    participant PG as Portkey Gateway
    participant HM as Hooks Manager
    participant PP as Portkey Plugins
    participant PIA as Portkey Internal API
    participant RM as Router Module
    participant CB as Circuit Breaker Module
    participant SH as Stream Handler
    participant LLM as LLM Provider
    participant FAI as Featherless AI
    participant K as Krutrim
    participant AAI as Azure AI Inference
    participant B as Bedrock
    participant SA as Sutra API

    Note over U,PG: User initiates API request

    U->>+PG: API Request (e.g., POST /v1/chat/completions)
    PG->>+HM: executeHooks(eventType='beforeRequestHook', context, ...)
    Note over HM: New: `fail_on_error` logic for plugin verdicts

    HM->>+PP: handler(context, parameters, eventType)
    Note over PP: New plugin: `requiredMetadataKeys` added
    Note over PP: `webhook` plugin: improved error data (response body, timeout error)
    Note over PP: `gibberish`, `language`, `moderateContent`, `pii` plugins:
    PP->>PP: getCurrentContentPart(context, eventType) for robust content extraction
    PP->>+PIA: fetchPortkey(endpoint, credentials, data, timeout)
    Note over PIA: New: Returns `LogObject` on success/failure for observability
    PIA-->>-PP: Response & LogObject
    PP-->>-HM: PluginHandlerResponse(verdict, transformedData, log, fail_on_error)

    HM-->>-PG: Hook Results

    PG->>+RM: tryTargetsRecursively(c, target, request, ...)
    Note over RM: New: `id` for inherited config, `cb_config` handling for circuit breaker
    RM->>RM: Filter healthy targets for circuit breaker
    RM->>+LLM: API Call (transformed request)
    Note over LLM: New Providers: Featherless AI, Krutrim added
    Note right of LLM: Anthropic: Enhanced tool content, `transformFinishReason`
    Note right of LLM: Azure AI Inference: Batch output, file uploads, new endpoints
    Note right of LLM: Bedrock: Multi-modal embeddings, improved tool handling, cache usage in response, `transformFinishReason`
    Note right of LLM: Cohere: Multi-modal embeddings
    LLM-->>-RM: LLM Response (raw)

    alt LLM Response is streaming
        RM->>+SH: handleStreamingMode(reader, transformer, ...)
        Note over SH: New: `try...catch...finally` for stream processing to ensure resource cleanup
        SH->>U: Stream Chunks (transformed)
        SH-->>-U: Stream End (writer.close() on error/completion)
    else LLM Response is complete
        RM->>RM: Transform Response (e.g., `transformFinishReason` for Anthropic/Bedrock)
        RM-->>-PG: Transformed Response
    end

    alt Circuit Breaker active
        RM->>CB: recordCircuitBreakerFailure(env, id, cbConfig, jsonPath, status)
        RM->>CB: handleCircuitBreakerResponse(response, id, cbConfig, jsonPath, c)
    end

    PG->>+HM: executeHooks(eventType='afterResponseHook', context, ...)
    HM-->>-PG: Hook Results

    PG-->>-U: Final API Response

    Note over U,PG: Example: Sutra Integration (from notebook)
    U->>PG: POST /v1/chat/completions (model="sutra-v2")
    PG->>SA: POST https://api.two.ai/v2/chat/completions (Authorization: Sutra API Key)
    SA-->>PG: Sutra Response
    PG-->>U: Transformed Response
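The stream-handling step in the diagram (the `try...catch...finally` loop that guarantees `writer.close()`) boils down to the following pattern. This is a generic sketch of the technique described for streamHandler.ts, not the file's actual code:

```typescript
// Generic sketch of the try...catch...finally stream-pumping pattern
// described for streamHandler.ts (illustrative, not the implementation).
async function pumpStream(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writer: WritableStreamDefaultWriter<Uint8Array>
): Promise<void> {
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      await writer.write(value);
    }
  } catch (err) {
    // Log the failure; the finally block still runs.
    console.error('stream read failed', err);
  } finally {
    // Guarantees the writer is closed even when reading throws,
    // preventing resource leaks on errors mid-stream.
    await writer.close().catch(() => {});
  }
}
```

The key point is that closing the writer in `finally` (with the close error swallowed) covers both the normal end-of-stream path and every error path with a single statement.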


matter-code-review bot left a comment


This PR adds Bytez as a new chat model provider with a well-structured implementation. I've identified a few improvements that could enhance error handling and performance.


Important

PR Review Skipped

PR review skipped as per the configuration setting. Run a manual review by commenting /matter review

💡Tips to use Matter AI

Command List

  • /matter summary: Generate AI Summary for the PR
  • /matter review: Generate AI Reviews for the latest commit in the PR
  • /matter review-full: Generate AI Reviews for the complete PR
  • /matter release-notes: Generate AI release-notes for the PR
  • /matter : Chat with your PR with Matter AI Agent
  • /matter remember : Generate AI memories for the PR
  • /matter explain: Get an explanation of the PR
  • /matter help: Show the list of available commands and documentation
  • Need help? Join our Discord server: https://discord.gg/fJU5DvanU3




inf3rnus commented Jun 26, 2025

> Consider replacing the custom LRUCache implementation with a well-tested, battle-hardened library for caching to improve robustness and reduce potential maintenance overhead.

We do not want to bloat the code base with packages that aren't needed. It should be sufficient.

> Enhance the constructFailureResponse function to map specific Bytez API error codes (e.g., 4xx client errors, 401 authentication errors) to appropriate HTTP status codes instead of defaulting to a generic 500. This provides more informative error responses to the client.

It already does this where necessary; otherwise, whatever is passed from the server is passed to the client.

> Review the validateModelIsChat function's error handling. While it currently throws an error that is caught, consider if it should directly return a Response object using constructFailureResponse for consistency with other API error paths.

Overkill; either fetch is going to fail, or there is an upstream error. Either way it will get reported to the client.

> Add comprehensive unit tests for the new Bytez integration logic, specifically for bodyAdapter, LRUCache, and the chatComplete request handler, to ensure correct functionality and prevent regressions.

Perhaps in the future...

> Verify the splitPattern = ' ' for Bytez in getStreamModeSplitPattern against Bytez's actual streaming API documentation to confirm it correctly handles their specific stream format, as this is an unusual pattern for streaming.

We currently stream character by character; we do not return JSON chunks. In the future we may update this.
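For context, a split-pattern stream parser of the kind getStreamModeSplitPattern implies works roughly like this (a generic sketch, not the gateway's code): incoming fragments are buffered and emitted whenever the configured delimiter appears, which is why the choice of ' ' versus '\n\n' matters for a character-by-character stream.

```typescript
// Generic sketch: buffer a stream and split it on a delimiter
// (e.g. '\n\n' for SSE events, or ' ' as configured for Bytez).
// Not the gateway's actual getStreamModeSplitPattern logic.
function* splitChunks(
  incoming: Iterable<string>,
  splitPattern: string
): Generator<string> {
  let buffer = '';
  for (const piece of incoming) {
    buffer += piece;
    let idx: number;
    // Emit every complete chunk currently sitting in the buffer.
    while ((idx = buffer.indexOf(splitPattern)) !== -1) {
      yield buffer.slice(0, idx);
      buffer = buffer.slice(idx + splitPattern.length);
    }
  }
  if (buffer.length > 0) yield buffer; // trailing partial chunk
}
```

Note that chunk boundaries in the transport need not align with delimiter positions; the buffer is what makes a word split across two network reads come out whole.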

> Explore making the bodyAdapter logic more declarative or configurable, especially if future provider integrations might require similar but slightly varied request body transformations. This could improve maintainability and readability for complex adaptations.

YAGNI and overkill; Bytez manages its own integration code.






inf3rnus commented Jul 9, 2025

@narengogi @VisargD Requested changes have been made, and I dropped the LRU.

Please let me know if you need any further changes.


narengogi (Collaborator) left a comment


Looks good to go! Adding two minor comments:

        finish_reason: 'stop',
      },
    ],
    usage: {

Please make the usage object OpenAI compliant:
https://portkey.ai/docs/api-reference/inference-api/chat#response-usage


Also, please move this function into chatComplete.ts.


@narengogi Alright, changes have been made!

We currently do not provide token usage metrics as part of our API (support will be added soon, although our billing model is a bit unique in that it's based around request concurrency).

Following other examples where the counts are not returned, I did this; does this work for you guys?

If it's a hard requirement, let me know and I'll update our backend ASAP.

    usage: {
      completion_tokens: -1,
      prompt_tokens: -1,
      total_tokens: -1,
    },



3 participants