
[Feature]: Support Custom AI backends. #1072

Open
atul86244 opened this issue Apr 19, 2024 · 5 comments

Comments

@atul86244

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

No

Problem Description

Please add support for using custom AI backends with k8sGPT. This would let people run k8sGPT against in-house AI backends, which should increase adoption of k8sGPT.

Solution Description

We need the ability to use k8sGPT with custom, in-house AI backends. For example, I want to use k8sGPT in my company with the company's own AI solution as the backend for k8sGPT.

Benefits

This would help people use k8sGPT with in-house AI backends, leading to increased adoption of k8sGPT.

Potential Drawbacks

No response

Additional Information

No response

@arbreezy
Member

Hey @atul86244,
We support OpenAI's API spec; do you have a different use case in mind?

@atul86244
Author

Hi @arbreezy, thanks for your response. I was going through this doc https://docs.k8sgpt.ai/reference/providers/backend/ and trying to figure out how I can point k8sGPT at my company's AI backend. If I have my own custom AI that exposes an endpoint, can I point k8sGPT at it?

I am not sure if the spec below provides a way to do that:

kubectl apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
    # anonymized: false
    # language: english
  noCache: false
  repository: ghcr.io/k8sgpt-ai/k8sgpt
  version: v0.3.8
  #integrations:
  # trivy:
  #  enabled: true
  #  namespace: trivy-system
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url> # use the sink secret if you want to keep your webhook url private
  #   secret:
  #     name: slack-webhook
  #     key: url
  #extraOptions:
  #   backstage:
  #     enabled: true
EOF
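
For reference, a hedged sketch of what such a spec could look like. This assumes the operator version in use exposes a `baseUrl` field under `spec.ai` for redirecting the `openai` backend to an OpenAI-spec-compatible endpoint; check the installed CRD schema before relying on it, and note that the endpoint URL below is a placeholder, not a real host:

```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    backend: openai                            # speak the OpenAI API spec
    model: gpt-3.5-turbo
    baseUrl: https://llm.example.internal/v1   # hypothetical in-house gateway
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key                      # bearer token the gateway expects
```

The idea is that the in-house gateway only needs to implement the OpenAI request/response shape; the `backend: openai` setting keeps the client logic unchanged while `baseUrl` swaps the host.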

@boixu

boixu commented Apr 26, 2024

I am also interested in this.
I have a custom API endpoint that supports the OpenAI API spec, but neither tinyllama nor localAI supports the auth tokens my endpoint needs.
Can we either add a custom baseURL field to the openai provider, or an auth token field to localAI or tinyllama?
Please correct me if this already exists.

Thanks!
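
On the CLI side, a hedged sketch of how this combination might be wired up. This assumes the `k8sgpt auth add` command accepts a `--baseurl` flag in the version in use (verify with `k8sgpt auth add --help`); the endpoint URL is a placeholder:

```
# Register the openai backend, but point it at an in-house,
# OpenAI-spec-compatible gateway (URL is a placeholder).
k8sgpt auth add --backend openai \
  --model gpt-3.5-turbo \
  --baseurl https://llm.example.internal/v1

# Then run an analysis against that backend.
k8sgpt analyze --explain --backend openai
```

If the flag exists, this would cover the baseURL half of the request; the auth-token half would still depend on the backend reading the stored key as a bearer token.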

@atul86244
Author

Hi Team, can you please help with this.

@haofeif

haofeif commented Sep 4, 2024

+1. Many users and corporations host their various LLMs behind self-hosted APIs (e.g. AWS API Gateway, Kong) over REST, regardless of which LLM model sits behind them. In that case, it would be a REST call to the backend API.
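
To make concrete what "an OpenAI-spec-compatible backend" means in the comments above, here is a minimal sketch in Python of the single endpoint such a gateway would need to expose, `POST /v1/chat/completions`, returning a canned response in the OpenAI chat-completion shape. This is an illustration of the wire format only, not anything from the k8sgpt codebase; a real gateway would validate the bearer token and forward to an actual model:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class ChatCompletionsHandler(BaseHTTPRequestHandler):
    """Serves a single OpenAI-style endpoint: POST /v1/chat/completions."""

    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        # A real in-house gateway would validate the Authorization
        # bearer token here before proceeding.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        # Reply with a canned completion in the shape OpenAI-spec
        # clients (such as k8sgpt's openai backend) expect.
        body = json.dumps({
            "id": "chatcmpl-demo",
            "object": "chat.completion",
            "model": request.get("model", "unknown"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "stub reply"},
                "finish_reason": "stop",
            }],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for cleaner output.
        pass


def serve(port=0):
    """Start the stub backend on a daemon thread; return the bound port."""
    server = HTTPServer(("127.0.0.1", port), ChatCompletionsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]
```

Any client that speaks the OpenAI API spec could then be pointed at `http://127.0.0.1:<port>/v1` as its base URL, which is exactly the kind of redirection this issue asks for.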

Labels: None yet
Status: Proposed
4 participants