From b80b074de9b210c9dfbe07bada78d0eaa7c87c9f Mon Sep 17 00:00:00 2001
From: amitamrutiya2210
Date: Sat, 24 Feb 2024 10:52:57 +0530
Subject: [PATCH] docs: add documentation for the newly added hugging face
 provider

Signed-off-by: amitamrutiya2210
---
 docs/reference/providers/backend.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/docs/reference/providers/backend.md b/docs/reference/providers/backend.md
index caccf49..749385d 100644
--- a/docs/reference/providers/backend.md
+++ b/docs/reference/providers/backend.md
@@ -10,6 +10,7 @@ Currently, we have a total of 8 backends available:
 - [Amazon SageMaker](https://aws.amazon.com/sagemaker/)
 - [Azure OpenAI](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service)
 - [Google Gemini](https://ai.google.dev/docs/gemini_api_overview)
+- [Hugging Face](https://huggingface.co)
 - [LocalAI](https://github.com/go-skynet/LocalAI)
 - FakeAI
 
@@ -23,7 +24,7 @@ OpenAI is the default backend for K8sGPT. We recommend using OpenAI first if you
   ```
 - To set the token in K8sGPT, use the following command:
   ```bash
-  k8sgpt auth add 
+  k8sgpt auth add
   ```
 - Run the following command to analyze issues within your cluster using OpenAI:
   ```bash
@@ -117,6 +118,22 @@ Google [Gemini](https://blog.google/technology/ai/google-gemini-ai/#performance)
   k8sgpt analyze --explain --backend google
   ```
 
+## HuggingFace
+
+Hugging Face is a versatile backend for K8sGPT, offering access to a wide range of pre-trained language models. It provides easy-to-use interfaces for both training and inference tasks. Refer to the Hugging Face [documentation](https://huggingface.co/docs) for further insights into model usage and capabilities.
+
+- To use the Hugging Face API in K8sGPT, obtain [the API key](https://huggingface.co/settings/tokens).
+- Configure the Hugging Face backend in K8sGPT by specifying the desired model (see the full list of [models](https://huggingface.co/models)) with the auth command:
+  ```bash
+  k8sgpt auth add --backend huggingface --model <model_name>
+  ```
+> NOTE: Since the default `gpt-3.5-turbo` model is not available on Hugging Face, a valid model name must be provided.
+
+- Once configured, you can analyze issues within your cluster with the Hugging Face provider using the following command:
+  ```bash
+  k8sgpt analyze --explain --backend huggingface
+  ```
+
 ## LocalAI
 
 LocalAI is a local model, which is an OpenAI compatible API. It uses llama.cpp and ggml to run inference on consumer-grade hardware. Models supported by LocalAI for instance are Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.
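
For reviewers curious what the provider configured above does behind the scenes: the Hugging Face backend ultimately talks to Hugging Face's hosted Inference API, which accepts a POST with a bearer token and a JSON `inputs` payload. The sketch below is illustrative only, not K8sGPT's actual implementation; the model name, token value, and `build_request` helper are hypothetical, and the request is assembled but never sent.

```python
import json
import urllib.request

# Public Inference API endpoint shape; {model} is a repo id such as "org/name".
API_URL = "https://api-inference.huggingface.co/models/{model}"

def build_request(model: str, token: str, prompt: str) -> urllib.request.Request:
    """Assemble (without sending) an Inference API request for a text prompt."""
    return urllib.request.Request(
        API_URL.format(model=model),
        data=json.dumps({"inputs": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {token}",   # the key from huggingface.co/settings/tokens
            "Content-Type": "application/json",
        },
    )

# Hypothetical model and token, for illustration only.
req = build_request("mistralai/Mistral-7B-Instruct-v0.2", "hf_xxx", "Why is my pod crashing?")
print(req.full_url)
# → https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2
```

The same bearer-token-plus-model pairing is exactly what `k8sgpt auth add --backend huggingface` stores, which is why both an API key and an explicit model name are required.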