This repository was archived by the owner on Oct 14, 2024. It is now read-only.

Description
The first paragraph reads:
This guide outlines the process for configuring Jan as a client for both remote and local API servers, using the mistral-ins-7b-q4 model for illustration. We'll show how to connect to Jan's API-hosting servers.
But the later screenshots show users how to change the endpoint and connect to Azure-style OpenAI proxy servers. The model shown in the documentation is still a GPT model, not mistral-ins-7b-q4 or any other custom model API that accepts OpenAI-style requests.
Are we targeting "any OpenAI-compatible API" or an "OpenAI proxying API"? To me, the former means specifying the endpoint, the API key, and, most importantly, the model to use. The latter, by contrast, means pointing at another endpoint with exactly the same layout as OpenAI's.
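To make the former interpretation concrete, here is a minimal sketch of what a client would need to expose for "any OpenAI-compatible API": all three of endpoint, key, and model are user-supplied. The URL and model name below are hypothetical placeholders, not values from the docs.

```python
import json

# Hypothetical values — in the "any OpenAI-compatible API" reading,
# the user must be able to set all three of these, not just the key.
base_url = "https://my-proxy.example.com/v1"  # custom endpoint (assumption)
api_key = "sk-placeholder"                    # user-supplied key
model = "mistral-ins-7b-q4"                   # custom model name

# The request body itself follows OpenAI's chat-completions layout:
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
url = f"{base_url}/chat/completions"

print(url)
print(json.dumps(payload))
```

Under the "OpenAI proxying API" reading, only `base_url` would change and the model would stay whatever the docs already show (a GPT model), which is exactly the ambiguity here.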
This is confusing and should be fixed.