OpenAI, Anthropic, and Gemini integrations with support for local LLMs

zeitlings/ayai-gpt-nexus


ChatGPT, Claude, Perplexity, and Gemini integrations for chat, information retrieval, and text processing tasks such as paraphrasing, simplifying, or summarizing, with support for third-party proxies and local LLMs.


Note

This is an alpha preview version of the workflow. You can download it here: Ayai · GPT Nexus


B. Usage

Keyword

  • to continue the ongoing chat.
  • to start a new conversation.
  • to view the chat history.
  • Hidden Option
    • to open the workflow configuration.

Chat Window

  • to ask a question.
  • to start a new conversation.
  • to copy the last answer.
  • to copy the full conversation.
  • to stop generating an answer.
  • Hidden Options
    • ⇧⌥⏎ to show the configuration info in the HUD.
    • ⇧⌃⏎ to speak the last answer out loud.
    • ⇧⌘⏎ to edit a multi-line prompt in a separate window.
      • to switch between the editor and the Markdown preview.
      • to ask the question.
      • ⇧⌘⏎ to start a new conversation.

Chat History

  • Type to filter archived chats based on your query.
  • to continue a previous chat.
  • to view the modification date.
  • L to inspect the unabridged preview as large type.
  • to send the conversation to the trash.

C. Prompting

A prompt is the text that you give the model to elicit, or "prompt," a relevant output. A prompt is usually in the form of a question or instructions.

D. Configuration

Primary

The primary configuration setting determines the service that is used for conversations.

OpenAI Proxies¹

If you want to use a third-party proxy, define the corresponding host, path, API key, model, and, if required, the URL scheme or port in the environment variables. The variables are prefixed as alternatives to OpenAI because Ayai expects the returned stream events and errors to mirror the shape of those returned by the OpenAI API.
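"Mirroring the shape of the OpenAI API" means the proxy must accept the standard chat-completions request body and stream back the same event format. The sketch below illustrates what that request looks like; the environment variable names are hypothetical stand-ins, not the actual names the workflow defines.

```python
import json
import os

# Hypothetical variable names for illustration only; the workflow
# defines its own OpenAI-prefixed alternatives in its configuration.
host = os.environ.get("alt_openai_host", "openrouter.ai")
path = os.environ.get("alt_openai_path", "/api/v1/chat/completions")
api_key = os.environ.get("alt_openai_api_key", "<your-key>")
model = os.environ.get("alt_openai_model", "openai/gpt-4o-mini")

# A proxy is compatible if it accepts this body and streams back
# `chat.completion.chunk` events the way the OpenAI API does.
body = {
    "model": model,
    "stream": True,
    "messages": [{"role": "user", "content": "Hello"}],
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
print(json.dumps(body))
```

Any endpoint that deviates from this request/response shape will break the workflow's stream and error handling, which is why only OpenAI-compatible proxies are supported.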

Local LLMs²

If you want to use a local language model, define the corresponding URL scheme, host, port, path, and, if required, the model in the environment variables to establish a connection to the local HTTP server initiated and maintained by the method of your choice. The variables are prefixed as alternatives to OpenAI because Ayai expects the returned stream events and errors to mirror the shape of those returned by the OpenAI API.
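The four parts above (scheme, host, port, path) simply compose into the endpoint URL the workflow talks to. A minimal sketch, assuming hypothetical variable names and Ollama's default port as the fallback:

```python
import os

# Hypothetical variable names for illustration only; the workflow
# defines its own OpenAI-prefixed alternatives in its configuration.
scheme = os.environ.get("local_scheme", "http")
host = os.environ.get("local_host", "localhost")
port = os.environ.get("local_port", "11434")  # Ollama's default port
path = os.environ.get("local_path", "/v1/chat/completions")

# The endpoint assembled from the parts above.
endpoint = f"{scheme}://{host}:{port}{path}"
print(endpoint)
```

With the defaults shown, this yields `http://localhost:11434/v1/chat/completions`, an OpenAI-compatible endpoint that servers like Ollama or LM Studio can expose locally.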

Footnotes

  1. Third-party proxies such as OpenRouter, Groq, Fireworks, or Together.ai.

  2. Local HTTP servers can be set up with interfaces such as LM Studio or Ollama.