An open-source Swift package that simplifies LLM message completions, inspired by liteLLM and adapted for Swift developers, following Swift conventions.
Call different LLM APIs using the OpenAI format; currently supporting OpenAI, Anthropic, and Gemini.
- Open your Swift project in Xcode.
- Go to `File` -> `Add Package Dependency`.
- In the search bar, enter this URL.
- Choose the version you'd like to install.
- Click `Add Package`.
Remember that your API keys are a secret! Do not share them with others or expose them in any client-side code (browsers, apps). Production requests must be routed through your backend server, where your API keys can be securely loaded from an environment variable or a key management service.
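As a minimal sketch of the environment-variable approach, you can read the key at runtime instead of hardcoding it. The variable name `OPENAI_API_KEY` below is an assumption; use whatever name your deployment defines:

```swift
import Foundation

// Hypothetical helper: look up an API key from the process environment
// rather than embedding it in source. Returns nil when the variable is unset.
func apiKey(named name: String) -> String? {
    ProcessInfo.processInfo.environment[name]
}

// The variable name is an assumption for illustration.
let key = apiKey(named: "OPENAI_API_KEY") ?? "missing"
```
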
To interface with different LLMs, you only need to supply the corresponding LLM configuration and adjust the parameters accordingly.
First, import the PolyAI package:
```swift
import PolyAI
```
Then, define the LLM configurations. Currently, OpenAI, Anthropic and Gemini are supported:
```swift
let openAIConfiguration: LLMConfiguration = .openAI(.api(key: "your_openai_api_key_here"))
let anthropicConfiguration: LLMConfiguration = .anthropic(apiKey: "your_anthropic_api_key_here")
let geminiConfiguration: LLMConfiguration = .gemini(apiKey: "your_gemini_api_key_here")

let configurations = [openAIConfiguration, anthropicConfiguration, geminiConfiguration]
```
With the configurations set, initialize the service:
```swift
let service = PolyAIServiceFactory.serviceWith(configurations)
```
Now, you have access to OpenAI, Anthropic and Gemini APIs in a single package. 🚀
To send a message using OpenAI:
```swift
let prompt = "How are you today?"
let parameters: LLMParameter = .openAI(model: .gpt4turbo, messages: [.init(role: .user, content: prompt)])
let stream = try await service.streamMessage(parameters)
```
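A typical way to consume the returned stream is an async `for` loop. The sketch below stands in a plain `AsyncThrowingStream` of string chunks for illustration, since the exact element type PolyAI yields may differ:

```swift
import Foundation

// Stand-in for the stream returned by service.streamMessage(parameters).
// Here we fake three chunks; in practice each element would carry a
// partial piece of the model's reply.
let stream = AsyncThrowingStream<String, Error> { continuation in
    for chunk in ["How ", "are ", "you?"] { continuation.yield(chunk) }
    continuation.finish()
}

var reply = ""
for try await chunk in stream {
    reply += chunk   // append each partial chunk as it arrives
}
print(reply) // → "How are you?"
```
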
To interact with Anthropic instead, all you need to do is change just one line of code! 🔥
```swift
let prompt = "How are you today?"
let parameters: LLMParameter = .anthropic(model: .claude3Sonnet, messages: [.init(role: .user, content: prompt)], maxTokens: 1024)
let stream = try await service.streamMessage(parameters)
```
To interact with Gemini instead, all you need to do (again) is change just one line of code! 🔥
```swift
let prompt = "How are you today?"
let parameters: LLMParameter = .gemini(model: "gemini-1.5-pro-latest", messages: [.init(role: .user, content: prompt)], maxTokens: 2000)
let stream = try await service.streamMessage(parameters)
```
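The one-line swap works because each provider is a separate case of the parameter enum. The toy sketch below uses hypothetical names, independent of PolyAI's actual types, to illustrate that dispatch pattern:

```swift
// Toy illustration of the enum-per-provider pattern: one case per LLM
// vendor, with provider-specific associated values. Names here are
// hypothetical, not PolyAI's real API.
enum ToyParameter {
    case openAI(model: String)
    case anthropic(model: String, maxTokens: Int)
    case gemini(model: String, maxTokens: Int)

    var providerDescription: String {
        switch self {
        case .openAI(let model):
            return "OpenAI/\(model)"
        case .anthropic(let model, let max):
            return "Anthropic/\(model) (max \(max) tokens)"
        case .gemini(let model, let max):
            return "Gemini/\(model) (max \(max) tokens)"
        }
    }
}

// Swapping providers means changing only which case you construct.
let described = ToyParameter.anthropic(model: "claude-3-sonnet", maxTokens: 1024).providerDescription
print(described) // → "Anthropic/claude-3-sonnet (max 1024 tokens)"
```
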
To access the OpenAI API via Azure, you can use the following configuration setup.
```swift
let azureConfiguration: LLMConfiguration = .openAI(.azure(configuration: .init(resourceName: "YOUR_RESOURCE_NAME", openAIAPIKey: .apiKey("YOUR_API_KEY"), apiVersion: "THE_API_VERSION")))
```
More information can be found here.
To access the OpenAI API via AIProxy, use the following configuration setup.
```swift
let aiProxyConfiguration: LLMConfiguration = .openAI(.aiProxy(aiproxyPartialKey: "hardcode_partial_key_here", aiproxyDeviceCheckBypass: "hardcode_device_check_bypass_here"))
```
More information can be found here.
Open a PR for any proposed change, pointing it to the `main` branch.