Conversation
Hi @ankykong, would this script work for Gemini?

```javascript
const WebSocket = require("ws");

const url = "ws://0.0.0.0:4000/v1/realtime?model=gpt-4o-realtime-preview";
const ws = new WebSocket(url, {
  headers: {
    "Authorization": `Bearer sk-1234`,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", function open() {
  console.log("Connected to server.");
  ws.send(JSON.stringify({
    type: "response.create",
    response: {
      modalities: ["text"],
      instructions: "Please assist the user.",
    }
  }));
});

ws.on("message", function incoming(message) {
  console.log(JSON.parse(message.toString()));
});

ws.on("error", function handleError(error) {
  console.error("Error: ", error);
});
```
Yes, more or less, just a couple of changes:

```python
import json
import os

import websocket
from dotenv import load_dotenv

load_dotenv()

# URL for the WebSocket connection
url = "ws://0.0.0.0:4000/v1/realtime?model=gemini-live"
token = os.getenv("GOOGLE_TOKEN")

# Headers for the connection - ensure these are properly formatted
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {token}",
}

def on_open(ws):
    print("Connected to server.")

def on_message(ws, message):
    try:
        print(json.loads(message))
    except json.JSONDecodeError:
        print(f"Received non-JSON message: {message}")

def on_error(ws, error):
    print(f"Error: {error}")

def on_close(ws, close_status_code, close_msg):
    print(f"Connection closed: {close_status_code} - {close_msg}")

# Create WebSocket connection with proper header handling
ws = websocket.WebSocketApp(
    url,
    header=[f"{k}: {v}" for k, v in headers.items()],  # Format headers correctly
    on_open=on_open,
    on_message=on_message,
    on_error=on_error,
    on_close=on_close,
)

# Start the WebSocket connection
if __name__ == "__main__":
    # Enable trace for debugging if needed
    websocket.enableTrace(True)
    ws.run_forever()
```

Mainly the headers! Otherwise it is functional :)
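One difference from the JavaScript script above: this `on_open` only logs the connection and never sends the `response.create` event. If you want the two clients to behave identically, a minimal sketch of the handler (reusing the exact event body from the JavaScript version) might look like:

```python
def on_open(ws):
    print("Connected to server.")
    # Send the same "response.create" event the JavaScript client
    # sends on open, serialized with json.dumps for the wire.
    ws.send(json.dumps({
        "type": "response.create",
        "response": {
            "modalities": ["text"],
            "instructions": "Please assist the user.",
        },
    }))
```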
Also, the config file has to have the important updates:

```yaml
- model_name: gemini-live
  litellm_params:
    model: vertexai/gemini-2.0-flash
    vertex_project: "project-id"
    vertex_location: "us-central1"
    vertex_credentials: "/Users/ankurduggal/Desktop/Open Source/litellm/credentials.json"
```
That's not really the same though; you have to change the input as well. The point of LiteLLM is to unify everything in the same format, so that developers don't need to add if/else statements in their code when going across providers. (See the sketch below for what that should look like.)
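To make that point concrete: with a unified realtime API, switching providers should only change the model name in the URL, while the headers and event payloads stay identical. A minimal sketch of the idea (the `realtime_url` helper below is hypothetical, not part of the PR):

```python
# Hypothetical sketch: with a unified realtime API, only the model name
# changes between providers; the request shape stays identical.
BASE_URL = "ws://0.0.0.0:4000/v1/realtime"

def realtime_url(model: str) -> str:
    # Same endpoint and query parameter for every provider.
    return f"{BASE_URL}?model={model}"

openai_url = realtime_url("gpt-4o-realtime-preview")
gemini_url = realtime_url("gemini-live")
# The headers and the "response.create" payload would be byte-for-byte
# identical for both connections.
```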
Sorry, the config is not necessary. I forgot to remove it.
Can I pass in this input and expect the call to work?

Yeah, it should work now :)
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Title
Integrates Gemini Multimodal Live WebSocket Connection
Relevant issues
Adds Feature #7294
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added testing in the `tests/litellm/` directory; adding at least 1 test is a hard requirement ([see details](https://docs.litellm.ai/docs/extras/contributing_code))
- My PR passes all unit tests (`make test-unit`)

Type
🆕 New Feature
Changes
Added support for Gemini Live.