# openai-realtime-proxy

Safely deploy OpenAI's Realtime APIs in less than 5 minutes!


The OpenAI Realtime API provides a seamless voice-to-voice conversation experience. To reduce latency, it establishes a WebSocket connection between the client and the backend. However, production apps likely need a proxy sitting in the middle to handle authentication and rate limiting, and to avoid leaking sensitive data such as your API key.

This library takes care of the proxying part, allowing you to focus on the rest of your application.
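To run the example below, your project needs `axum`, `tokio`, and this crate as dependencies. A minimal `Cargo.toml` sketch (version numbers are illustrative; the crate name is inferred from the `realtime_proxy` module path, so check crates.io for the exact name and latest versions):

```toml
[dependencies]
axum = { version = "0.7", features = ["ws"] }   # "ws" enables WebSocket support
tokio = { version = "1", features = ["full"] }
realtime-proxy = "0.1"                          # version illustrative
```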

```rust
use axum::{extract::WebSocketUpgrade, response::IntoResponse, routing::get, Router};

#[tokio::main]
async fn main() {
    let app = Router::new().route("/ws", get(ws_handler));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    // check for authentication/access/etc. here

    let proxy = realtime_proxy::Proxy::new(
        std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY env var not set.")
    );

    ws.on_upgrade(|socket| proxy.handle(socket))
}
```
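The `// check for authentication` comment in `ws_handler` marks where you should gate access before upgrading the connection. A common pattern is to require a bearer token from the client and reject the request otherwise. Here is a minimal sketch of the token-parsing step; the `extract_bearer_token` helper is hypothetical and not part of this crate:

```rust
/// Hypothetical helper: pull the token out of an `Authorization` header
/// value, returning `None` when the header is missing, uses a different
/// scheme, or carries an empty token. Validating the token against your
/// own session store or JWT logic is up to you.
fn extract_bearer_token(header: Option<&str>) -> Option<&str> {
    // `strip_prefix` returns `None` unless the value starts with "Bearer ".
    let token = header?.strip_prefix("Bearer ")?;
    if token.is_empty() { None } else { Some(token) }
}
```

In the handler you would read the `Authorization` header (for example via axum's `HeaderMap` extractor), run a check like this, and return `StatusCode::UNAUTHORIZED` before ever calling `ws.on_upgrade` when it fails.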

Refer to the documentation on docs.rs for detailed usage instructions.

## License

This project is licensed under the MIT License - see the LICENSE file for details.
