This simple app demonstrates integration with the Zoom Realtime Media Streams SDK for Node.js.
The SDK is already listed in the package dependencies, so a single install pulls in everything:
npm install
Copy the example environment file and fill in your credentials:
cp .env.example .env
Set your Zoom OAuth credentials:
ZM_RTMS_CLIENT=your_client_id
ZM_RTMS_SECRET=your_client_secret
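The app expects these values in the environment at startup. If you bootstrap your own entry point, here's a minimal sketch, assuming the dotenv package (an assumption; any method of populating process.env works):

// Load .env into process.env before the SDK initializes
import 'dotenv/config';

// Fail fast if credentials are missing
for (const name of ['ZM_RTMS_CLIENT', 'ZM_RTMS_SECRET']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}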
Start the application:
npm start
For webhook testing with ngrok:
ngrok http 8080
Use the generated ngrok URL as your Zoom webhook endpoint. Then, start a meeting to see your data!
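Note that Zoom validates a newly registered webhook endpoint by sending an endpoint.url_validation event and expecting an HMAC-signed echo of its plainToken. If you handle the webhook with your own HTTP server, a sketch of the validation step, assuming Express and a ZOOM_WEBHOOK_SECRET_TOKEN variable holding your app's webhook secret token (both assumptions):

import crypto from 'crypto';
import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const { event, payload } = req.body;
  if (event === 'endpoint.url_validation') {
    // Sign the plainToken with the webhook secret token and echo both back
    const encryptedToken = crypto
      .createHmac('sha256', process.env.ZOOM_WEBHOOK_SECRET_TOKEN)
      .update(payload.plainToken)
      .digest('hex');
    return res.json({ plainToken: payload.plainToken, encryptedToken });
  }
  res.sendStatus(200);
});

app.listen(8080);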
Here's how to use the SDK in your own application.
ES Modules:
import rtms from "@zoom/rtms";
CommonJS:
const rtms = require('@zoom/rtms').default;
The SDK supports two approaches for connecting to meetings:
Use the class-based approach, with one Client instance per meeting, to handle multiple concurrent meetings:
// Create clients for each meeting
const client = new rtms.Client();

// Set up callbacks
client.onAudioData((buffer, size, timestamp, metadata) => {
  console.log(`Audio from ${metadata.userName}: ${size} bytes`);
});

client.onVideoData((buffer, size, timestamp, metadata) => {
  console.log(`Video from ${metadata.userName}: ${size} bytes`);
});

client.onTranscriptData((buffer, size, timestamp, metadata) => {
  const text = buffer.toString('utf8');
  console.log(`${metadata.userName}: ${text}`);
});

// Join the meeting
client.join({
  meeting_uuid: "meeting-uuid",
  rtms_stream_id: "stream-id",
  server_urls: "wss://rtms.zoom.us"
});
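As a usage example, here is a minimal sketch that saves the incoming audio to disk (the file name is arbitrary; the buffer format depends on the audio parameters described below):

import fs from 'fs';

const audioOut = fs.createWriteStream('meeting-audio.raw');

client.onAudioData((buffer, size, timestamp, metadata) => {
  // Append each raw frame to the file as it arrives
  audioOut.write(buffer);
});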
Use the global, singleton-style functions for simple single-meeting applications:
// Set up global callbacks
rtms.onAudioData((buffer, size, timestamp, metadata) => {
  console.log(`Audio from ${metadata.userName}: ${size} bytes`);
});

rtms.onTranscriptData((buffer, size, timestamp, metadata) => {
  const text = buffer.toString('utf8');
  console.log(`${metadata.userName}: ${text}`);
});

// Join the meeting
rtms.join({
  meeting_uuid: "meeting-uuid",
  rtms_stream_id: "stream-id",
  server_urls: "wss://rtms.zoom.us"
});
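The global functions manage a single implicit connection, so this approach suits one meeting at a time; if meetings can overlap, use the class-based approach instead.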
Set up webhook handling to automatically connect when meetings start:
// Listen for Zoom webhook events
rtms.onWebhookEvent(({ event, payload }) => {
  if (event === "meeting.rtms_started") {
    const client = new rtms.Client();

    // Configure callbacks
    client.onAudioData((buffer, size, timestamp, metadata) => {
      // Process audio data
    });

    // Join using webhook payload
    client.join(payload);
  }
});
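When several meetings run concurrently, you'll typically want to keep track of the clients so each can be cleaned up when its meeting ends. A sketch, assuming the webhook payload includes the meeting_uuid field used in the join options above:

const activeClients = new Map();

rtms.onWebhookEvent(({ event, payload }) => {
  if (event !== "meeting.rtms_started") return;

  const client = new rtms.Client();
  activeClients.set(payload.meeting_uuid, client);

  // Drop the client once the meeting ends
  client.onLeave((reason) => activeClients.delete(payload.meeting_uuid));

  client.join(payload);
});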
Configure audio, video, and deskshare processing parameters before joining:
client.setAudioParams({
  contentType: rtms.AudioContentType.RAW_AUDIO,
  codec: rtms.AudioCodec.OPUS,
  sampleRate: rtms.AudioSampleRate.SR_16K,
  channel: rtms.AudioChannel.STEREO,
  dataOpt: rtms.AudioDataOption.AUDIO_MIXED_STREAM,
  duration: 20, // 20ms frames
  frameSize: 640 // 16kHz * 2 channels * 20ms
});
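The frameSize value follows from the other audio settings; assuming it counts samples per frame (per the inline comment above), the arithmetic is:

// samples per frame = sampleRate * channels * duration in seconds
const samplesPerFrame = (sampleRate, channels, durationMs) =>
  sampleRate * channels * (durationMs / 1000);

console.log(samplesPerFrame(16000, 2, 20)); // 640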
client.setVideoParams({
  contentType: rtms.VideoContentType.RAW_VIDEO,
  codec: rtms.VideoCodec.H264,
  resolution: rtms.VideoResolution.HD,
  dataOpt: rtms.VideoDataOption.VIDEO_SINGLE_ACTIVE_STREAM,
  fps: 30
});
client.setDeskshareParams({
  contentType: rtms.VideoContentType.RAW_VIDEO,
  codec: rtms.VideoCodec.H264,
  resolution: rtms.VideoResolution.FHD,
  dataOpt: rtms.VideoDataOption.VIDEO_SINGLE_ACTIVE_STREAM,
  fps: 15
});
The SDK provides the following callbacks:

- onJoinConfirm(reason) - Join confirmation
- onSessionUpdate(op, sessionInfo) - Session state changes
- onUserUpdate(op, participantInfo) - Participant join/leave
- onAudioData(buffer, size, timestamp, metadata) - Audio data
- onVideoData(buffer, size, timestamp, metadata) - Video data
- onTranscriptData(buffer, size, timestamp, metadata) - Live transcription
- onLeave(reason) - Meeting ended
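The session and participant callbacks aren't shown in the examples above; a brief sketch using the signatures listed (the exact values of op are defined in the SDK reference):

client.onSessionUpdate((op, sessionInfo) => {
  console.log(`Session update (op=${op}):`, sessionInfo);
});

client.onUserUpdate((op, participantInfo) => {
  console.log(`Participant update (op=${op}):`, participantInfo);
});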
For complete parameter options and detailed documentation:
- Audio Parameters - Complete audio configuration options
- Video Parameters - Complete video configuration options
- Deskshare Parameters - Complete deskshare configuration options
- Full API Documentation - Complete SDK reference