
Basic example from npm - audio doesn't work #643

Closed
methodbox opened this issue Dec 12, 2019 · 2 comments
@methodbox

I'm not sure if I'm missing something, but this example from npm seems to record only video.

Can someone explain how to capture the audio as well? It seems like it should work, based on the config, but all I get is a .webm file with no audio.

navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
}).then(async function(stream) {
    let recorder = RecordRTC(stream, {
        type: 'video'
    });
    recorder.startRecording();
 
    const sleep = m => new Promise(r => setTimeout(r, m));
    await sleep(3000);
 
    recorder.stopRecording(function() {
        let blob = recorder.getBlob();
        invokeSaveAsDialog(blob);
    });
});
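One way to narrow this down (a diagnostic sketch, not from the original thread): before blaming the recorder, check whether the stream you passed in actually contains an audio track. getUserMedia/getDisplayMedia can resolve with a video-only stream if audio permission was denied or the source has no audio. The helper name `describeStream` is an assumption; the `getAudioTracks()`/`getVideoTracks()` calls are standard MediaStream API.

```javascript
// Diagnostic sketch: count the tracks on a stream before recording.
// Works with any MediaStream-like object exposing getAudioTracks/getVideoTracks.
function describeStream(stream) {
  return {
    audio: stream.getAudioTracks().length,
    video: stream.getVideoTracks().length
  };
}

// Usage in the browser (assumption: called inside the .then(stream => ...) above):
//   console.log(describeStream(stream));
//   // { audio: 0, video: 1 } would mean no audio track was ever captured
```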
@methodbox (Author)

I should mention I'm using this with getDisplayMedia in a React component, like this:

_startScreenCapture() {
    navigator.mediaDevices
      .getDisplayMedia({
        video: true,
        audio: true
      })
      .then(stream => {
        let recorder = RecordRTC(stream, {
          type: "video"
        });
        recorder.startRecording();
        this.setState({ recorder: recorder, stream: stream });
      });
  }

I think the issue is that I also need to use getUserMedia and add an audio track to the stream, but I'm unclear on how to achieve this.

@methodbox (Author)

For anyone looking for a solution, I figured this out by referencing this page:

https://jmperezperez.com/mediarecorder-api-screenflow/

And this issue: muaz-khan/RecordRTC#181

Long story short: create one new, empty MediaStream and two source streams (one for video, one for audio), use addTrack() to add each source's track to the empty stream, and then feed the combined stream to the recorder.

Source Tracks > Empty Stream > Recorder

This was my solution (used in React):

_startScreenCapture() {
    const videoSource = () =>
      navigator.mediaDevices.getDisplayMedia({
        video: { mediaSource: "screen" }
      });

    const audioSource = () =>
      navigator.mediaDevices.getUserMedia({ audio: true });

    videoSource().then(vid => {
      audioSource()
        .then(audio => {
          const combinedStream = new MediaStream();
          const vidTrack = vid.getVideoTracks()[0];
          const audioTrack = audio.getAudioTracks()[0];

          combinedStream.addTrack(vidTrack);
          combinedStream.addTrack(audioTrack);
          return combinedStream;
        })
        .then(stream => {
          console.log(stream);
          let recorder = RecordRTC(stream, {
            // audio, video, canvas, gif
            type: "video",
            mimeType: "video/webm",
            recorderType: MediaStreamRecorder,
            disableLogs: true,
            timeSlice: 1000,
            bitsPerSecond: 128000,
            audioBitsPerSecond: 128000,
            videoBitsPerSecond: 128000,
            frameInterval: 90
          });
          recorder.startRecording();
          this.setState({ recorder: recorder, stream: stream });
        });
    });
  }
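To round this out (a sketch of my own, not from the thread): the method above starts recording and stashes `recorder` and `stream` in state, but you still need a matching stop step. The helper below is framework-free so the same logic works outside React; the function and parameter names are assumptions, while `stopRecording()` and `getBlob()` are RecordRTC's API and `getTracks()`/`stop()` are standard MediaStream calls. Stopping the tracks releases the screen/mic capture so the browser's recording indicator goes away.

```javascript
// Hedged sketch: stop a running RecordRTC recorder, hand the blob to a save
// callback (e.g. RecordRTC's invokeSaveAsDialog), and release the capture
// devices. Resolves with the recorded blob.
function stopAndSave(recorder, stream, save) {
  return new Promise(resolve => {
    recorder.stopRecording(() => {
      const blob = recorder.getBlob();  // the finished webm recording
      save(blob);                       // e.g. blob => invokeSaveAsDialog(blob)
      stream.getTracks().forEach(track => track.stop()); // release screen/mic
      resolve(blob);
    });
  });
}

// Usage in the React component above (assumption: state set by _startScreenCapture):
//   stopAndSave(this.state.recorder, this.state.stream, invokeSaveAsDialog)
//     .then(() => this.setState({ recorder: null, stream: null }));
```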
