
Proposal: Specify ability to pause and resume between adding and removing MediaStreamTracks to an active MediaStream #147

Open
guest271314 opened this issue Mar 18, 2018 · 6 comments

Comments

@guest271314

commented Mar 18, 2018

It is presently possible to record multiple videos or audio sources to a single webm Blob using canvas.captureStream() and AudioContext.createMediaStreamDestination(). This functionality should also be possible by:

  1. Adding MediaStreamTracks to, and removing them from, an active MediaStream;
  2. Changing the value of the src attribute of an HTMLMediaElement whose srcObject is set to an active MediaStream.

For 1., the MediaStream never reaches an inactive state: new video or audio tracks are added to the stream while the video and/or audio tracks previously rendered at the element are removed from the MediaStream.

For 2., the MediaStream momentarily becomes inactive while the media element loads the new media.

In either case the developer should have some means to pause and resume recording while tracks are being added and removed (or only added) to the MediaStream. The MediaRecorder instance should resume recording the newly added tracks, appending to the previously recorded data, once the new MediaStreamTracks are active.

Currently, when a MediaStreamTrack is added to an active MediaStream, only the initial MediaStreamTracks are recorded, though in some cases parts of the audio or video from subsequently added MediaStreamTracks are recorded as well. This behaviour should be consistent, with the developer having a clear means to pause and resume the MediaRecorder between adding and removing multiple MediaStreamTracks, the result being a single file comprising the totality of the recorded media.
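The requested pause/replace/resume semantics can be illustrated with a toy model. This is not the real MediaRecorder; the class name, methods, and the string track ids below are illustrative only. The point is the contrast: replacing tracks while paused should append to the recording, whereas today a mid-recording track-set change discards everything.

```javascript
// Toy model only (not the real MediaRecorder): illustrates the requested
// behaviour, where replacing tracks while paused appends to the recording
// instead of discarding it, as a mid-recording track-set change does today.
class RecorderModel {
  constructor(trackIds) {
    this.tracks = [...trackIds];
    this.state = "inactive";
    this.segments = []; // one entry per recorded span: the track set in use
  }
  start() {
    this.state = "recording";
    this.segments.push([...this.tracks]);
  }
  pause() {
    this.state = "paused";
  }
  replaceTracks(trackIds) {
    if (this.state === "recording") {
      // current spec behaviour: stop gathering and discard gathered data
      this.segments.length = 0;
      this.state = "inactive";
      return;
    }
    // requested behaviour: while paused, swap tracks without losing data
    this.tracks = [...trackIds];
  }
  resume() {
    this.state = "recording";
    this.segments.push([...this.tracks]);
  }
  stop() {
    this.state = "inactive";
    return this.segments; // a single result comprising all recorded spans
  }
}
```

Pausing, swapping the video track, and resuming yields both spans in one result, while swapping during recording empties the model, mirroring the inconsistency described above.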

@guest271314

Author

commented Mar 18, 2018

Related #4

@guest271314

Author

commented Apr 2, 2019

@yellowdoge cc @Pehrsons Re this issue and w3c/mediacapture-main#575, one solution would be to provide an option, e.g. {captureAsScreen: true} (defaulting to the current implementation when omitted), at HTMLMediaElement.captureStream(), particularly for a <video> element media source, and/or at MediaRecorder(). The option would treat the <video> element as a screen or device in the same manner as

navigator.mediaDevices.getDisplayMedia({video: true}) // Chromium

or

navigator.mediaDevices.getUserMedia({video: {mediaSource: "screen"}}) // Firefox

where the stream does not change state to "inactive" when the src property of the <video> is changed or the media resource has ended playback.

video.captureStream({captureAsScreen:true})

and/or

new MediaRecorder(stream, {captureAsScreen: true})

Created a proof of concept for Chromium. Will dive into Mozilla Firefox version within a day or so.
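The active/inactive semantics at issue can be sketched with plain objects (not real MediaStreamTracks; the helper name and track objects below are illustrative). Per Media Capture and Streams, a MediaStream is active while at least one of its tracks has not ended:

```javascript
// Sketch only (plain objects, not real MediaStreamTracks): a MediaStream is
// "active" while at least one track's readyState is still "live".
const isActive = (tracks) => tracks.some((t) => t.readyState === "live");

// HTMLMediaElement.captureStream() today: changing <video>.src ends the
// element's captured tracks, so the captured stream goes inactive...
const capturedTracks = [{ kind: "video", readyState: "live" }];
capturedTracks.forEach((t) => (t.readyState = "ended")); // src changed

// ...whereas a getDisplayMedia()-style capture keeps its track live across
// source changes, which is the behaviour {captureAsScreen: true} would opt into.
const screenTracks = [{ kind: "video", readyState: "live" }];
```

The proposal, in these terms, is for the captured track to remain "live" across src changes and ended playback, as a screen-capture track does.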

@guest271314

Author

commented Apr 2, 2019

@yellowdoge cc @Pehrsons Some code

(async() => {
  const video = document.createElement("video");
  video.id = "video";
  document.body.appendChild(video);
  document.head.insertAdjacentHTML("beforeend", `<style>#video {cursor:none} video::-webkit-media-controls,audio::-webkit-media-controls {display:none !important;}</style>`)
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: {
      cursor: "never" // this has little/no effect https://github.com/web-platform-tests/wpt/issues/16206
    }
  });

  console.log(stream, stream.getTracks());

  let done;
  const promise = new Promise(resolve => done = resolve);

  document.addEventListener("click", async e => {
    // setTimeout(() => document.body.requestPointerLock(), 5000);
    // try to avoid using Pointer Lock API
    await video.requestFullscreen({
      navigationUI: "hide"
    });

    let urls = await Promise.all([{
      src: "https://upload.wikimedia.org/wikipedia/commons/a/a4/Xacti-AC8EX-Sample_video-001.ogv",
      from: 0,
      to: 4
    }, {
      src: "https://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm#t=10,20"
    }, {
      from: 55,
      to: 60,
      src: "https://nickdesaulniers.github.io/netfix/demo/frag_bunny.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://raw.githubusercontent.com/w3c/web-platform-tests/master/media-source/mp4/test.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4"
    }, {
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4#t=0,6"
    }].map(async (props) => {
      const { src } = props;
      const blob = await (await fetch(src)).blob();
      return { blob, ...props };
    }));


    const context = new AudioContext();
    const mixedAudio = context.createMediaStreamDestination();
    const [audioTrack] = mixedAudio.stream.getAudioTracks();
    const [videoTrack] = stream.getVideoTracks();

    videoTrack.cursor = "never"; // this has little/no effect
    videoTrack.applyConstraints({
      cursor: "never"
    }); // this has little/no effect
    console.log(videoTrack.cursor, videoTrack.getConstraints(), videoTrack.getSettings());
    const mediaStream = new MediaStream([videoTrack, audioTrack]);

    [videoTrack, audioTrack].forEach(track => {
      track.onended = e => console.log(e);
    });

    const source = context.createMediaElementSource(video);
    source.connect(context.destination);
    source.connect(mixedAudio);
    const recorder = new MediaRecorder(mediaStream, {
      mimeType: "video/webm;codecs=vp8,opus",
      audioBitsPerSecond: 128000,
      videoBitsPerSecond: 2500000
    });
    recorder.addEventListener("error", e => {
      console.error(e)
    });
    recorder.addEventListener("dataavailable", e => {
      console.log(e.data);
      done(URL.createObjectURL(e.data));
    });
    recorder.addEventListener("stop", e => {
      console.log(e);
      [videoTrack, audioTrack].forEach(track => track.stop());
    });

    try {
      for (let [index, {
          from, to, src, blob
        }] of urls.entries()) {
        await new Promise(resolve => {
          const url = new URL(src);
          if (url.hash.length) {
            [from, to] = url.hash.match(/\d+\.\d+|\d+/g).map(Number); // decimals before integers
          }

          const blobURL = URL.createObjectURL(blob);

          video.addEventListener("canplay", e => {
            video.controls = false;
            // wait for fullscreen notification to toggle to off
            setTimeout(() => video.play(), index === 0 ? 7000 : 0);
          }, {
            once: true
          });

          video.addEventListener("playing", e => {
            if (recorder.state === "inactive") {
              recorder.start();
            }
          }, {
            once: true
          });


          video.addEventListener("pause", e => {
            resolve();
          }, {
            once: true
          });

          video.src = `${blobURL}#t=${from},${to}`;
        })
      }
      recorder.stop();
      // document.exitPointerLock();
      await document.exitFullscreen();
    } catch (e) {
      throw e;
    }
  }, {
    once: true
  });

  return await promise;
})()
.then(console.log, console.error);
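The snippet above derives from and to from the media-fragment hash (#t=from,to). That parsing can be isolated into a small node-runnable helper (the function name is ours); note the alternation order in the regex: decimals must be matched before integers, otherwise "5.5" splits into "5" and "5".

```javascript
// Helper extracted from the snippet above (name is illustrative):
// parse a media-fragment time range like "#t=10,20" or "#t=0,5.5".
const parseTimeFragment = (href) => {
  const { hash } = new URL(href);
  if (!hash.length) return null;
  // decimals before integers, so "5.5" is not split into "5" and "5"
  const [from, to] = hash.match(/\d+\.\d+|\d+/g).map(Number);
  return { from, to };
};
```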
@guest271314

Author

commented Apr 4, 2019

Similar at Firefox 68 (Nightly) using the same code.

@guest271314

Author

commented Apr 7, 2019

@yellowdoge cc @Pehrsons Reading the relevant specifications

Media Capture from DOM Elements

3. HTML Media Element Media Capture Extensions

Both MediaStream and HTMLMediaElement expose the concept of a track. Since there is no common type used for HTMLMediaElement, this document uses the term track to refer to either VideoTrack or AudioTrack. MediaStreamTrack is used to identify the media in a MediaStream.

HTML Standard

4.8.12.10 Media resources with multiple media tracks

A media resource can have multiple embedded audio and video tracks. For example, in addition to the primary video and audio tracks, a media resource could have foreign-language dubbed dialogues, director's commentaries, audio descriptions, alternative angles, or sign-language overlays.

There are only ever one AudioTrackList object and one VideoTrackList object per media element, even if another media resource is loaded into the element: the objects are reused. (The AudioTrack and VideoTrack objects are not, though.)

WebRTC 1.0: Real-time Communication Between Browsers

5.2 RTCRtpSender Interface

replaceTrack

6.4.3. If sending is true, and withTrack is not null, have the sender switch seamlessly to transmitting withTrack instead of the sender's existing track. (emphasis added)

NOTE

Changing dimensions and/or frame rates might not require negotiation. Cases that may require negotiation include:

  1. Changing a resolution to a value outside of the negotiated imageattr bounds, as described in [RFC6236].
  2. Changing a frame rate to a value that causes the block rate for the codec to be exceeded.
  3. A video track differing in raw vs. pre-encoded format.
  4. An audio track having a different number of channels.
  5. Sources that also encode (typically hardware encoders) might be unable to produce the negotiated codec; similarly, software sources might not implement the codec that was negotiated for an encoding source.

(issues re replaceTrack)

MediaStream Recording

2.3. Methods

start(optional unsigned long timeslice)

  1. If at any point, a track is added to or removed from the stream's track set, the UA MUST immediately stop gathering data, discard any data that it has gathered, and queue a task, using the DOM manipulation task source, that runs the following steps:

    1. Set state to inactive.
    2. Fire an error event named InvalidModificationError at target.
    3. Fire a blob event named dataavailable at target with blob.
    4. Fire an event named stop at target.

(issues re MediaRecorder and multiple video tracks)
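The spec steps quoted above can be modeled as a pure function (a sketch with plain objects, not browser code) returning the event sequence the UA must fire when the track set changes mid-recording:

```javascript
// Sketch of MediaStream Recording's track-set-change handling during start():
// stop gathering, discard gathered data, then fire error, dataavailable, stop.
const onTrackSetChange = (recorder) => {
  recorder.gathered = []; // immediately stop gathering and discard data
  recorder.state = "inactive"; // step 1: set state to inactive
  return [
    "error: InvalidModificationError", // step 2
    "dataavailable", // step 3 (with whatever blob remains)
    "stop", // step 4
  ];
};
```

This is precisely the behaviour that makes recording across track replacements impossible today: the gathered data is discarded rather than retained.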

In sum, the language in MediaStream Recording does not specifically state that a video track cannot be replaced by another video track. To that end, RTCRtpSender.replaceTrack()

The RTCRtpSender method replaceTrack() replaces the track currently being used as the sender's source with a new MediaStreamTrack. The new track must be of the same media kind (audio, video, etc) and switching the track should not require negotiation.

can be used to "seamlessly" replace a video track having the same codecs and constraints.

This issue should be construed as a request for an enhancement of MediaRecorder: add a replaceTrack() method with the same, similar, or enhanced functionality as RTCRtpSender.replaceTrack(), without having to explicitly use peer connections, e.g. recorderInstance.replaceTrack(withTrack). This should make it possible (given the same codecs and constraints, or even different codecs and constraints) to replace the current video and/or audio track with a new track.
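A hypothetical recorderInstance.replaceTrack(withTrack) could reuse the same precondition RTCRtpSender.replaceTrack() imposes: the replacement must be of the same media kind. A minimal check (the helper name and plain track objects are ours, for illustration):

```javascript
// Hypothetical precondition for recorderInstance.replaceTrack(withTrack),
// mirroring RTCRtpSender.replaceTrack(): same kind ("audio"/"video") required.
const canReplaceTrack = (existingTrack, withTrack) => {
  if (withTrack === null) return true; // replaceTrack(null) detaches the source
  return existingTrack.kind === withTrack.kind;
};
```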

Composed a proof of concept using two WebRTC RTCPeerConnection()s (which should not be necessary once the method is added to the MediaRecorder object), demonstrated in the following code:

<!DOCTYPE html>
<html>

<head>
  <title>Record media fragments to a single webm video using AudioContext.createMediaStreamDestination(), canvas.captureStream(), RTCPeerConnection(), RTCRtpSender.replaceTrack(), MediaRecorder()</title>
  
  <!--
  Try to achieve requirement using only native browser API, without any libraries
  <script src="https://cdnjs.cloudflare.com/ajax/libs/webrtc-adapter/6.4.0/adapter.min.js"></script>
  -->
  <!-- 
    Without using adapter.js at Chromium if {once: true} is not used at "icecandidate" event

    Uncaught (in promise) TypeError: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Candidate missing values for both sdpMid and sdpMLineIndex
    at RTCPeerConnection.
    Uncaught (in promise) TypeError: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Candidate missing values for both sdpMid and sdpMLineIndex
    at RTCPeerConnection.
  -->
  <!-- Without using adapter.js at Firefox

    SecurityError: The operation is insecure. debugger eval code:152
    mediaStreamTrackPromise debugger eval code:152
    dispatchEvent resource://gre/modules/media/PeerConnection.jsm:707
    _processTrackAdditionsAndRemovals resource://gre/modules/media/PeerConnection.jsm:1324
    onSetRemoteDescriptionSuccess resource://gre/modules/media/PeerConnection.jsm:1661
    haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1032
    haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1029
    AsyncFunctionNext self-hosted:839

    at `resolve()`

   -->
  <!-- 
    With adapter.js at Firefox, even with {once: true} set at the "icecandidate" event,
    the handler fires repeatedly; the console logs several "from" and "to" events of the form:

    from 
    icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
    Tc5OsSypnNbxzidJ:114:21
    to 
    icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
    Tc5OsSypnNbxzidJ:123:21
  -->

</head>

<body>
  <h1 id="click">click</h1>
  <video id="video" src="" controls autoplay></video>
  <video id="playlist" src="" controls muted></video>
  <script>
    const captureStream = mediaElement =>
      !!mediaElement.mozCaptureStream ? mediaElement.mozCaptureStream() : mediaElement.captureStream();

    const width = 320;
    const height = 240;
    const videoConstraints = {
      frameRate: 60,
      resizeMode: "crop-and-scale",
      width,
      height
    };
    const blobURLS = [];
    const urls = Promise.all([{
      src: "https://upload.wikimedia.org/wikipedia/commons/a/a4/Xacti-AC8EX-Sample_video-001.ogv",
      from: 0,
      to: 4
    }, {
      from: 10,
      to: 20,
      src: "https://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm#t=10,20"
    }, {
      from: 55,
      to: 60,
      src: "https://nickdesaulniers.github.io/netfix/demo/frag_bunny.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://raw.githubusercontent.com/w3c/web-platform-tests/master/media-source/mp4/test.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4"
    }, {
      from: 0,
      to: 5,
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4"
    }, {
      from: 0,
      to: 6,
      src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4#t=0,6"
    }].map(async({
      from,
      to,
      src
    }) => {
      try {
        const request = await fetch(src);
        const blob = await request.blob();
        const blobURL = URL.createObjectURL(blob);
        blobURLS.push(blobURL);
        return `${blobURL}#t=${from},${to}`;
      } catch (e) {
        throw e;
      }
    }));


    const playlist = document.getElementById("playlist");
    playlist.width = width;
    playlist.height = height;

    const video = document.getElementById("video");
    video.width = width;
    video.height = height;

    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    canvas.width = width;
    canvas.height = height;

    let recorder;
    let resolveResult;
    const promiseResult = new Promise(resolve => resolveResult = resolve);

    document.getElementById("click")
      .onclick = e =>
      (async() => {
        try {
          // create MediaStream, audio and video MediaStreamTrack
          const audioContext = new AudioContext();
          const audioContextDestination = audioContext.createMediaStreamDestination();
          let mediaStream = audioContextDestination.stream;
          const [audioTrack] = mediaStream.getAudioTracks();
          const [videoTrack] = canvas.captureStream().getVideoTracks();
          // apply same constraints
          videoTrack.applyConstraints(videoConstraints);
          mediaStream.addTrack(videoTrack);

          console.log("initial MediaStream, audio and video MediaStreamTracks", mediaStream, mediaStream.getTracks());

          let tracks = 0;

          const fromLocalPeerConnection = new RTCPeerConnection();
          const toLocalPeerConnection = new RTCPeerConnection();

          fromLocalPeerConnection.addEventListener("icecandidate", async e => {
            console.log("from", e);
            try {
              await toLocalPeerConnection.addIceCandidate(e.candidate ? e.candidate : null);
            } catch (e) {
              console.error(e);
            }
          }, {
            once: true
          });

          toLocalPeerConnection.addEventListener("icecandidate", async e => {
            console.log("to", e);
            try {
              await fromLocalPeerConnection.addIceCandidate(e.candidate ? e.candidate : null);
            } catch (e) {
              console.error(e);
            }
          }, {
            once: true
          });
          fromLocalPeerConnection.addEventListener("negotiationneeded", e => {
            console.log(e);
          });
          toLocalPeerConnection.addEventListener("negotiationneeded", e => {
            console.log(e);
          });
          const mediaStreamTrackPromise = new Promise(resolve => {
            toLocalPeerConnection.addEventListener("track", track => {
              console.log("track event", track);
              const {
                streams: [stream]
              } = track;
              console.log(tracks, stream.getTracks().length);
              // Wait for both "track" events
              if (typeof tracks === "number" && ++tracks === 2) {
                console.log(stream);
                // Reassign stream to initial MediaStream reference;
                // have only been able so far to get this working 
                // within "track" event referencing the stream property of the event
                mediaStream = stream;
                // set video srcObject to reassigned MediaStream
                video.srcObject = mediaStream;
                tracks = void 0;
                let result;
                recorder = new MediaRecorder(stream, {
                  mimeType: "video/webm;codecs=vp8,opus",
                  audioBitsPerSecond: 128000,
                  videoBitsPerSecond: 2500000
                });
                recorder.addEventListener("start", e => {
                  console.log(e);
                });
                recorder.addEventListener("stop", e => {
                  console.log(e);
                  resolveResult(result);
                });
                recorder.addEventListener("dataavailable", e => {
                  console.log(e);
                  result = e.data;
                });
                recorder.start();
                resolve();
              }
            });
          });
          // Add initial audio and video MediaStreamTrack to PeerConnection, pass initial MediaStream
          const audioSender = fromLocalPeerConnection.addTrack(audioTrack, mediaStream);
          const videoSender = fromLocalPeerConnection.addTrack(videoTrack, mediaStream);
          const offer = await fromLocalPeerConnection.createOffer();
          await toLocalPeerConnection.setRemoteDescription(offer);
          await fromLocalPeerConnection.setLocalDescription(toLocalPeerConnection.remoteDescription);
          const answer = await toLocalPeerConnection.createAnswer();
          await fromLocalPeerConnection.setRemoteDescription(answer);
          await toLocalPeerConnection.setLocalDescription(fromLocalPeerConnection.remoteDescription);
          const media = await urls;
          await mediaStreamTrackPromise;

          console.log(audioSender, videoSender, mediaStream);

          for (const blobURL of media) {
            await new Promise(async resolve => {
              playlist.addEventListener("canplay", async e => {
                console.log(e);
                await playlist.play();
                const stream = captureStream(playlist);
                const [playlistVideoTrack] = stream.getVideoTracks();
                const [playlistAudioTrack] = stream.getAudioTracks();
                // Apply same constraints on each video MediaStreamTrack
                playlistVideoTrack.applyConstraints(videoConstraints);
                // Replace audio and video MediaStreamTrack with a new media resource
                await videoSender.replaceTrack(playlistVideoTrack);
                await audioSender.replaceTrack(playlistAudioTrack);

                console.log(recorder.state, recorder.stream.getTracks());
              }, {
                once: true
              });

              playlist.addEventListener("pause", async e => {
                // await audioSender.replaceTrack(audioTrack);
                // await videoSender.replaceTrack(videoTrack);
                resolve();
              }, {
                once: true
              });
              playlist.src = blobURL;
            });
          }
          recorder.stop();
          blobURLS.forEach(blobURL => URL.revokeObjectURL(blobURL));
          mediaStream.getTracks().forEach(track => track.stop());
          [audioTrack, videoTrack].forEach(track => track.stop());
          fromLocalPeerConnection.close();
          toLocalPeerConnection.close();
          return await promiseResult;
        } catch (e) {
          throw e;
        }
      })()
      .then(blob => {
        console.log(blob);
        video.remove();
        playlist.remove();
        const videoStream = document.createElement("video");
        videoStream.width = width;
        videoStream.height = height;
        videoStream.controls = true;
        document.body.appendChild(videoStream);
        videoStream.src = URL.createObjectURL(blob);
      })
      .catch(console.error);
  </script>
</body>

</html>
@guest271314

This comment has been minimized.

Author

commented Apr 7, 2019
