
Change video/audio source on Chrome/Firefox #367

Closed
postacik opened this issue Oct 12, 2020 · 23 comments
@postacik

Hi,
I can switch between front and back cameras using the following function on mobile:

stream.getVideoTracks()[0].switchCamera();

Would it be possible to iterate through all the available cameras on Chrome (web) using the same function or an additional one?

Or could you add a function that lists the available cameras (and the available audio input sources) and selects one of them by index?

On Chrome, WebRTC lets you select from the available video and audio sources even during a conversation.

But on Firefox, the user chooses the video/audio source before the conversation starts, and it's not possible to switch to another source afterwards.

I'm not sure how this difference can be addressed.

PS: My app works fine on the latest versions of Chrome and Firefox but not on Safari. Do you have any plans to support Safari?

@cloudwebrtc
Member

@postacik Hi, on the web platform, we may need to use the RTCRtpSender.replaceTrack method.

@postacik
Author

postacik commented Oct 12, 2020

Hi,

I ran a test on the following WebRTC sample page:

Sample page

Go to the page and click "Start" + "Call".

Then open the DevTools console and paste the following code:

var deviceId = 0;
async function switchCamera() {
    //get all video devices
    var videoDevices = (await navigator.mediaDevices.enumerateDevices()).filter((device) => device.kind === "videoinput");
    console.log(videoDevices);
    deviceId++;
    if (deviceId >= videoDevices.length) deviceId = 0;
    var constraints = {
        audio: true, //{ deviceId: { exact: localVideo.srcObject.getAudioTracks()[0].id } },
        video: { deviceId: { exact: videoDevices[deviceId].deviceId } }
    };
    var newStream = await navigator.mediaDevices.getUserMedia(constraints);
    localVideo.srcObject = newStream;
    var videoSender = pc1.getSenders().filter(sender => sender.track.kind === "video")[0];
    videoSender.replaceTrack(newStream.getVideoTracks()[0]);
}

After this, each time you call switchCamera() the video element's source switches to the next camera and the sender's track is replaced as well.

Would this be enough to implement this feature in the Dart code?

It works on the latest versions of both Chrome and Firefox.

I could not fully understand the web implementation to create a PR, sorry.

PS: You need more than one camera on your PC for this to work. I use the "ManyCam" application to add a virtual camera to my Windows 10 machine for testing.


@cloudwebrtc
Member

cloudwebrtc commented Oct 12, 2020

(quoting the switchCamera snippet from the previous comment)

@postacik Cool, I think this is the correct code. Your code is very similar to what we need; it should work on Flutter Web without too much modification.

@postacik
Author

I'm not sure about one thing in this piece of code.
For this to work properly, the audio source must stay the same as before; only the video source must change.

var constraints = {
        audio: true, //{ deviceId: { exact: localVideo.srcObject.getAudioTracks()[0].id } },
        video: { deviceId: { exact: videoDevices[deviceId].deviceId } }
    };

However, I was not able to set the deviceId for the audio part (it raised an error), so I commented out the property that pins the audio device to the current one and just used the "true" value, which I assume selects the default audio device.

@wer-mathurin
Contributor

@cloudwebrtc
This will need some thought, because RTCRtpSender.replaceTrack() does not exist in the current html_dart2js implementation! This could be a use case for the @js library.

But there are other considerations: as of today, MediaStreamTrackWeb is where the switchCamera API is currently implemented, and there is no reference to RTCVideoRenderer / RTCPeerConnection in that class. See the code below to see what I mean; I added comments in the code.

@postacik here is what the code looks like if we transform it to Dart:

var _deviceCount = 0;

  @override
  Future<bool> switchCamera() async {
    //get all video devices
    final mediaDevices = html.window.navigator.mediaDevices;
    var devices = await mediaDevices.enumerateDevices();
    var videoDevices = devices.where((device) => device.kind == 'videoinput');

    _deviceCount++;
    if (_deviceCount >= videoDevices.length) _deviceCount = 0;

    var constraints = {
      'audio': true,
      'video': {
        'deviceId': {'exact': videoDevices.elementAt(_deviceCount).deviceId}
      }
    };

    var newStream = await navigator.mediaDevices.getUserMedia(constraints);

    //Local video renderer is not available here
    localVideo.srcObject = newStream;
    var videoSender = _jsRtcPeerConnection
        .getSenders()
        .where((sender) => sender.track.kind == 'video')
        .first;

    //videoSender does not implement replaceTrack
    videoSender.replaceTrack(newStream.getVideoTracks()[0]);

    return false;
  }

@wer-mathurin
Contributor

Also created a ticket to add support in the "native" library:
dart-lang/sdk#43775

@postacik
Author

In my app, I can renegotiate with the WebRTC server using a new stream, so switching between the available cams on the fly is not the only way to change cameras.

As far as I can understand from the lib/src/web/mediadevices_impl.dart file, I can get the available audio and video sources using the getSources() function, then create a mediaConstraints map like I did in the JavaScript sample and call getUserMedia() to get the stream of the camera I want.

Can I do that?

@postacik
Author

Well, I tried it with the following function:

  Future<MediaStream> _createLocalStream(int deviceIndex) async {
    final Map<String, dynamic> mediaConstraints = {
      'audio': true,
      'video': {
        'mandatory': {'minWidth': '640', 'minHeight': '360', 'minFrameRate': '25'},
        'facingMode': 'user', // or 'environment'
        'optional': [],
      }
    };
    var videoDevices = (await MediaDevices.getSources()).where((device) => device["kind"] == 'videoinput').toList();
    if (videoDevices.isNotEmpty) {
      mediaConstraints["video"]["deviceId"] = videoDevices[deviceIndex]["deviceId"];
    }
    MediaStream stream = await MediaDevices.getUserMedia(mediaConstraints);
    return stream;
  }

I have two cameras on my system. However, the following two statements both end up selecting the first cam (index 0) for the stream.

      mediaConstraints["video"]["deviceId"] = videoDevices[0]["deviceId"];
      mediaConstraints["video"]["deviceId"] = videoDevices[1]["deviceId"];

Do you have any idea what I might be doing wrong?

@postacik
Author

And I verified that the following JavaScript version of the sample code runs perfectly well and opens the camera I want:

try {
  var videoDevices = (await navigator.mediaDevices.enumerateDevices()).filter((device) => device.kind === "videoinput");
  var constraints = {
    audio: true,
    video: { deviceId: { exact: videoDevices[1].deviceId } }
  };
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  console.log('Received local stream');
  localVideo.srcObject = stream;
  localStream = stream;
} catch (e) {
  alert(`getUserMedia() error: ${e.name}`);
}

@wer-mathurin
Contributor

wer-mathurin commented Oct 15, 2020 via email

@postacik
Author

I've tried both of the following:

      mediaConstraints["video"] = {"deviceId": videoDevices[1]["deviceId"]};
      mediaConstraints["video"] = {
        "deviceId": {"exact": videoDevices[1]["deviceId"]}
      };

@wer-mathurin
Contributor

wer-mathurin commented Oct 15, 2020 via email

@postacik
Author

I'm doing all my tests on Chrome.

@postacik
Author

The JavaScript code works fine, but the Flutter code only ever opens the first cam.

@postacik
Author

I also tested the same on flutter-webrtc-demo.

I changed the following function in signaling.dart.

  Future<MediaStream> createStream(media, user_screen) async {
    final Map<String, dynamic> mediaConstraints = {
      'audio': true,
      'video': {
        'mandatory': {
          'minWidth':
              '640', // Provide your own width, height and frame rate here
          'minHeight': '480',
          'minFrameRate': '30',
        },
        'facingMode': 'user',
        'optional': [],
      }
    };

    var videoDevices = (await MediaDevices.getSources())
        .where((device) => device["kind"] == 'videoinput')
        .toList();
    print(videoDevices);
    if (videoDevices.isNotEmpty) {
      mediaConstraints["video"] = {
        "deviceId": {"exact": videoDevices[1]["deviceId"]}
      };
    }
    print(mediaConstraints);

    MediaStream stream = user_screen
        ? await MediaDevices.getDisplayMedia(mediaConstraints)
        : await MediaDevices.getUserMedia(mediaConstraints);
    if (this.onLocalStream != null) {
      this.onLocalStream(stream);
    }
    return stream;
  }

It also uses the cam at index 0, not the one at index 1 that I set in mediaConstraints.

@postacik
Author

Can the following question be related to this?

https://stackoverflow.com/questions/61161135/adding-support-for-navigator-mediadevices-getusermedia-to-dart

getUserMedia is defined in html_dart2js.dart as below:

  Future<MediaStream> getUserMedia([Map? constraints]) {
    var constraints_dict = null;
    if (constraints != null) {
      constraints_dict = convertDartToNative_Dictionary(constraints);
    }
    return promiseToFuture<MediaStream>(
        JS("", "#.getUserMedia(#)", this, constraints_dict));
  }

Could it be that this implementation is not calling the same function we use in the following JavaScript statement:

navigator.mediaDevices.getUserMedia(constraints);

@wer-mathurin
Contributor

wer-mathurin commented Oct 15, 2020

I was able to reproduce the problem.

After a couple of tests, here is what I have found so far with Chrome.

There are several strange things happening behind the scenes:

  1. Before access to a camera is authorized...
    The first time getSource() is called, it only finds the default camera as defined in Chrome, and the deviceId is not populated when only one camera is retrieved. So this is a problem with enumerateDevices() => html_dart2js.dart

  2. After authorization has been granted
    If you call getSource(), all the cameras are retrieved and the deviceId is populated correctly... BUT

When you pass the constraints to getUserMedia(), it always opens the default camera and does not consider the deviceId passed in the constraints. To test this, you can change the default camera in Chrome and you will see that it opens the default one.

This is not a problem in this library's implementation, but in the html_dart2js.dart implementation.

So I will file two bugs with the Dart team! I will update this issue afterwards.

In the meantime, if someone familiar with the JS library can try, please post a workaround.

Related to : dart-lang/sdk#43801
Related to: dart-lang/sdk#43802

@postacik
Author

Thanks for testing and all your efforts. 👍

@postacik
Author

postacik commented Oct 15, 2020

I think I found the reason for the error.

The html.window.navigator.mediaDevices.getUserMedia(mediaConstraints) function runs internally like this:

  Future<MediaStream> getUserMedia([Map? constraints]) {
    var constraints_dict = null;
    if (constraints != null) {
      constraints_dict = convertDartToNative_Dictionary(constraints);
    }
    return promiseToFuture<MediaStream>(
        JS("", "#.getUserMedia(#)", this, constraints_dict));
  }

The constraints parameter is converted to a JS object using convertDartToNative_Dictionary, which does not work recursively, as you can see in its implementation.

/// Converts a flat Dart map into a JavaScript object with properties.
convertDartToNative_Dictionary(Map? dict, [void postCreate(Object? f)?]) {
  if (dict == null) return null;
  var object = JS('var', '{}');
  if (postCreate != null) {
    postCreate(object);
  }
  dict.forEach((key, value) {
    JS('void', '#[#] = #', object, key, value);
  });
  return object;
}

So the function ignores all the sub-items in the "video" parameter.

If you implement the getUserMedia function the same way you did the getDisplayMedia function, I think it would work:

      final mediaDevices = html.window.navigator.mediaDevices;
      if (jsutil.hasProperty(mediaDevices, 'getUserMedia')) {
        final arg = JsObject.jsify(mediaConstraints);

        final jsStream = await jsutil.promiseToFuture<html.MediaStream>(
            jsutil.callMethod(mediaDevices, 'getUserMedia', [arg]));
        return MediaStream(jsStream, 'local');
      }
Can you do it, or shall I create a PR?

@wer-mathurin
Contributor

wer-mathurin commented Oct 15, 2020 via email

@wer-mathurin wer-mathurin mentioned this issue Oct 15, 2020
@wer-mathurin
Contributor

Try it out, I merged it on master. But it still does not resolve the problem with the list of devices before authorization is granted.
I quickly tried to do it ourselves, like this:

final mediaDevices = html.window.navigator.mediaDevices;
if (jsutil.hasProperty(mediaDevices, 'enumerateDevices')) {
  final devices = await jsutil.promiseToFuture<List<dynamic>>(
      jsutil.callMethod(mediaDevices, 'enumerateDevices', []));
}

But I experienced the same behavior.
@postacik: I propose to open another ticket to track it in its own issue and close this one.

@postacik
Author

I think the enumerateDevices issue is the expected behaviour of the browser, so it has nothing to do with the Dart implementation.

Thank you very much, closing this one. 👍
