How to install and stream data when getting started #71

Open · sammy-w opened this issue Sep 21, 2023 · 5 comments


sammy-w commented Sep 21, 2023

I'm just getting started with LSL to set up a lab with eye tracking and GSR.
I'm trying out several things, but I'm stuck and have no idea how to proceed.
When opening a stream using StreamOutlet(), I get an error: "could not bind multicast responder for ff02 .... to interface ::1".
I don't know what this means. Could it have to do with my network settings, or do I have to configure something?
Any help would be greatly appreciated.
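(For context, a minimal pylsl outlet along the lines of the call in question; the stream name and parameters below are invented for illustration, not taken from the original report. The multicast message appears when the StreamOutlet is constructed.)

```python
# Minimal pylsl outlet sketch - names and parameters are illustrative only.
from pylsl import StreamInfo, StreamOutlet

# name, content type, channel count, nominal rate, sample format, source id
info = StreamInfo('DemoGSR', 'GSR', 1, 128, 'float32', 'demo-gsr-001')

# Constructing the outlet starts liblsl's discovery machinery, including
# the multicast responder that logs the "could not bind ..." message on
# networks without working IPv6 multicast.
outlet = StreamOutlet(info)

outlet.push_sample([0.0])  # one sample with one float channel
```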

chkothe (Member) commented Sep 21, 2023

Hi, this is actually a benign log message. LSL uses multiple mechanisms to make streams visible between machines on your network, and multicast is the one that networks/routers often don't support - nothing to worry about there. It is in principle possible to get rid of the message using a config file, but most people just ignore it.
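(A sketch of the config-file route, assuming the documented liblsl configuration settings; confirm against the liblsl docs for your version. Disabling IPv6 should suppress the ff02::... multicast responder and its log message:)

```ini
; lsl_api.cfg - liblsl looks for this in the working directory,
; ~/lsl_api/lsl_api.cfg, or /etc/lsl_api/lsl_api.cfg (exact search
; paths may vary by platform and version).
[ports]
; disable | allow | force - disabling IPv6 stops liblsl from binding
; the ff02::... multicast responder that produces the message.
IPv6 = disable
```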

sammy-w (Author) commented Sep 25, 2023

Thank you for the explanation; it works now. I can start the eye-tracker data stream in one command window and then use LabRecorder to save the data to XDF. For my understanding: can I start multiple streams (e.g., Shimmer GSR and webcam) and record them together in LabRecorder? Do you have suggestions on how to tackle this, i.e., which apps to use?
Thanks again

chkothe (Member) commented Sep 25, 2023

Correct, that's the normal way to use LSL. The LabRecorder is multi-modal and will record any time series that comes in over LSL, regardless of what kind of device it is. As you've probably seen, each stream usually has its own meta-data specific to the content type (EEG, MoCap, etc.) to allow for subsequent interpretation, provided that the sender app specifies it in sufficient detail.

As for which apps, you'll have to review what's available for your devices. Some lists are here and here, but we keep finding things on the internet that have been developed by the community and that aren't (yet) catalogued in these lists, so a Google search is always worth it. The last resorts are usually a) to ask the vendor if they have LSL support (and if not, why not) and b) to roll your own and hopefully contribute it back even if it's a bit rough (someone may at some point take over maintainership).

As for data analysis, you can find XDF importers for various languages, and there are also some analysis software packages that can natively import it (e.g., EEGLAB, NeuroPype, etc.).
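(To make the meta-data point concrete, a pylsl sketch of a sender attaching channel meta-data; the stream and channel names are invented for the example:)

```python
# Sketch: attaching content-type meta-data with pylsl so that
# LabRecorder/XDF importers can interpret the stream later.
# Stream and channel names here are invented for illustration.
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo('ShimmerGSR', 'GSR', 1, 128, 'float32', 'shimmer-demo')

# Per-channel meta-data lives in the stream's XML description.
channels = info.desc().append_child('channels')
ch = channels.append_child('channel')
ch.append_child_value('label', 'GSR')
ch.append_child_value('unit', 'microsiemens')
ch.append_child_value('type', 'GSR')

outlet = StreamOutlet(info)
```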

sammy-w (Author) commented Oct 9, 2023

Thank you for the explanation. I managed to get it working: I can record data from an eye tracker (eyetechLSL) and a GSR device (pyshimmer) together with event markers. The final piece of the puzzle would be the webcam for recording video, but I couldn't find an app for that yet. The idea would be to send raw video data to LSL; do you know if that is possible, or how I should approach it?

cboulay (Contributor) commented Oct 10, 2023

Webcams have always been hard for LSL. There are many conversations about it in the labstreaminglayer GitHub org and in the liblsl repo in the sccn org.

One major problem is that webcams drop frames, and they don't tell you that they are dropping frames.
Another is that raw video is incredibly inefficient, and LSL is an awful way to store it: even a basic low-res stream at 1024 × 768 pixels × 3 colour bytes × 30 FPS is about 70 MB per second.

So what you should do is record the video in parallel to a separate file using a good video encoder, and send to LSL only the frame counts (+ LSL timestamps), while somehow compensating for dropped frames. That last part isn't always easy.
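(A rough sketch of that pattern with OpenCV and pylsl; the device index, codec, and file name are placeholders, and this does not solve dropped-frame compensation - it only timestamps the frames that actually arrive:)

```python
# Sketch: encode webcam video to a file while pushing per-frame
# index markers (with LSL timestamps) to an LSL outlet.
# Camera index, codec, and filename are placeholders. Frames dropped
# inside the camera/driver are NOT detected here.
import cv2
from pylsl import StreamInfo, StreamOutlet, local_clock

cap = cv2.VideoCapture(0)                 # placeholder device index
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # some drivers report 0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Encode to disk with a real codec instead of pushing raw pixels over LSL.
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter('webcam.mp4', fourcc, fps, (width, height))

# One int32 channel carrying the frame index; nominal rate 0 marks the
# stream as irregularly sampled, since frames are stamped as they arrive.
info = StreamInfo('WebcamFrames', 'VideoSync', 1, 0, 'int32', 'webcam-demo')
outlet = StreamOutlet(info)

frame_idx = 0
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        stamp = local_clock()             # LSL time when the frame arrived
        writer.write(frame)
        outlet.push_sample([frame_idx], stamp)
        frame_idx += 1
finally:
    cap.release()
    writer.release()
```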

But if you search around, you will find other solutions, some of which may work for you.
