WIP real-time base client #4110

Closed · wants to merge 7 commits

Conversation

9 participants
@teonbrooks
Member

teonbrooks commented Mar 28, 2017

TODOs

  • Add example for LSL client (using simulation)
  • Add test for _BaseClient
  • Add tests for LSL client
@jasmainak
Member

jasmainak commented Mar 29, 2017

@teonbrooks have you looked at this by @alexandrebarachant: https://github.com/alexandrebarachant/muse-lsl ? Let's try to finish the refactoring tomorrow morning and then go for the openbci client later in the day ...

mne/realtime/client.py
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Matti Hamalainen <msh@nmr.mgh.harvard.edu>
# Authors: Teon Brooks <teon.brooks@gmail.com>
# Mainak Jas <mainak@neuro.hut.fi>

@jasmainak
Member

jasmainak Mar 29, 2017

I think the diff is really unreadable if we rename the files. I'd prefer keeping the filenames as is for now ...

@codecov-io

codecov-io commented Mar 29, 2017

Codecov Report

Merging #4110 into master will decrease coverage by 0.23%.
The diff coverage is 31.57%.

@@            Coverage Diff             @@
##           master    #4110      +/-   ##
==========================================
- Coverage   86.18%   85.95%   -0.24%     
==========================================
  Files         354      355       +1     
  Lines       63729    63755      +26     
  Branches     9709     9709              
==========================================
- Hits        54927    54801     -126     
- Misses       6127     6272     +145     
- Partials     2675     2682       +7

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a092868...fe46b97.

@rodrigohubner
Contributor

rodrigohubner commented Apr 2, 2017

I'll try to participate in this WIP (+1).

@jasmainak
Member

jasmainak commented Apr 2, 2017

Great! Feel free to review it, pull the changes and try them on your computer.

@rodrigohubner

Two doubts:

  • To get the whole project, should I pull from Teon's repository?
  • How can I contribute commits to this same PR?
mne/realtime/base_client.py
@@ -3,13 +3,13 @@
#
# License: BSD (3-clause)
import copy
import threading

@rodrigohubner
Contributor

rodrigohubner Apr 6, 2017

I think it is better to work with multiprocessing because of known problems with multithreading in Python (the GIL, for example).

@teonbrooks
Member

teonbrooks Apr 6, 2017

If you have a better solution, I welcome it. Here we are just extracting away from our existing code and wanted to have a lightweight base class object that would work across our existing clients. I wasn't aware of the problems with multithreading.

@alexandrebarachant
Collaborator

alexandrebarachant Apr 6, 2017

I used threads for similar functionality and never had any issues. Yes, Python has problems with the GIL, but that does not mean threads should never be used. What you want to avoid is a thread that never releases the GIL (i.e. one that runs continuously). For example, LSL.pull_sample() or socket.recv will block until you receive data, but the call goes into a C extension or a shared library that effectively releases the GIL while it waits.
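The blocking-call behavior described above can be seen in a small self-contained sketch (a local socket pair stands in for a real LSL/EEG stream; `_recv_worker` and the other names are illustrative, not taken from this PR):

```python
import socket
import threading

def _recv_worker(sock, chunks):
    # socket.recv blocks inside a C call that releases the GIL while it
    # waits, so this loop does not starve the main thread.
    while True:
        data = sock.recv(4096)
        if not data:  # peer closed the connection
            break
        chunks.append(data)

# Toy stand-in for a real acquisition stream: a connected socket pair.
server_sock, client_sock = socket.socketpair()
chunks = []
worker = threading.Thread(target=_recv_worker,
                          args=(client_sock, chunks), daemon=True)
worker.start()
server_sock.sendall(b"sample-chunk")   # main thread keeps running freely
server_sock.close()                    # worker sees b"" and exits
worker.join(timeout=5)
```

While the worker sits in `recv`, the main thread is free to do plotting or processing, which is the point being made about LSL and socket clients.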

@jasmainak
Member

jasmainak Apr 6, 2017

Thanks @alexandrebarachant. I'd say let's stick to threading for now, get the API right, add an example, so people can pull and test. Then we can smooth out other details. Make it work, then make it nice :)

@jaeilepp
Contributor

jaeilepp commented Apr 6, 2017

One option would be to take over and make another PR with the changes on top of the ones made here (pull this branch, make changes, make another PR).
Another option would be to push to this branch directly, but that would require push rights from @teonbrooks.

@teonbrooks
Member

teonbrooks commented Apr 6, 2017

@rodrigohubner I can give you write privs to my branch if you would like; let me know.

For the LSLClient we are working on, I know that @aj-ptw has been working on making a compatible LSL interface with the npm module he's worked on.
Also, @alexandrebarachant has made an interface for the MUSE via LSL.
I think this should be our course of action for realtime clients: rely on a unified client and have devs make their hardware compatible with adopted standards (e.g. LSL, OSC, FieldTrip client).

@larsoner
Member

larsoner commented Apr 6, 2017

> One option would be to take over and make another PR with the changes on top of the ones made here (pull this branch, make changes, make another PR).
> Another option would be to push to this branch directly, but that would require push rights from @teonbrooks.

Keep in mind that PRs can be made from any branch to any other branch -- we just usually make them from user/branchname into mne-tools/master. But you can just as easily make a PR from rodrigohubner/rtclient-fixes into teonbrooks/rtclient. This doesn't require push access or taking over.

@teonbrooks
Member

teonbrooks commented Apr 6, 2017

> Keep in mind that PRs can be made from any branch to any other branch

Yeah, I agree, a PR to my branch would work, and would be less hassle.

@jasmainak
Member

jasmainak commented Apr 6, 2017

@teonbrooks how close are we here?

I am going to add a TODO at the top of the PR so we don't lose focus.

mne/realtime/base_client.py
from mne.io.meas import create_info
def _buffer_recv_worker(client):

@jasmainak
Member

jasmainak Apr 6, 2017

It's the same for all clients, no? Can we put it in one file and import it?

pass
def _enter_extra():
"""For system-specific loading and initializing during the enter

@jasmainak
Member

jasmainak Apr 6, 2017

system-specific -> client-specific

"""
inlet = pylsl.StreamInlet(self.client)
## add tmin and tmax to this logic

@jasmainak
Member

jasmainak Apr 6, 2017

still todo!

## add tmin and tmax to this logic
while True:
samples, timestamps = inlet.pull_chunk(max_samples=self.buffer_size)

@jasmainak
Member

jasmainak Apr 6, 2017

samples, times to conform to MNE nomenclature?
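The loop under review follows pylsl's chunked-pull pattern: pull_chunk returns a (samples, timestamps) pair, with empty lists when no data has arrived yet. A self-contained sketch of that pattern (MockInlet and receive are illustrative stand-ins; a real client would use pylsl.StreamInlet):

```python
class MockInlet:
    """Stand-in for pylsl.StreamInlet, serving two chunks then nothing."""

    def __init__(self):
        self._chunks = [([[1.0], [2.0]], [0.001, 0.002]),
                        ([[3.0]], [0.003])]

    def pull_chunk(self, max_samples=1024):
        # pylsl returns (samples, timestamps); empty lists mean no data.
        return self._chunks.pop(0) if self._chunks else ([], [])

def receive(inlet, buffer_size=1024):
    """Drain an inlet chunk by chunk, as the worker loop above does."""
    all_samples, all_times = [], []
    while True:
        samples, times = inlet.pull_chunk(max_samples=buffer_size)
        if not samples:  # a real client would block or retry here
            break
        all_samples.extend(samples)
        all_times.extend(times)
    return all_samples, all_times

samples, times = receive(MockInlet())
```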

return self
def create_info(self):

@jasmainak
Member

jasmainak Apr 6, 2017

Missing docstring

self.buffer_size = buffer_size
self.verbose = verbose
def connect(self):

@jasmainak
Member

jasmainak Apr 6, 2017

missing docstring

@jasmainak
Member

jasmainak commented Apr 6, 2017

@teonbrooks let's try to finish it while the momentum is still strong :)

@andrewjaykeller

This comment has been minimized.
@teonbrooks
Member

teonbrooks commented Apr 7, 2017

Hey guys, agreed. I was flying back to SF yesterday so I didn't see any of this. I will address these comments this weekend :) Also @aj-ptw, is it straightforward to establish an LSL stream with the npm package? Maybe we can chat tomorrow to discuss what it would take to make a simple example.

@codyworld

codyworld commented Apr 20, 2017

Hi all, I have a question. I am able to stream EEG data from OpenBCI Cyton V3 hardware into the MATLAB environment using BCILAB via LSL, and I would like to polish the incoming data in real time prior to component analysis. I know BCILAB can perform data cleaning using filters like clean_rawdata, the PREP Pipeline, ASR, Infomax, etc. Does anyone have any idea how to perform those processes online?

@jasmainak
Member

jasmainak commented Apr 20, 2017

> I would like to polish the incoming data in real time prior to component analysis. I know BCILAB can perform data cleaning using filters like clean_rawdata, the PREP Pipeline, ASR, Infomax, etc. Does anyone have any idea how to perform those processes online?

You can probably use autoreject with the fit and transform API. The fit would be run offline and the transform would be run online. It depends how bad your data is. But if you're working with MATLAB, I'm not sure how you could make it work. Maybe you can try calling Python from MATLAB like here.
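The fit-offline / transform-online split suggested here follows the scikit-learn estimator pattern. A minimal self-contained sketch of that pattern (AmplitudeReject is a hypothetical toy stand-in for autoreject, not its real API):

```python
import numpy as np

class AmplitudeReject:
    """Toy estimator: learn a peak-to-peak rejection threshold offline,
    then drop bad epochs from incoming data online."""

    def fit(self, epochs):
        # epochs: array of shape (n_epochs, n_channels, n_times)
        ptp = (epochs.max(axis=2) - epochs.min(axis=2)).max(axis=1)
        self.threshold_ = np.percentile(ptp, 90)  # learned offline, once
        return self

    def transform(self, epochs):
        ptp = (epochs.max(axis=2) - epochs.min(axis=2)).max(axis=1)
        return epochs[ptp <= self.threshold_]  # cheap, per incoming chunk

rng = np.random.default_rng(0)
offline = rng.normal(size=(100, 8, 200))     # recorded calibration data
ar = AmplitudeReject().fit(offline)          # expensive step, run offline
online_chunk = rng.normal(size=(5, 8, 200))  # simulated real-time chunk
clean = ar.transform(online_chunk)           # fast step, run online
```

The expensive threshold estimation happens once on recorded data; the per-chunk transform is a cheap comparison, which is what makes the online half feasible.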

@larsoner
Member

larsoner commented Aug 11, 2017

@teonbrooks @jasmainak what is the plan here? Any time to work on it?

@jasmainak
Member

jasmainak commented Aug 11, 2017

I'll be seeing @teonbrooks at the end of this month for the CRN coding sprint. @teonbrooks do you think we could earmark a couple of days after the sprint to work on this?

@teonbrooks
Member

teonbrooks commented Aug 17, 2017

Yeah, I think this is something we can knock out while @jasmainak is in town :)

@larsoner
Member

larsoner commented May 16, 2018

@teonbrooks this is about a year old, still planning to work on this or should we close?

@teonbrooks
Member

teonbrooks commented May 16, 2018

Let's close this for now. Maybe when I get a weekend free...

@teonbrooks teonbrooks closed this May 16, 2018
