timestamp problems versus other LSL streams (still in v.1.0.8.0) #4

Closed
scottwatter opened this Issue Sep 14, 2018 · 17 comments

scottwatter commented Sep 14, 2018

Hi,

I have been trying to use BlueMuse to stream EEG data, collecting Muse + eye tracker (EyeTribe) + stimulus events (Presentation, neurobs.com), all via LSL and recorded with LabRecorder.

The Muse timestamps from BlueMuse have a large offset compared with timestamps from other streams. The timestamp problem reported elsewhere (see link) with BlueMuse and PsychoPy appears to persist when recording with LabRecorder via standard LSL methods:
NeuroTechX/eeg-notebooks#10

It looks like BlueMuse is forcing its own timestamps, which are not adjusted on import with load_xdf in Matlab. (Similar kinds of problems seem to exist with how OpenVibe handles sending LSL timestamps.)

Integration with the broader LSL framework seems like a vital aspect of this kind of software - is there a possibility of a fix for this? (Or even a switch option to select UTC versus local_clock() times to send, or similar?) You posted a note to the issue above saying v.1.0.8.0 had fixed this kind of problem with PsychoPy, and that issue was closed, but something similar seems to persist in LSL more generally.

I have replicated this issue:
-- with various combinations of LSL streams across a local network, and with all streams on a single local machine
-- with different versions of LabRecorder
-- with different versions of load_xdf.m (command line into Matlab, and also newer versions included as part of EEGLAB and related import toolboxes)

ALL of these have the same issue - the Muse timestamps have a very large positive offset compared to the timestamps of other streams. All the other streams agree with each other, and are handled properly even when coming from multiple different machines, etc.

Here is example info from the XDF headers for each stream:

streams{1,1}.info

ans =

struct with fields:

           name: 'Muse-97C2 (00:55:da:b0:97:c2)'
           type: 'EEG'
  channel_count: '5'
  nominal_srate: '256'
 channel_format: 'float32'
      source_id: 'LSLBridge'
        version: '1.1000000000000001'
     created_at: '71133.34790008'
            uid: '54ee4dfb-7316-47f6-baed-6c5606f87cc8'
     session_id: 'default'
       hostname: 'DESKTOP-QEFLQDP'
      v4address: [1×1 struct]
    v4data_port: '16572'
 v4service_port: '16572'
      v6address: [1×1 struct]
    v6data_port: '16572'
 v6service_port: '16572'
           desc: [1×1 struct]
  clock_offsets: [1×1 struct]
first_timestamp: '1536933128.776125'
 last_timestamp: '1536933162.888094'
   sample_count: '8736'
effective_srate: 256.0298

streams{1,2}.info

ans =

struct with fields:

           name: 'EyeTribe'
           type: 'Gaze'
  channel_count: '8'
  nominal_srate: '30'
 channel_format: 'float32'
      source_id: 'EyeTribe'
        version: '1.1000000000000001'
     created_at: '71154.542931118995'
            uid: 'd747bbe4-bf73-4baf-8ce3-f91211735779'
     session_id: 'default'
       hostname: 'DESKTOP-QEFLQDP'
      v4address: [1×1 struct]
    v4data_port: '16573'
 v4service_port: '16573'
      v6address: [1×1 struct]
    v6data_port: '16573'
 v6service_port: '16573'
           desc: [1×1 struct]
  clock_offsets: [1×1 struct]
first_timestamp: '71197.89499296001'
 last_timestamp: '71231.92108889'
   sample_count: '961'
effective_srate: 28.1936

streams{1,3}.info

ans =

struct with fields:

           name: 'Presentation'
           type: 'Markers'
  channel_count: '1'
  nominal_srate: '0'
 channel_format: 'string'
      source_id: 'Presentation on COGSCIADMIN-2'
        version: '1.1000000000000001'
     created_at: '23472.583887463999'
            uid: '7401e5d2-5695-4758-9306-8e39303010ba'
     session_id: 'default'
       hostname: 'cogsciadmin-2'
      v4address: [1×1 struct]
    v4data_port: '16572'
 v4service_port: '16572'
      v6address: [1×1 struct]
    v6data_port: '16572'
 v6service_port: '16572'
           desc: [1×1 struct]
  clock_offsets: [1×1 struct]
first_timestamp: '23511.823824388'
 last_timestamp: '23529.286555291'
   sample_count: '9'

For the info above, the Muse, EyeTribe eye tracker, and LabRecorder are running on the same machine, and Presentation is running on a second machine on the local network. The timestamp data imported into Matlab via load_xdf looks like this:

MUSE
1536935699.22454
1536935699.22844
1536935699.23235
1536935699.23626
etc… (256Hz)

EyeTribe
71197.8405030188
71197.8760090432
71197.9115150675
71197.9470210919
etc… (30Hz)

Presentation
71211.5444088639
71212.2896862197
71220.5768176043
71221.065530451
etc… (per event)

The EyeTribe and Presentation data are co-registered on the same timescale (here the first events for the eye tracker data begin a few seconds before the first marker event from Presentation) - these events are properly aligned. The Muse timestamps are internally consistent (sampling rate is fine etc.), but the timestamps are not sensible with respect to other events recorded in LSL.
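
As an illustration of the comparison above, the same check can be scripted with pyxdf (the Python counterpart of load_xdf); "recording.xdf" is a placeholder path, and turning clock synchronization off exposes each stream's raw time base:

    import pyxdf

    # Load with synchronization off to inspect each stream's raw time base.
    streams, header = pyxdf.load_xdf("recording.xdf", synchronize_clocks=False)
    for s in streams:
        name = s["info"]["name"][0]
        ts = s["time_stamps"]
        print(f"{name}: first={ts[0]:.6f} last={ts[-1]:.6f}")
    # The Muse stream prints values near 1.5e9 (Unix epoch) while the others
    # print values near 7e4 (seconds since boot) - the offset described above.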

Apologies for the very long description - hopefully this is helpful in locating the issue. My best guess is that BlueMuse is using the LSL push function to push a timestamp value that is not the standard local_clock() time expected by the LSL setup, and/or that this method forces those timestamps to be used rather than letting LSL supply its default timestamps.

EDIT: Long story short, it looks like BlueMuse is still insisting on UTC Unix time for its timestamps, which overrides the default LSL timestamps that can be coordinated with other streams.
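
For readers following along, the difference between the two behaviours can be sketched with pylsl (a hypothetical demo outlet, not BlueMuse's actual code):

    import time
    from pylsl import StreamInfo, StreamOutlet

    info = StreamInfo("Demo", "EEG", 5, 256, "float32", "demo-id")
    outlet = StreamOutlet(info)
    sample = [0.0] * 5

    # Standard LSL behaviour: omit the timestamp and local_clock() is stamped
    # automatically, so LabRecorder's clock offsets can align streams.
    outlet.push_sample(sample)

    # What the issue describes: pushing an explicit Unix wall-clock timestamp,
    # which puts the stream on a different time base than local_clock().
    outlet.push_sample(sample, time.time())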

Any fixes to this would be fantastic - Muse has been broken for general LSL use since it was introduced four years ago, and all the workarounds so far still don't let it work properly with general LSL setups. Fixing this standard LSL issue would be a huge help to many of us I'm sure.

many thanks - Scott

kowalej commented Sep 14, 2018

I can look at adding the lsl local_clock option as a toggleable feature. I had opted to use a high-precision, high-accuracy timing API on modern Windows to generate a UTC (epoch, UTC-0) timestamp based on when packets come in. For now, you should be able to convert this to the lsl equivalent - see the passage below, copied from the LSL docs.

What clock does LSL use? / How do I relate LSL's local_clock() to my wall clock? LSL's local_clock() function measures the time since the local machine was started, in seconds (other system clocks usually do not have sufficient resolution for use with LSL). The correct way to map its output to the time measured by your preferred system clock is to first determine the constant offset between the two clocks, by reading them out at the same time, and then to add that offset to the result of local_clock() whenever it is needed. Also keep in mind that the time-stamps that are returned by inlet.pull_sample() will generally be local to the sender's machine -- only after you add the time offset returned by inlet.time_correction() to them you have them in your local domain.
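
A minimal sketch of that conversion in Python (assuming pylsl is installed; the back-to-back clock reads carry a little jitter of their own):

    import time
    from pylsl import local_clock

    # Read both clocks back-to-back to estimate the constant offset.
    offset = time.time() - local_clock()  # Unix seconds minus machine uptime

    def unix_to_lsl(ts_unix):
        """Map a BlueMuse UTC timestamp into the local_clock() domain."""
        return ts_unix - offset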

scottwatter commented Sep 14, 2018

Hi Jason - thanks for your quick reply.

A toggle option to use the standard LSL method would be extremely useful. Using UTC time like this is very accurate, I agree, but breaks the integration with the standard LSL clock alignment method. You can manually adjust timestamps, but currently collecting data together with any other LSL stream breaks the main feature that LSL is trying to solve.

I suspect a lot of LSL + muse users would appreciate this feature (which has been absent or broken in various other Muse solutions too), to finally allow Muse to be used easily with a range of other LSL hardware.

Thanks again for considering this - Scott

kowalej commented Sep 14, 2018

Makes sense - I'll try to implement it this weekend.

scottwatter commented Sep 17, 2018

Many thanks - my lab (and I am sure many others) will be thrilled to have a reliable LSL method for Muse that works directly with the general LSL framework. Thank you!

kowalej commented Sep 17, 2018

So I basically have this working now with multiple timestamp formats, but I hit a little snag calling the lsl local_clock from the 64-bit compilation. It seems that liblsl64.dll won't load correctly, so either there's an issue with the DLL I'm using or it doesn't play well with the UWP app - which is weird, since liblsl32.dll works fine (but can't be used in the 64-bit UWP build). I'm probably going to try accessing the local clock functionality directly with a C++ class library; if that doesn't work, I'll let the LSLBridge process make the local_clock call, though that may result in "late" timestamps, since the call wouldn't happen exactly when the packets come in over Bluetooth.
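
For reference, the clock in question is exported by the liblsl C API as lsl_local_clock(); a quick way to sanity-check whether a given DLL loads and returns sensible values (a diagnostic sketch, not the UWP code; the DLL name is the one from this thread) is:

    import ctypes

    # Try loading the 64-bit DLL mentioned above; path/name are placeholders.
    lib = ctypes.CDLL("liblsl64.dll")
    lib.lsl_local_clock.restype = ctypes.c_double
    print(lib.lsl_local_clock())  # seconds since the machine started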

scottwatter commented Sep 17, 2018

Weird, but that sounds like a fair solution. Absolute timing is a bit arbitrary here anyway - there is latency in the Bluetooth stack between scalp voltage and data availability regardless. Reliable, low-variability latency is key, and LSL will do a good job regressing this out of the data even if there is some jitter. I can't imagine it would be enough delay to matter at all; at worst, a known constant can be included to adjust offline. So if it ends up coming via LSLBridge, I suspect that would work just fine. Thank you again!

kowalej commented Sep 18, 2018

I built a new version of the app and would appreciate if you could test it out - it is version 1.0.9.0 in the new TestDist folder.

Some notes:

  • Couldn't get the lsl local_clock working directly from UWP so I went ahead and let the bridge generate the timestamps like I said. Hopefully the data is solid - please let me know.

  • I did do a bit of research on the local_clock source - it comes from the Boost C++ library (Boost.Chrono) and seems to basically use QueryPerformanceCounter - which I could access from UWP to generate my own "LSL" timestamp if needed (a rough sketch of that idea follows this list).

  • Timestamp format defaults to "BlueMuse Unix..." - you can change it from the settings menu which you will clearly see in the interface now. You can also change it by using the command line. You will also notice a "Secondary Timestamp Format" - which gets sent as a data channel if not set to "None". So you can potentially set your "Primary Timestamp" as local_clock and also have reference to the Unix timestamps for comparison.
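
The idea in the second bullet - deriving an LSL-compatible timestamp from any steady clock plus a one-time offset - can be sketched like this (a Python stand-in for the QueryPerformanceCounter approach, not the app's code):

    import time
    from pylsl import local_clock

    # One-time calibration: offset between a steady clock and lsl local_clock.
    offset = local_clock() - time.monotonic()

    def lsl_style_timestamp():
        # The steady clock plus the fixed offset tracks local_clock() closely.
        return time.monotonic() + offset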

To use the command line for changing timestamp formats:

Primary
 start bluemuse://setting?key=primary_timestamp_format!value=bluemuse OR lsl OR none

Secondary
 start bluemuse://setting?key=secondary_timestamp_format!value=bluemuse OR lsl OR none

Timestamp settings are remembered by the app, so you should only have to set them once.

scottwatter commented Sep 18, 2018

I've spent the morning testing v.1.0.9.0, and things look generally great.

The local_clock times are as expected, and work well with LSL time synchronization / import functions to raw XDF format structs in Matlab, as well as EEGLAB importing of Muse + timestamp data.

PROBLEM: when the secondary timestamps are sent as an extra data channel, it looks like they are only being sent in single precision (roughly 8 significant digits) -- this BREAKS the usefulness of the secondary timestamp data, given the large positive offsets and the need for at least millisecond resolution. This happens with both UTC and local_clock set as secondary - so you end up with a single value for UTC time in the whole dataset and, depending on the local_clock time, get only hundredths or tenths of seconds as the smallest decimal unit. This isn't a dealbreaker for the LSL solution (the LSL timestamps work great!), but the Secondary Timestamp Format data are broken as is.
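
The truncation is exactly what float32 predicts at Unix-epoch magnitudes; a quick numpy check (using a timestamp like the ones above):

    import numpy as np

    ts = 1536935699.224125  # Unix-epoch timestamp with millisecond detail
    print(float(np.float32(ts)))              # 1536935680.0: snapped to a grid
    print(float(np.spacing(np.float32(ts))))  # 128.0: float32 ULP at ~1.5e9
    # So a ~34 s recording collapses to a single repeated value, as observed.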

The timing jitter of the LSLBridge (from the unadjusted, NOT de-jittered raw data) looks to be about +/- 9 ms per 12-sample data block (blocks of 12 samples regularly alternate 9 ms ahead of and then 9 ms behind the intervening calculated sample rate). The variability of this pattern is +/- 1 ms, so its regularity should be easily smoothed out by the standard LSL import process (and seems to be). These kinds of delays are not unique to the LSL bridge, I imagine - if there is even 10 ms of delay between your Bluetooth sampling and the LSL bridge sending the data out, that is not going to be a big deal for anyone (other than the rare cases wanting to measure some absolute neurological timing). A low-variability offset is fine, and from basic testing it looks like this will do fine here.
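
The smoothing in question is essentially a linear fit of timestamp against sample index (load_xdf's actual de-jittering is more robust, but the idea can be sketched with synthetic +/- 9 ms block jitter like that described above):

    import numpy as np

    n = 256 * 4                          # 4 s of 256 Hz samples
    ideal = np.arange(n) / 256.0         # ideal sample grid
    # Alternate 12-sample blocks 9 ms early/late, like the pattern described.
    jitter = np.where((np.arange(n) // 12) % 2 == 0, 0.009, -0.009)
    raw_ts = ideal + jitter

    coef = np.polyfit(np.arange(n), raw_ts, 1)   # slope ~ 1/256, intercept ~ 0
    dejittered = np.polyval(coef, np.arange(n))
    print(coef[0] * 256.0)               # ~1.0: effective rate is recovered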

Related, and not at all vital, but a small feature request for those wanting to know what this delay might be in their system: if you logged UTC and the approximation of local_clock from your Boost C++ source before the LSLBridge local_clock call, it might give an easy way to directly measure the delay and variability of all this. I don't mean to cause extra work, though - given that LSL will de-jitter and align everything, so long as these delays are small and regular it will work fine for LSL purposes as is. (But note the single- vs double-precision error in the secondary time data, described above!)

Many thanks again for this - we will keep testing, and can retest the fix for the secondary time data if you'd like.

kowalej commented Sep 18, 2018

Thanks for the thorough testing. I will probably opt to add another timestamp format that uses the same underlying timer and format as lsl local_clock, called from the UWP app. So there will be an LSL native timestamp, which you are using now, and an LSL BlueMuse version. The user can choose which one they want; theoretically they should both give the same result, with the BlueMuse version hopefully producing less jitter.

As for the secondary timestamp issue, do you think it has to do with the fact that I set the "unit" metadata element to seconds for that channel? I'm not sure why else it would have cut off data... you are using LabRecorder to pick up the stream, yes?

scottwatter commented Sep 18, 2018

The alternative local timestamp you suggest sounds like a great idea. I would be happy to use the LSL native one as is, and so this is likely to only improve things. That all sounds great.

I don't know re the "unit" metadata. Yes, we are using LabRecorder on the same local machine as BlueMuse - a very standard LSL setup. It is possible that you are properly sending double-precision data for the secondary timestamp and it is being written to the XDF file just fine, but the import process with load_xdf into Matlab is treating that channel as single precision and truncating it on import. (I didn't check this today; I didn't see anything in the load_xdf documentation that lets you alter this, and I am not sure how you'd check other than parsing the raw hex data in the XDF file, if you don't trust the load_xdf function.)

I don't have any good guesses here, other than to note it's 8 vs 16 characters of precision.
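
One low-effort cross-check, if the Python importer is available: pyxdf reports the dtype it decoded per stream, which separates outlet-side truncation from importer-side truncation ("recording.xdf" is a placeholder path):

    import pyxdf

    streams, _ = pyxdf.load_xdf("recording.xdf")
    for s in streams:
        print(s["info"]["name"][0],
              s["info"]["channel_format"][0],  # what the outlet declared
              s["time_series"].dtype)          # what was actually decoded
    # float32 here means the precision was lost at the outlet, not on import.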

scottwatter commented Sep 18, 2018

WAIT! - yes, I see it.

The channel_format header for the whole EEG set of data is float32 (so single precision). The secondary timestamp data are being written as this, given the channel_format definition is common to all channels (as far as I know).

Maybe you need to record all EEG in float64 / double precision to get the needed precision for timestamps recorded in an EEG channel like this. (I don't know if LSL lets you specify a different precision for a single channel within a stream?)

kowalej commented Sep 24, 2018

I have cleaned up version 1.0.9.0 - you can install it from the /dist folder. I have implemented the LSL timestamp directly from BlueMuse but you can also choose to have it generated from the bridge - which should produce the same values. Also, I figured out that the stream outlet was indeed setting the data type to float32 instead of double, so I have fixed that too - you should now see the secondary timestamp values with the correct precision.

Let me know if you have any issues with 1.0.9.0.

scottwatter commented Sep 26, 2018

Hi again,

We have tested the "official" new 1.0.9.0 release. The timestamps look good.

HOWEVER (and I'm sorry, I don't mean to be making more work here...), I think changing the general datatype for the Muse data to double64 (instead of float32) is going to cause a lot of trouble for general LSL processing.

A lot of LSL code is expecting almost all streamed data (EEG data, eye tracking, motion capture, other physiology, etc) to be float32. This isn't ideal, and in principle LSL should deal with whatever you give it - but I think some typical things people use with LSL will break with this change.

Examples:

  • LabRecorder is fine, and will properly record whatever you give it (no problem there)
  • StreamViewer (compiled version for easily visualizing data streams without Matlab) breaks with double data (fine with float32)
  • OpenVibe can only deal with LSL data in float32 format
  • I think parts of MobiLab have been unhappy with non-float32 data, but we have not tested this with current releases
  • and I suspect more

The change of the whole Muse datatype to double (away from float32 / single precision) solves the secondary timestamp problem. For people not wanting any other integration with LSL, it probably doesn't matter. I suspect v.1.0.9.0 will be much LESS usable for general LSL users because of this change, though (and those users generally won't care about having a secondary UTC time code anyway).

For us testing right now, we can now record with good LSL timestamps, but our standard quick visualization system is broken given the data type change.

SUGGESTION: This may be a bad/suboptimal suggestion, but one way to easily include secondary timestamp data as an extra EEG channel would be to split the double-precision value over two channels and keep everything in float32. The secondary timestamps would be very easy to recover if you wanted/needed them, and existing LSL workflows would not be affected. (I realize this seems an unappealing solution, but it would be easy to do and would work pretty well in practice, I think.) I think it would be absolutely better than changing the EEG datatype to double...

ALSO (and I'm not sure how critical this is): the sampling rate reported by BlueMuse is sometimes a multiple of 256 - 1024 about half the time, sometimes 512, and the expected 256 the other half of the time. I think it happens mostly when UTC time is selected as one or the other option, but we haven't gone through all the variations to double-check.

Thanks again for your help with all this. Right now, we will continue to use the TEST version of v.1.0.9.0, with just the LSL bridge timestamp option selected, as this will give us LSL timestamps and float32 EEG data.

I hope you would consider revising the EEG data type back to float32 - I think it will help most people using LSL methods.

many thanks again for all this,
Scott

kowalej commented Sep 26, 2018

Wow, I had no idea all these LSL-related applications had such issues with doubles... I only ever use LSL with custom scripts, so I never hit this, and I apologize for making a potentially breaking change.

I don't want to compromise at this point on the features I have added, so I will add another setting to the app which allows the user to choose the channel data type. The default will be float32. If the user has selected float32 and also wants a secondary timestamp, it will be split into 2 columns as suggested. If the user selects double as the channel data format, they will get the timestamp in 1 column, as is the case right now.

scottwatter commented Sep 26, 2018

That sounds like a great solution. And I agree - it's surprising that regularly used parts of the broader LSL ecosystem have some of these issues baked in (though I suppose it's rarely discovered given how common using single precision for most datatypes has become there). Anyway - thanks again for this.

kowalej commented Sep 27, 2018

I have built version 1.0.10 - TEST, in the test dist folder. Please give it a shot - you can set the data precision via the settings menu or the command line:

 start bluemuse://setting?key=channel_data_type!value=float32
 start bluemuse://setting?key=channel_data_type!value=double64

With float32, the timestamp is split into two columns (Secondary Timestamp (Base), Secondary Timestamp (Remainder)) - adding these two columns recovers the full timestamp. Let me know if you have any issues.
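
The base/remainder recovery can be sketched as follows (the standard two-float split; an assumption about how the columns are computed, not confirmed BlueMuse code):

    import numpy as np

    ts = 1536935699.224125                    # full double-precision timestamp

    base = np.float32(ts)                     # coarse part (~128 s steps here)
    remainder = np.float32(ts - np.float64(base))  # what float32 dropped

    recovered = np.float64(base) + np.float64(remainder)
    print(abs(recovered - ts))                # ~1e-6 s: ms precision survives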

Also, regarding your comment from earlier about fluctuating data rates: the only time I have seen something like this is when I pause the app during a debugging session and then resume shortly after - since there is some queued-up data to be sent over to the bridge, the rate goes high for a couple of seconds and then stabilizes. What I presume is happening on your machine is that the app is getting slowed down at times, maybe due to low memory or CPU on your system, or you are getting a sub-optimal Bluetooth connection between the Muse and your card.

scottwatter commented Sep 27, 2018

Looking at v.1.0.10.0_TEST, everything looks great. Compatibility with existing LSL tools works as expected, timestamps are in order, we can collect multi-stream data as expected, etc. The additional channels with split timestamps work fine. Raw, unadjusted (non-LSL-aligned) timestamps look quite good for both LSL methods, with the expected offsets from Bluetooth packet-time jitter every frame of 12 samples, but these differences are themselves pretty regular. The raw BlueMuse LSL timing (unadjusted) looks a little lower-variance than the LSL bridge's, but they both look quite good. I think this all looks fantastic!

Thank you SO much for all your work and help with this. I am sure the LSL+muse community will find this incredibly helpful. It's very much appreciated - thank you!

kowalej closed this Oct 3, 2018
