
Network Events receive TTL event #277

Merged
aacuevas merged 9 commits into open-ephys:development from tne-lab:network-events-ttl-event
Jan 22, 2019

Conversation

@markschatza
Contributor

Hello! I added the ability to send a TTL event through the Network Events plugin. I believe this could be useful for a number of use cases. Let me know if you have any comments!

Overview of the implementation.

Added a TTL event channel

 EventChannel* TTLchan = new EventChannel(EventChannel::TTL, 8, 1, timestamp, this);
 TTLchan->setName("Network Events output");
 TTLchan->setDescription("Triggers whenever \"TTL\" is received on the port.");
 TTLchan->setIdentifier("external.network.ttl");
 eventChannelArray.add(TTLchan);
 TTLChannel = TTLchan;

Created a TTL queue

Next I created a queue of type StringTTL. StringTTL is a new struct that holds a boolean for on/off state and the channel number. When messages are received, they are added to the queue and handled nearly identically to how text events are currently handled.

    struct StringTTL
    {
        bool onOff;
        int eventChannel;
    };
    std::queue<StringTTL> TTLQueue;
    CriticalSection TTLqueueLock;

One thing to notice is that we don't add TTL events to the queue unless acquisition is on so we don't backlog TTL events. This may be something to look into for text events as well.
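The queue-plus-lock pattern above can be sketched in standard C++, using std::mutex in place of JUCE's CriticalSection. TTLQueueSketch and its methods are illustrative names I'm introducing here, not the plugin's actual code; the guard mirrors the behavior of only queueing TTLs while acquisition is running.

```cpp
#include <mutex>
#include <queue>

struct StringTTL
{
    bool onOff;
    int eventChannel;
};

// Sketch only: std::mutex stands in for JUCE's CriticalSection.
class TTLQueueSketch
{
public:
    void push(const StringTTL& msg, bool acquisitionActive)
    {
        if (!acquisitionActive)
            return; // drop TTLs while acquisition is off so they don't backlog
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(msg);
    }

    // Returns false if the queue is empty, otherwise pops into `out`.
    bool pop(StringTTL& out)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        out = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::queue<StringTTL> queue_;
    std::mutex mutex_;
};
```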

Look for "TTL" in handleSpecialMessages()

Currently I have the plugin handle TTL events similar to how startRecord is handled with optional arguments.

The usage is a string in the following format: "TTL Channel=1 on=1". This is parsed into a StringTTL and pushed to the queue.
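A minimal sketch of parsing that message format, using only the standard library. parseTTLMessage is a hypothetical helper, not the plugin's actual parsing code, and it stores the channel value exactly as given in the message (whether the plugin treats "Channel=1" as 1-based is not shown here).

```cpp
#include <sstream>
#include <string>

struct StringTTL
{
    bool onOff;
    int eventChannel;
};

// Hypothetical parser for messages of the form "TTL Channel=1 on=1".
// Returns false if the message isn't a TTL command or lacks a field.
bool parseTTLMessage(const std::string& msg, StringTTL& out)
{
    std::istringstream stream(msg);
    std::string token;
    if (!(stream >> token) || token != "TTL")
        return false;

    bool haveChannel = false, haveOnOff = false;
    while (stream >> token)
    {
        const auto eq = token.find('=');
        if (eq == std::string::npos)
            continue; // ignore malformed tokens
        const std::string key = token.substr(0, eq);
        const int value = std::stoi(token.substr(eq + 1));
        if (key == "Channel") { out.eventChannel = value; haveChannel = true; }
        else if (key == "on")  { out.onOff = (value != 0); haveOnOff = true; }
    }
    return haveChannel && haveOnOff;
}
```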

Trigger Event function

After the event is popped from the queue it is added with the following function.

void NetworkEvents::triggerTTLEvent(StringTTL TTLmsg, juce::int64 timestamp)
{
    juce::uint8 ttlData = TTLmsg.onOff << TTLmsg.eventChannel;
    TTLEventPtr event = TTLEvent::createTTLEvent(TTLChannel, timestamp, &ttlData, sizeof(juce::uint8), TTLmsg.eventChannel);
    addEvent(TTLChannel, event, 0);
}
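The data word in triggerTTLEvent packs the on/off flag at the bit position of the event channel. That packing can be shown in isolation; encodeTTLWord is an illustrative helper, not part of the plugin.

```cpp
#include <cstdint>

// Mirrors the expression `TTLmsg.onOff << TTLmsg.eventChannel`:
// the on/off flag lands at the bit position of the event channel.
uint8_t encodeTTLWord(bool onOff, int eventChannel)
{
    return static_cast<uint8_t>(static_cast<unsigned>(onOff) << eventChannel);
}
```

For example, turning channel 3 on yields the word 0b00001000.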

Known Issues

When TTL events are sent this way they don't show up in the LFP Viewer. This is an issue bigger than my changes; read below for details, but we're unsure whether it's a large enough problem to need addressing here.

Here is the reason (and potential fix) according to @ethanbb

So the LFP viewer uses the number of samples on the processor that created the event channel to know how many samples to write to the buffer that controls the display of events on the screen. If a source node generates both TTL events and continuous data, has 100 samples of continuous data in a given buffer, and a TTL event on channel 0 turns on at sample 50, for example, then it will write 50 zeros and then 50 ones to the event buffer, and display it accordingly.

Filters like the Crossing Detector that receive continuous data and add TTL events to it can get the number of available timepoints using getNumSamples, add events with sample numbers in that range, and report to the LFP viewer which channel the events correspond to by adding the metadata with id source.channel.identifier.full (which the Crossing Detector does). This takes the processor, subprocessor, and channel id of the source data.

The problem is, since Network Events is a source, it doesn't have any way of getting information about the processor that generates the timestamps it's using, so it can't add that metadata. If we could get the processor/subprocessor id of the timestamp source (the one CoreServices::getGlobalTimestamp uses, if any), then it could. So maybe a new function can be added to CoreServices to do that.
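Assuming the buffer-filling behavior described above, the display buffer for a single TTL channel could be sketched as follows. eventFillPattern is a hypothetical illustration of the described pattern, not the LFP Viewer's actual code.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the described fill pattern: for a buffer of numSamples samples
// with a TTL turning on at eventSample, the display buffer holds zeros up
// to the event and ones from the event onward.
std::vector<uint8_t> eventFillPattern(int numSamples, int eventSample)
{
    std::vector<uint8_t> buf(numSamples, 0);
    std::fill(buf.begin() + eventSample, buf.end(), 1);
    return buf;
}
```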

@aacuevas
Collaborator

This seems useful! We'll take a good look at it, but seems good at a first glance.

Regarding the proposed solution for the viewer issue, that would work as a temporary workaround, but I believe we should try to make drawing on the viewer more buffer-agnostic somehow, to account for situations where, for example, the global timestamp source isn't plotted in the viewer but some TTLs are.

    void NetworkEvents::createEventChannels()
    {
        EventChannel* chan = new EventChannel(EventChannel::TEXT, 1, MAX_MESSAGE_LENGTH, CoreServices::getGlobalSampleRate(), this);
        juce::int64 timestamp = CoreServices::getGlobalSampleRate();
Collaborator
This is not a timestamp but a sample rate. The name is confusing but, more importantly, it should be a float, not an int64.

Contributor Author

Oh yep, misread the function. Fixed!

@ethanbb
Contributor

ethanbb commented Dec 22, 2018

@aacuevas Agreed about the LFP Viewer. It should know how many samples are in the buffer it's currently drawing when it calls checkForEvents. Then if an event channel's source doesn't produce any samples, it can just use that sample count as a default and line the events up to that. Although I'm not sure if or how we would want to show the events at all if there is a source but it's from a subprocessor with a different sample rate, since you've been saying you want to move away from mixing different subprocessors.

Actually this gets into a flaw with the LFP viewer currently, which is that, when deciding which channels to draw, it considers all channels with the same subprocessor index as coming from the same subprocessor, when in fact they could be from completely different sources. I believe it can crash if you have two sources merged for this reason; it doesn't let you pick one or the other to display, since they are both "subprocessor 0."

This should all probably be discussed in a separate issue, though!
