
Normalisation of test and training data #23

Closed
plankthom opened this issue Jun 14, 2019 · 3 comments


plankthom commented Jun 14, 2019

First, many thanks for your insightful work ...

However, I have an issue with the training and test data sets: in most cases each set appears to be normalised to [-1, 1] independently of the other, and I was wondering whether this might make the trained model inaccurate.

E.g., a density plot for channel E-4:

[density plot: distplot-E-4]

Or did I miss something?
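Not speaking for the repo's own loaders, but the suspected bug is easy to probe: if a channel's train and test splits were min-max scaled independently, each split spans roughly the full [-1, 1] range on its own, whereas with a shared scaler usually only one split touches both extremes. A minimal sketch on synthetic data (the array shapes and the heuristic itself are illustrative assumptions, not the repo's code):

```python
import numpy as np

def minmax(x):
    """Min-max scale a 1-D array to [-1, 1]."""
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

def independently_scaled(train, test, tol=0.05):
    """Heuristic: if each split's telemetry column spans roughly the
    full [-1, 1] range on its own, the two splits were probably
    min-max scaled independently (with a shared scaler, usually only
    one split touches both extremes)."""
    def spans_full(x):
        return bool(abs(x.min() + 1.0) < tol and abs(x.max() - 1.0) < tol)
    return spans_full(train[:, 0]) and spans_full(test[:, 0])

# Synthetic trace with a trend, split 75/25 like a train/test pair.
t = np.linspace(0, 10, 2000)
raw = np.sin(3 * t) + t

# Independent scaling (the suspected bug): each split spans [-1, 1].
bad_train = minmax(raw[:1500])[:, None]
bad_test = minmax(raw[1500:])[:, None]

# Joint scaling (the intended behaviour): one shared min/max, so the
# train split never reaches the global maximum that lies in the test split.
lo, hi = raw.min(), raw.max()
joint = 2 * (raw - lo) / (hi - lo) - 1
good_train = joint[:1500][:, None]
good_test = joint[1500:][:, None]

print(independently_scaled(bad_train, bad_test))    # True
print(independently_scaled(good_train, good_test))  # False
```

Running this on the released channel arrays instead of the synthetic trace would flag candidates like the E-4 plot above.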

@khundman (Owner)

Thanks for the comment, I'm looking into this. This was an issue I had found before releasing the data and thought was corrected. If you have noticed other suspicious channels, it would be helpful if you could list them.

This one looks like what you have described, but there are instances where channel behavior can change abruptly due to commanding. I will follow up.

@khundman (Owner)

@plankthom Following up on this - it is an error that won't be corrected. Unfortunately I no longer have access to the raw data and therefore can't rescale. I don't think it is material to the results or methods (in a sense it actually demonstrates the robustness of the overall approach). Thanks again for the note.

@mistycheney

Is it possible to provide the list of sequences that are confirmed to have this issue, so we can exclude them from our experiments?
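Since the raw data is no longer available for rescaling, one pragmatic option is to scan the released channel files and flag candidates. A sketch, assuming one `<channel>.npy` per split with telemetry in column 0 (the directory layout and the E-4/E-5 channel names in the demo are illustrative); this flags candidates for exclusion, it does not confirm the error:

```python
import os
import tempfile
import numpy as np

def minmax(x):
    """Min-max scale a 1-D array to [-1, 1]."""
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

def spans_full_range(x, tol=0.05):
    """True if x reaches both -1 and +1 within tol."""
    return bool(abs(x.min() + 1.0) < tol and abs(x.max() - 1.0) < tol)

def suspicious_channels(train_dir, test_dir, tol=0.05):
    """List channels whose train AND test telemetry each span ~[-1, 1]:
    a hint that the two splits were scaled independently."""
    flagged = []
    for fname in sorted(os.listdir(train_dir)):
        if not fname.endswith(".npy"):
            continue
        train = np.load(os.path.join(train_dir, fname))
        test = np.load(os.path.join(test_dir, fname))
        if spans_full_range(train[:, 0], tol) and spans_full_range(test[:, 0], tol):
            flagged.append(fname[:-4])
    return flagged

# Demo on synthetic files: "E-4" scaled independently, "E-5" jointly.
with tempfile.TemporaryDirectory() as root:
    tr, te = os.path.join(root, "train"), os.path.join(root, "test")
    os.makedirs(tr)
    os.makedirs(te)
    t = np.linspace(0, 10, 1000)
    raw = np.sin(3 * t) + t                      # trending trace
    np.save(os.path.join(tr, "E-4.npy"), minmax(raw[:700])[:, None])
    np.save(os.path.join(te, "E-4.npy"), minmax(raw[700:])[:, None])
    lo, hi = raw.min(), raw.max()
    joint = 2 * (raw - lo) / (hi - lo) - 1       # shared scaler
    np.save(os.path.join(tr, "E-5.npy"), joint[:700][:, None])
    np.save(os.path.join(te, "E-5.npy"), joint[700:][:, None])
    print(suspicious_channels(tr, te))           # ['E-4']
```

Pointing `train_dir`/`test_dir` at the released data directories would produce a candidate list to exclude or treat with caution.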
