Python updates #53

Merged
merged 30 commits into from
Dec 23, 2012
Conversation

qdot
Member

@qdot qdot commented Dec 23, 2012

No description provided.

William Schumacher and others added 30 commits December 10, 2012 19:32
This reverts commit 59b14aa.
…evice,

also thought about how multiple adapters might need to be implemented
Gevent, PyCrypto and realpath are now required for the Python library.
realpath is used to locate the serial number automatically on Linux.
Crypto functions are now handled in a separate greenlet and use the PyCrypto AES.
Output functions are now handled in a separate greenlet; output can also be disabled.
Output now displays all sensor readings, packets received and packets decrypted.
On a Raspberry Pi I can get ~130 packets per sec, decrypted and stored. I'm not sure if this is good or not.
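For illustration, a minimal sketch of the decrypt-in-a-greenlet pattern this commit describes; this is not the committed code, and the 16-byte key is a placeholder (emokit derives the real AES key from the dongle serial number):

```python
# Sketch only: decrypt HID reports in their own greenlet.
# The key below is a placeholder; the real key comes from the
# dongle serial number.
import gevent
from gevent.queue import Queue
from Crypto.Cipher import AES  # PyCrypto

raw_queue = Queue()     # encrypted 32-byte reports from the HID reader
packet_queue = Queue()  # decrypted packets for consumers

def decryptor(key):
    cipher = AES.new(key, AES.MODE_ECB)
    while True:
        data = raw_queue.get()  # blocks this greenlet only
        # Each report is two 16-byte AES blocks, decrypted independently.
        packet_queue.put(cipher.decrypt(data[:16]) + cipher.decrypt(data[16:]))

gevent.spawn(decryptor, b'0123456789abcdef')
```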

Removed "extra" code, let me know if I broke something.
…working on it.

Also added support for unknown quality masks; several bit masks are still
missing and need to be added. Will work on this soon as well.
Removed TP7 and TP8, they do not report values.
It also updates at 128 frames per sec as it should.
…06, Y should be 105.

Fixed up the render.py example. It is real-time on my computer. I did disable the quality display,
as it was broken and I haven't fixed it yet. You could probably write this same
application just using the current sensor data rather than storing the packets and popping
them.
Put the example in a code box.
Added an example.py with the example from the README.
Cleaned up emotiv.py, fixed the running-as-__main__ setup. Turned off output by default.
… slightly modified.

read.py is being deleted as a result.
… user

has mostly because I don't have access to the dev headset.
qdot added a commit that referenced this pull request Dec 23, 2012
@qdot qdot merged commit ccbf795 into openyou:master Dec 23, 2012
@ozancaglayan

Hi,

I'm also writing a Python library for accessing Emotiv headsets. It is based on the reverse-engineered protocol emokit provides. It doesn't use hidapi but uses pyusb 1.0 to communicate with the device directly. It can autodetect the dongles based on some heuristics, without needing an explicit VID/PID. It is pretty immature but works. I also tested it on Raspberry Pi ARM and was able to plot gyro X,Y with matplotlib.
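A rough sketch of that kind of VID/PID-free scan with pyusb 1.0; the actual heuristics in EmotivBCI may differ, and the product-string match below is just a placeholder:

```python
# Hypothetical dongle autodetection without a hard-coded VID/PID.
import usb.core

def find_dongles():
    matches = []
    for dev in usb.core.find(find_all=True):
        try:
            # Placeholder heuristic: match on the product string
            # instead of an explicit vendor/product ID pair.
            if dev.product and 'Emotiv' in dev.product:
                matches.append(dev)
        except (usb.core.USBError, ValueError):
            continue  # string descriptors unreadable (permissions, langids)
    return matches
```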

I just wanted to let you know about it, here is the project:

https://github.com/ozancaglayan/EmotivBCI

Thanks

@ghost

ghost commented Dec 26, 2012

Very nice, I may use some of that code in this repo after testing it, if you don't mind.


@qdot
Member Author

qdot commented Dec 26, 2012

Cool! I'll try to add a link to the README soon.

@qdot
Member Author

qdot commented Dec 26, 2012

Er, link to your project /in/ the emokit README, I mean. :)

@ozancaglayan

Hi,

You can always use the code in my repo. Currently there's no multithreading or cooperative multithreading like you implemented with gevent, but I will try every little piece of optimization, as the code will run on the Raspberry Pi, which is quite slow.

@ghost

ghost commented Dec 27, 2012

Yeah, my code runs pretty well on the RPi. It is able to keep up with 128 frames/sec. I believe I just figured out that the last byte is the FFT data as well. Pretty much the same thing as the gyros: value = data[31] - 104. I believe the counter represents the frequency repeating, or counts up one way and down the other. The FFT implementation that I have looks like I may have it reversed, or they have some shifting going on. It looks to be what the byte is used for, though.
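For concreteness, the offset-subtraction decoding being discussed looks roughly like this. The 106/105 gyro offsets come from the commits above and data[31] from this comment; the gyro byte positions (29/30) are an assumption here, not confirmed by this thread:

```python
# Sketch of the decoding discussed in this thread. Offsets per the
# commit messages above; data[31] per this comment (hypothesized
# FFT-related channel). Byte positions 29/30 are assumed.
def decode_motion(data):
    gyro_x = data[29] - 106
    gyro_y = data[30] - 105
    extra  = data[31] - 104  # possibly FFT data, mirroring the gyro scheme
    return gyro_x, gyro_y, extra
```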

I am almost done with the first iteration of pattern recognition, although I'm not sure if I should release it if I get it to actually work. I read the thread about hacking headsets on their forums today. I bought this product with the intent to see its capabilities/quality; if it could accomplish what I wanted, I would then buy the dev license, but from what I read on the forums that doesn't give you raw access to the data either. So what's the point? My headset is falling apart after only a few weeks, and I have been gentle.

If I make an application that works as well as or better than their control panel apps, should I release it? Do you think they would go after me for doing so?


@ozancaglayan

Well, as far as I know the FFT is not computed on the hardware; it's part of their SDK and control panel, but I am not totally sure.

The Education and Research SDKs give you raw data. I don't remember about the Dev one. But on the dongle side they are all the same, since you can read raw EEG from your consumer kit, right?

I am really not sure about the legal stuff you've asked, but apparently they didn't go after the emokit people. And if they decide to do that, releasing it as a GUI application or a Python library won't make any difference. They will only care about the reverse-engineered protocol spec, which is the only thing needed to implement what we are doing.

@ghost

ghost commented Dec 27, 2012

I only thought of that because of this response to a question in the forums:

At 128Hz sampling you are absolutely limited to 64Hz as the maximum frequency you can measure. Higher frequencies will also be measurable in a BAD way - by wrapping around and aliasing back into the 0-64Hz range, so for example if you have a lot of electrical mains interference in a 60Hz country, you will see a spike at 60Hz, another at 8Hz (128Hz - 120Hz, first harmonic), another at 52Hz (180Hz - 128Hz, second harmonic) and so on. Luckily our filtering fixes most of this before we convert to 128Hz, but the cost is that the effective bandwidth of EPOC runs up to about 43-45Hz because of the 50 + 60Hz notch filtering we apply.

Which leads me to believe it is being transmitted from the headset. If the value was calculated from the other sensor values after the fact, they probably wouldn't be limited by 128Hz.
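The folding arithmetic in the quoted post is easy to check; a tiny helper reproduces the 60/8/52Hz figures at fs = 128Hz:

```python
# Fold a frequency into the 0..fs/2 range, as in the quoted post.
def alias(freq, fs=128.0):
    f = freq % fs
    return fs - f if f > fs / 2 else f

for f in (60.0, 120.0, 180.0):  # mains fundamental and harmonics
    print(f, '->', alias(f))    # 60 -> 60, 120 -> 8, 180 -> 52
```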


@ghost

ghost commented Dec 27, 2012

Maybe you are right; I also see this in the same post:
The low frequency component is an artefact of the FFT sample length. FFT assumes a continuously repeating version of your sample, so if there are any bumps in your background (for example) these will show up as oscillations at 1/sample length. If you are interested in lower frequencies, you should use a sample length many times greater than the period of the minimum frequency - which also means you should discount frequencies with periods longer than a fraction of your sample length. You won't see useful 1Hz output if your sample is 1 second long! It probably starts to get meaningful around 5-8Hz. Use longer samples if you want to see lower frequencies.

I also note you aren't using a sample window function. The FFT stitches your sample together end to end, and if there is a numerical mismatch between the last sample and the first one, the FFT sees a step function at the join, and I don't need to tell you about the effects of a step function in a Fourier transform... You should apply a tapered window function which scales the ends of the sample to zero and does not rescale the middle of the sample. Try Hanning or Hamming windows - these are designed to taper smoothly and with minimal effect on higher frequencies.
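A minimal sketch of that advice, assuming NumPy is available (the function name here is ours): subtract the mean, taper with a Hann window, then take the FFT:

```python
# Window the sample before the FFT so the ends meet smoothly,
# per the quoted advice. NumPy assumed; helper name is hypothetical.
import numpy as np

def windowed_spectrum(samples, fs=128.0):
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()            # remove the DC offset first
    x = x * np.hanning(len(x))  # Hann taper: zero at both ends
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(x))
```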


@ozancaglayan

@bschumacher are you able to get contact quality readings with your code? I always get 0 with mine.

@ozancaglayan

@bschumacher ah, I see. If you don't put an electrode in the P3 position (not annotated as a data channel anywhere), you do not get any data or quality readings. It seems that's where the signals are referenced.
